Veritas InfoScale™ 7.2 Release Notes - Linux
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Veritas Services and Operations Readiness Tools
- Changes introduced in 7.2
- Changes related to Veritas Cluster Server
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to Replication
- System requirements
- Fixed issues
- Known issues
- Issues related to installation and upgrade
- Issues related to Veritas InfoScale Storage in Amazon Web Services cloud environments
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Virtualization known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Storage Foundation for Sybase ASE CE known issues
- Application isolation feature known issues
- Software limitations
- Virtualization software limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to LLT
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
- Documentation
Technology Preview: Erasure coding in Veritas InfoScale storage environments
Erasure coding is a new feature available as a technology preview in Veritas InfoScale, for configuration and testing in non-production environments only. It is supported in DAS, SAN, FSS, and standalone environments.
Erasure coding provides robust redundancy and fault tolerance for critical storage archives. Data is broken into fragments, which are expanded, encoded with redundant parity pieces, and stored across different locations or storage media. When one or more disks fail, the data on the failed disks is reconstructed using the parity information together with the data on the surviving disks.
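The encode-and-reconstruct cycle described above can be sketched with a minimal, illustrative example. This is not the InfoScale implementation; it uses simple single-parity XOR encoding (analogous to RAID-5), whereas production erasure codes such as Reed-Solomon tolerate multiple simultaneous failures. The fragment layout and names here are assumptions for illustration only.

```python
# Minimal sketch of the erasure-coding principle: data is split into
# fragments, a redundant parity fragment is computed, and a lost
# fragment is rebuilt from the survivors plus the parity.

def encode(fragments):
    """Compute an XOR parity fragment over equal-length data fragments."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving, parity):
    """Rebuild a single missing fragment from survivors plus parity."""
    missing = bytearray(parity)
    for frag in surviving:
        for i, b in enumerate(frag):
            missing[i] ^= b
    return bytes(missing)

# Data split into three fragments, each stored on a different disk or node.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = encode(data)

# The disk holding data[1] fails; recover its contents from the rest.
recovered = reconstruct([data[0], data[2]], parity)
assert recovered == b"BBBB"
```

Because XOR is its own inverse, XOR-ing the parity with every surviving fragment cancels their contributions and leaves exactly the missing fragment.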
Erasure coding can be used to provide fault tolerance against disk failures in single node (DAS/SAN) or shared cluster (SAN) setups where all nodes share the same storage. In such environments, erasure coded volumes are configured across a set of independent disks.
In FSS distributed environments, where storage is directly attached to each node, erasure coded volumes provide fault tolerance against node failures. You can create erasure coded volumes using storage from multiple nodes, so that the encoded data fragments are distributed across nodes for redundancy.
For more information, see the Veritas InfoScale storage administration guides.