Veritas InfoScale™ 7.2 Release Notes - Linux
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Veritas Services and Operations Readiness Tools
- Changes introduced in 7.2
- Changes related to Veritas Cluster Server
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to Replication
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Issues related to Veritas InfoScale Storage in Amazon Web Services cloud environments
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Virtualization known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Storage Foundation for Sybase ASE CE known issues
- Application isolation feature known issues
- Software Limitations
- Virtualization software limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to LLT
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
- Documentation
Hot-relocation in FSS environments
In FSS environments, hot-relocation uses a policy-based mechanism to heal storage failures. Storage failures include disk media failures and node failures that render storage inaccessible. The mechanism uses tunables to determine how long VxVM waits for the failed storage to come back online before it initiates hot-relocation. If the storage does not come online within the specified time interval, VxVM relocates the data from the failed disk.
VxVM uses the following tunables:
storage_reloc_timeout    Specifies the time interval, in minutes, after which VxVM initiates hot-relocation when storage fails.
node_reloc_timeout       Specifies the time interval, in minutes, after which VxVM initiates hot-relocation when a node fails.
The default value for both tunables is 30 minutes. You can modify the tunable values to suit your business needs. In the current implementation, VxVM does not differentiate between disk media failures and node failures. As a result, both tunables always have the same value. For example, if you set the value of the storage_reloc_timeout tunable to 15, VxVM also sets the value of the node_reloc_timeout tunable to 15. Similarly, if you set the node_reloc_timeout tunable to a specific value, VxVM sets the same value for the storage_reloc_timeout tunable. Use the vxtune command to view or update the tunable settings.
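For example, the following commands sketch how you might view and change the timeout with vxtune, run as root on a cluster node. The value of 15 minutes is only illustrative; verify the exact syntax and output for your release against the vxtune(1M) manual page.

To display the current value:
# vxtune storage_reloc_timeout

To set both timeouts to 15 minutes (setting either tunable updates the other):
# vxtune storage_reloc_timeout 15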
The hot-relocation process differs slightly in DAS environments as compared to shared storage environments. When a DAS disk fails, VxVM attempts to relocate the data volume along with its associated DCO volume (even if the DCO has not failed) to another disk on the same node for performance reasons. During relocation, VxVM gives first preference to available spare disks; if no spare disks are available, VxVM looks for eligible free space.
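As an illustration of the spare-disk preference, the following commands mark a disk as a hot-relocation spare. The disk group name fssdg and the disk name disk01 are hypothetical; confirm the attribute against the vxedit(1M) manual page for your release.

To designate disk01 in the disk group fssdg as a spare:
# vxedit -g fssdg set spare=on disk01

To remove the spare designation:
# vxedit -g fssdg set spare=off disk01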
Hot-relocation in FSS environments is supported only on new disk groups created with disk group version 230. Existing disk groups cannot be used for relocation.
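For instance, you can confirm that a disk group supports this feature by checking its version. The disk group name fssdg is hypothetical, and the output layout may differ by release.

# vxdg list fssdg

In the output, verify that the version field shows 230 or later.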
For more information, see the InfoScale storage administration guides.