Veritas InfoScale™ 7.3.1 Release Notes - Linux
- Introduction
- Changes introduced in 7.3.1
  - Changes related to installation and upgrades
  - Changes related to the Cluster Server engine
  - Changes related to Cluster Server agents
  - Changes related to InfoScale in cloud environments
  - Changes related to Veritas Volume Manager
  - Changes related to Veritas File System
  - Changes related to replication
  - Changes related to Dynamic Multipathing
- System requirements
- Fixed Issues
- Known Issues
  - Issues related to installation and upgrade
  - Issues related to Veritas InfoScale Storage in Amazon Web Services cloud environments
  - Storage Foundation known issues
    - Dynamic Multi-Pathing known issues
    - Veritas Volume Manager known issues
    - Virtualization known issues
    - Veritas File System known issues
  - Replication known issues
  - Cluster Server known issues
    - Operational issues for VCS
    - Issues related to the VCS engine
    - Issues related to the bundled agents
    - Issues related to the VCS database agents
    - Issues related to the agent framework
    - Cluster Server agents for Volume Replicator known issues
    - Issues related to Intelligent Monitoring Framework (IMF)
    - Issues related to global clusters
    - Issues related to the Cluster Manager (Java Console)
    - VCS Cluster Configuration wizard issues
    - LLT known issues
    - I/O fencing known issues
  - Storage Foundation and High Availability known issues
  - Storage Foundation Cluster File System High Availability known issues
  - Storage Foundation for Oracle RAC known issues
    - Oracle RAC known issues
    - Storage Foundation Oracle RAC issues
  - Storage Foundation for Databases (SFDB) tools known issues
  - Storage Foundation for Sybase ASE CE known issues
  - Application isolation feature known issues
  - Cloud deployment known issues
- Software Limitations
  - Virtualization software limitations
  - Storage Foundation software limitations
    - Dynamic Multi-Pathing software limitations
    - Veritas Volume Manager software limitations
    - Veritas File System software limitations
    - SmartIO software limitations
  - Replication software limitations
  - Cluster Server software limitations
    - Limitations related to bundled agents
    - Limitations related to VCS engine
    - Veritas cluster configuration wizard limitations
    - Limitations related to the VCS database agents
    - Cluster Manager (Java console) limitations
    - Limitations related to LLT
    - Limitations related to I/O fencing
  - Storage Foundation Cluster File System High Availability software limitations
  - Storage Foundation for Oracle RAC software limitations
  - Storage Foundation for Databases (SFDB) tools software limitations
  - Storage Foundation for Sybase ASE CE software limitations
A snapshot volume created on the Secondary that contains a VxFS file system may not mount in read-write mode, and a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail [3761497]
Issue 1:
When the vradmin ibc command is used to take a snapshot of a replicated data volume containing a VxFS file system on the Secondary, mounting the snapshot volume in read-write mode may fail with the following error:
UX:vxfs mount: ERROR: V-3-21268: /dev/vx/dsk/dg/snapshot_volume is corrupted. needs checking
This happens because the file system may not be quiesced before the vradmin ibc command runs, so the snapshot volume that contains the file system may not be fully consistent.
Issue 2:
After a global clustering site failover, mounting a replicated data volume containing a VxFS file system on the new Primary site in read-write mode may fail with the following error:
UX:vxfs mount: ERROR: V-3-21268: /dev/vx/dsk/dg/data_volume is corrupted. needs checking
This usually happens because the file system was not quiesced on the original Primary site before the global clustering site failover, so the file systems on the new Primary site may not be fully consistent.
Workaround: The following workarounds resolve these issues.
For issue 1, run the fsck command on the snapshot volume on the Secondary to restore the consistency of the file system residing on the snapshot.
For example:
# fsck -t vxfs /dev/vx/dsk/dg/snapshot_volume
For issue 2, run the fsck command on the replicated data volumes on the new Primary site to restore the consistency of the file system residing on the data volume.
For example:
# fsck -t vxfs /dev/vx/dsk/dg/data_volume
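Once fsck completes without errors, the file system can be mounted in read-write mode. A minimal sketch of the full sequence for issue 1, assuming a hypothetical mount point /mnt/snap on the Secondary:
# fsck -t vxfs /dev/vx/dsk/dg/snapshot_volume
# mount -t vxfs /dev/vx/dsk/dg/snapshot_volume /mnt/snap
If the default intent-log replay does not clear the V-3-21268 error, running fsck with the -o full option to force a full structural check may be needed before the mount succeeds.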