Veritas InfoScale™ 7.1 Release Notes - Linux
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Veritas Services and Operations Readiness Tools
- Changes introduced in 7.1
- Changes related to Veritas Cluster Server
- Changes in the Veritas Cluster Server Engine
- Changes related to installation and upgrades
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to Dynamic Multi-Pathing
- Changes related to Replication
- Not supported in this release
- Technology Preview: Application isolation in CVM environments with disk group sub-clustering
- Changes related to Veritas Cluster Server
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Virtualization known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Storage Foundation for Sybase ASE CE known issues
- Software Limitations
- Virtualization software limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
- Documentation
Behavioral changes in a disk group sub-cluster
Some operations in a disk group sub-cluster behave differently than they do in traditional CVM environments.
Table: Behavioral changes in a disk group sub-cluster lists these changes for the affected operations.
Table: Behavioral changes in a disk group sub-cluster
| Operation | Behavioral change |
|---|---|
| Auto-import of shared disk groups | Shared disk groups are not imported by default when the CVM cluster starts. They must be manually imported on the nodes. The CVM cluster starts successfully even if there is no storage for some disk groups in the cluster. |
| Adding or deleting shared disk groups to and from a cluster configuration (cfsdgadm) | Shared disk groups can be auto-imported on some nodes in the cluster. The disk groups that are required for a cluster file system environment are automatically imported by VCS when the cluster starts. |
| Creating shared disk groups | When you create a shared disk group, it is imported only on the node on which you run the command. |
| Importing shared disk groups | Importing a shared disk group using the vxdg -s import command imports the disk group only on the node on which you run the command. The nodes that import the same disk group become part of the sub-cluster for that disk group. |
| Deporting shared disk groups | Deporting a shared disk group using the vxdg deport command deports the disk group only on the node on which you run the command. The node that deports a shared disk group leaves the disk group sub-cluster and may initiate a recovery in the disk group sub-cluster. |
| CVM master and disk group master | Each disk group sub-cluster has a disk group master that handles the VxVM configuration changes. All disk group level operations run on the disk group master node. The disk group master node can be switched to any node in the disk group sub-cluster. A node can be configured as the disk group master for multiple disk group sub-clusters. |
| FSS | VxVM auto-exports DAS storage from a cluster node to other nodes in the cluster when the FSS disk group is created. You can also manually export the storage before creating the FSS disk group using the vxdisk export command. If the storage is already exported, VxVM skips the auto-export operation. The FSS disk group can be imported on any node and may utilize the storage from outside the disk group sub-cluster. A node can export its DAS storage to multiple disk group sub-clusters. |
| Command shipping within the disk group sub-cluster | Disk group operations must be run on the disk group master node. All commands other than vxdg that are run from the disk group sub-cluster slave nodes are shipped to the disk group sub-cluster master node. Disk group operations that run outside the disk group sub-cluster are not supported and fail. |
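The import, deport, and export behaviors above map to standard VxVM commands. The following is a minimal sketch of one possible sub-cluster workflow; the disk group and disk names (fssdg, disk_1) are hypothetical, and the commands must be run as root on nodes with InfoScale installed:

```shell
# All names below (fssdg, disk_1, fssdg01) are hypothetical examples.

# Node A: optionally export local DAS storage before creating an FSS
# disk group. If you skip this step, VxVM auto-exports the storage
# when the FSS disk group is created.
vxdisk export disk_1

# Node A: create a shared disk group. It is imported only on this node.
vxdg -s init fssdg fssdg01=disk_1

# Node B: join the disk group sub-cluster by importing the same group.
vxdg -s import fssdg

# Node B: leave the sub-cluster by deporting the group locally. This
# may trigger a recovery on the remaining sub-cluster nodes.
vxdg deport fssdg
```

Note that because commands issued from sub-cluster slave nodes are shipped to the disk group master, subsequent disk group operations can be run from any node in the sub-cluster.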