Veritas InfoScale™ 7.1 Release Notes - Linux
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Veritas Services and Operations Readiness Tools
- Changes introduced in 7.1
- Changes related to Veritas Cluster Server
- Changes in the Veritas Cluster Server Engine
- Changes related to installation and upgrades
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to Dynamic Multi-Pathing
- Changes related to Replication
- Not supported in this release
- Technology Preview: Application isolation in CVM environments with disk group sub-clustering
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Virtualization known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Storage Foundation for Sybase ASE CE known issues
- Software Limitations
- Virtualization software limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
- Documentation
Technology Preview: Application isolation in CVM environments with disk group sub-clustering
Veritas InfoScale introduces a technology preview of the application isolation feature. This is an early access initiative intended solely for non-production test environments.
Veritas InfoScale supports application isolation in a CVM cluster through the creation of disk group sub-clusters. A disk group sub-cluster consists of a logical grouping of nodes that can selectively import or deport shared disk groups. The shared disk groups are not imported or deported on all nodes in the cluster as in the traditional CVM environment. This minimizes the impact of node failures or configuration changes on applications in the cluster.
You can enable the application isolation feature by setting the CVMDGSubClust attribute of the CVMCluster resource in the VCS configuration file. When the cluster restarts, the feature takes effect and shared disk groups are no longer auto-imported on all nodes in the cluster. The first node that imports a shared disk group forms a disk group sub-cluster and is elected as the disk group master for that sub-cluster; the remaining nodes that import the disk group join it as slaves. All disk group level operations run on the master node of the disk group sub-cluster. You can switch the master of each disk group sub-cluster at any time. A node can be the master of one sub-cluster and a slave in another.
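For example, the following is a minimal sketch of enabling the feature. Here cvm_clus is the conventional name of the CVMCluster resource and the value 1 is assumed to enable the attribute; verify both against your own configuration and the product documentation:

    # Make the VCS configuration writable
    haconf -makerw

    # Set the CVMDGSubClust attribute on the CVMCluster resource
    hares -modify cvm_clus CVMDGSubClust 1

    # Save the configuration and make it read-only again
    haconf -dump -makero

After the cluster restarts with the feature enabled, running a shared import such as vxdg -s import appdg1 on only the nodes that need that disk group forms a sub-cluster for it, with the first importing node elected master. (appdg1 is a hypothetical disk group name, and the assumption is that the standard shared-import syntax applies per node in this mode.)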
If a node loses connectivity to the SAN, the I/Os for that node are shipped to another node in the disk group sub-cluster, just as in traditional CVM environments. If I/Os fail on all disks in a disk group, the disk group is disabled, and the nodes that share it must deport and then re-import it.
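In that case, recovery would look along these lines. This is a sketch that assumes the standard vxdg and vxrecover commands apply on each node of the sub-cluster; appdg1 is again a hypothetical disk group name:

    # On each node that shares the disabled disk group, deport it
    vxdg deport appdg1

    # Once storage connectivity is restored, import the shared disk group again
    vxdg -s import appdg1

    # Recover the plexes and start the volumes in the disk group
    vxrecover -g appdg1 -sb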
A node can belong to multiple disk group sub-clusters. Each disk group sub-cluster provides all the capabilities of a clustered Veritas Volume Manager environment, with a few exceptions.
The following CVM features are not available in a disk group sub-cluster:
- Rolling upgrade
- Operations that span multiple disk groups, such as link-based snapshots and the splitting and joining of disk groups
- Cluster Volume Replication
The application isolation feature is supported with CVM protocol version 160 and later. The feature is disabled by default, both after a fresh installation and after an upgrade.
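To confirm that a cluster meets the protocol requirement, you can check the running protocol version and, if the cluster was upgraded from an older release, raise it. This sketch uses the standard vxdctl commands, run on the CVM master:

    # Display the current cluster protocol version
    vxdctl protocolversion

    # After all nodes run the new release, upgrade the cluster
    # protocol version to the highest commonly supported version
    vxdctl upgrade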
Figure: Disk group sub-clustering for the application isolation feature illustrates the following:
- Selective import of disk groups, which creates disk group sub-clusters
- A configuration that includes both SAN and DAS storage
- Node 6 as a storage-only node that exports its DAS storage to multiple disk group sub-clusters
- A node playing multiple roles: master for one sub-cluster and slave for another
Figure: Failure management within disk group sub-clusters illustrates how failures are handled within disk group sub-clusters.