Veritas InfoScale™ 7.1 Release Notes - Linux
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Veritas Services and Operations Readiness Tools
- Changes introduced in 7.1
- Changes related to Veritas Cluster Server
- Changes in the Veritas Cluster Server Engine
- Changes related to installation and upgrades
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to Dynamic Multi-Pathing
- Changes related to Replication
- Not supported in this release
- Technology Preview: Application isolation in CVM environments with disk group sub-clustering
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Virtualization known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Storage Foundation for Sybase ASE CE known issues
- Software Limitations
- Virtualization software limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Storage Foundation for Sybase ASE CE software limitations
- Documentation
Enabling the application isolation feature in CVM environments
Enabling the application isolation feature involves taking applications offline, which results in application downtime. The procedure updates the VCS configuration to set the CVMDGSubClust attribute of the CVMCluster resource to 1. The change is persistent across cluster reboots.
The VCS port 'h' must be active before you enable the application isolation feature.
To enable the application isolation feature
- Verify that the GAB port h is active.
# hastatus
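Port h membership can also be checked directly at the GAB level. As an illustrative alternative (not part of the documented procedure), list the active GAB ports:
# gabconfig -a
The output should include a 'Port h' membership line that lists every node in the cluster.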
- Take the CVM group offline on all nodes in the cluster:
# hagrp -offline cvm -sys sys_name
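For example, on a two-node cluster with the nodes sys1 and sys2 used elsewhere in this section, run the command once for each node:
# hagrp -offline cvm -sys sys1
# hagrp -offline cvm -sys sys2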
- Enable the CVMDGSubClust attribute of the CVMCluster resource:
# haconf -makerw
# hares -modify cvm_clus CVMDGSubClust 1
# haconf -dump -makero
- Bring the CVM group online on all nodes:
# hagrp -online cvm -sys sys_name
The application isolation capability is enabled and ready for use after the CVM group is brought online on all nodes.
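For example, on the same two-node cluster, bring the group online on each node; the hagrp -state check shown last is an optional, illustrative way to confirm that the group is online everywhere:
# hagrp -online cvm -sys sys1
# hagrp -online cvm -sys sys2
# hagrp -state cvm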
- Import the disk groups to nodes in the cluster.
Because the disk groups are not auto-imported on any node in the cluster, you must configure VCS to import the disk groups on the required nodes. Use one of the following commands, depending on your needs:
- cfsdgadm: Use this option if you are using a block device.
- cfsmntadm: Use this option if you plan to create a cluster file system on the block device.
# cfsdgadm add dgname sys1 sys2
This will update the VCS configuration file as follows:
group vrts_vea_cfs_int_cvmvoldg2 (
        SystemList = { sys1 = 5, sys2 = 6 }
        AutoFailOver = 0
        Parallel = 1
        AutoStartList = { sys1, sys2 }
        )

        CVMVolDg cvmvoldg2 (
                Critical = 0
                CVMDiskGroup = bdg
                CVMActivation @sys1 = sw
                CVMActivation @sys2 = sw
                NodeList = { sys1, sys2 }
                )

        requires group cvm online local firm

        // resource dependency tree
        //
        //      group vrts_vea_cfs_int_cvmvoldg2
        //      {
        //      CVMVolDg cvmvoldg2
        //      }
# cfsmntadm add dgname volname /mnt1 \
sys1=cluster sys2=cluster
This will update the VCS configuration file as follows:
group vrts_vea_cfs_int_cfsmount1 (
        SystemList = { sys1 = 5, sys2 = 7 }
        AutoFailOver = 0
        Parallel = 1
        AutoStartList = { sys1, sys2 }
        )

        CFSMount cfsmount1 (
                Critical = 0
                MountPoint = "/mnt1"
                BlockDevice = "/dev/vx/dsk/adg/avol1"
                MountOpt @sys1 = "cluster"
                MountOpt @sys2 = "cluster"
                NodeList = { sys1, sys2 }
                )

        CVMVolDg cvmvoldg1 (
                Critical = 0
                CVMDiskGroup = adg
                CVMVolume = { avol1 }
                CVMActivation @sys1 = sw
                CVMActivation @sys2 = sw
                NodeList = { sys1, sys2 }
                )

        requires group cvm online local firm
        cfsmount1 requires cvmvoldg1

        // resource dependency tree
        //
        //      group vrts_vea_cfs_int_cfsmount1
        //      {
        //      CFSMount cfsmount1
        //          {
        //          CVMVolDg cvmvoldg1
        //          }
        //      }
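After cfsmntadm adds the cfsmount1 resource shown above, the cluster file system can be mounted on the listed nodes. A minimal sketch, assuming the /mnt1 mount point from the example:
# cfsmount /mnt1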
- Verify that the application isolation feature is enabled:
# hares -display cvm_clus | grep CVMDGSubClust
cvm_clus   CVMDGSubClust   global   1
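Because shared disk groups are not auto-imported on any node when this feature is enabled, you may also want to confirm on which nodes a given disk group is imported. A minimal sketch, run on each node of interest, where vxdg list reports the disk groups currently imported on that node:
# vxdg list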