Veritas InfoScale™ 7.1 Release Notes - Linux
Last Published: 2018-01-09
Product(s): InfoScale & Storage Foundation (7.1)
Changing the disk group master manually
You can manually change the disk group master from one node in the sub-cluster to another while the sub-cluster remains online. CVM migrates the master role to the new node and reconfigures the sub-cluster.
To change the disk group master manually
- To view the current master, run the following command:
# vxdg nidmap
Nidmap of DG cluster dg1
Name    CVM Nid    CM Nid    State
sys1    0          0         Joined: Master
sys2    1          1         Joined: Slave
sys3    2          8         Joined: Slave
sys4    3          10        Joined: Slave

Nidmap of DG cluster dg2
Name    CVM Nid    CM Nid    State
sys3    1          0         Joined: Slave
sys2    0          1         Joined: Master

Nidmap of DG cluster dg3
Name    CVM Nid    CM Nid    State
sys1    1          0         Joined: Slave
sys4    0          8         Joined: Master
sys3    2          10        Joined: Slave
- From any node in the disk group sub-cluster, run the following command to change the master:
# vxdg -g dgname setmaster nodename
where nodename specifies the name of the new disk group master.
The following example changes the master of disk group dg2 from sys2 to sys3:
# vxdg -g dg2 setmaster sys3
- Verify that the master has changed successfully:
# vxdg nidmap dg2
Nidmap of DG cluster dg2
Name    CVM Nid    CM Nid    State
sys3    1          0         Joined: Master
sys2    0          1         Joined: Slave
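If you migrate the master regularly, the change-and-verify steps above can be scripted. The following is a minimal sketch, not part of the product: it uses only the vxdg setmaster and vxdg nidmap commands shown in this procedure plus standard shell tools, and the dg2/sys3 values are placeholders taken from the example.

    #!/bin/sh
    # Hypothetical helper (not shipped with InfoScale): request a master
    # change for a disk group sub-cluster and poll until the new master
    # is reported. DG and NEWMASTER are placeholders from the example.
    DG=dg2
    NEWMASTER=sys3

    # Step: request the master change from any node in the sub-cluster
    # (assumes vxdg exits nonzero on failure).
    vxdg -g "$DG" setmaster "$NEWMASTER" || exit 1

    # Verify: CVM needs a moment to reconfigure the sub-cluster, so poll
    # the nidmap output for a line marking the new node as master.
    for i in 1 2 3 4 5; do
        if vxdg nidmap "$DG" | grep -q "^$NEWMASTER[[:space:]].*Joined: Master"; then
            echo "$NEWMASTER is now master of $DG"
            exit 0
        fi
        sleep 2
    done
    echo "master change to $NEWMASTER not observed" >&2
    exit 1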