Veritas InfoScale 7.3.1 Release Notes - Windows
- Release notes for Veritas InfoScale
- Limitations
- Deployment limitations
- Cluster management limitations
- Storage management limitations
- Multi-pathing limitations
- Replication limitations
- Solution configuration limitations
- Internationalization and localization limitations
- Interoperability limitations
- Known issues
- Deployment issues
- Cluster management issues
- Cluster Server (VCS) issues
- Cluster Manager (Java Console) issues
- Global service group issues
- VMware virtual environment-related issues
- Cluster Server (VCS) issues
- Storage management issues
- Storage Foundation
- VEA console issues
- Snapshot and restore issues
- Snapshot scheduling issues
- Multi-pathing issues
- Replication issues
- Solution configuration issues
- Disaster recovery (DR) configuration issues
- Fire drill (FD) configuration issues
- Quick recovery (QR) configuration issues
- Internationalization and localization issues
- Interoperability issues
- Miscellaneous issues
- Fibre Channel adapter issues
- Deployment issues
Global group fails to come online on the DR site with a message that it is in the middle of a group operation
When the node that runs a global group faults, VCS internally sets the MigrateQ attribute for the group and attempts to fail over the global group to another node within the local cluster. The MigrateQ attribute stores the name of the node on which the group was online. If the failover within the cluster does not succeed, VCS clears the MigrateQ attribute for the groups. However, if the groups have dependencies that are more than one level deep, VCS does not clear the MigrateQ attribute for all the groups. (1795151)
This defect causes VCS to incorrectly interpret the group as being in the middle of a failover operation within the local cluster, and it prevents the group from coming online on the DR site. The following message is displayed:
VCS Warning V-16-1-51042 Cannot online group global_group. Group is in the middle of a group operation in cluster local_cluster.
Workaround: Perform the following steps on a node in the local cluster that is in the RUNNING state. A sample command sequence follows the procedure.
To bring the global group online on the DR site
- Check whether the MigrateQ attribute is set for the global group that you want to bring online on the remote cluster. Type the following at the command prompt:
hagrp -display -all -attribute MigrateQ
This command displays the name of the faulted node on which the group was online.
- Flush the global group that you want to bring online on the remote cluster. Type the following at the command prompt:
hagrp -flush global_group -sys faulted_node -clus local_cluster
where:
global_group is the group that you want to bring online on the remote cluster.
faulted_node is the node in the local cluster that hosted the global group and has faulted.
local_cluster is the cluster at the local site.
The flush operation clears the node name from the MigrateQ attribute.
- Bring the service group online on the remote cluster. Type the following at the command prompt:
hagrp -online global_group -any -clus remote_cluster
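For example, assuming a hypothetical global group named GlobalSG1 that was online on the faulted node SysA in the local cluster LocalClus, with a remote cluster named RemoteClus, the complete sequence would look similar to the following:
hagrp -display -all -attribute MigrateQ
hagrp -flush GlobalSG1 -sys SysA -clus LocalClus
hagrp -online GlobalSG1 -any -clus RemoteClus
The group, node, and cluster names in this sequence are placeholders for illustration only; substitute the names from your own configuration.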