"Error saving vxprint file" reported when importing a Cluster Volume Manager disk group in a Veritas Storage Foundation (tm) for Oracle RAC, Storage Foundation Cluster File System

Article: 100017031
Last Published: 2021-12-14
Product(s): InfoScale & Storage Foundation

Problem

Importing a Cluster Volume Manager disk group in a Veritas Storage Foundation (tm) for Oracle RAC or Storage Foundation Cluster File System environment fails with the error message "Error saving vxprint file".

 

Error Message

VCS ERROR V-16-10001-1044 (node1) CVMVolDg:OraSrvm_dg:online: Error saving vxprint file
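
The message is reported by the online entry point of the CVMVolDg agent. To locate the failure on the affected node, search the Cluster Server engine log (the path below is the standard VCS engine log location):

# grep "V-16-10001-1044" /var/VRTSvcs/log/engine_A.log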

 

Solution

Summary:

In certain circumstances, Cluster Volume Manager fails to auto-import the shared disk group. This occurs when the disk group is onlined on the slave Cluster Volume Manager node after it was recently onlined and offlined on the master node while the slave node was offline. If the slave node comes online while the master node is still online, the slave node obtains the sequence number (seqno) from the master node and no issue is seen.

This issue occurs when the last master node to deport the Cluster Volume Manager disk group updates the "seqno" for the disk group and flushes this configuration information to disk on deport.
 
When the other node, which was the slave the last time the disk group was imported, then tries to import the disk group immediately, it does so using stale in-memory information about the disk group's seqno retained from that earlier import.
 
This configuration mismatch causes the import to fail. The workaround involves "refreshing" the in-memory copy of the disk group configuration from the on-disk copy by running the command vxdisk -o alldgs list or vxdisk -a online before onlining the disk group.
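
As a quick check, the configuration seqno can be inspected on a node where the disk group is currently imported. The disk group name OraSrvm_dg is taken from the error message above; substitute the actual shared disk group name:

# vxdg list OraSrvm_dg | grep -i seqno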


Scenario:

Initially, node0 is the master, node1 is the slave

1. Online Cluster Volume Manager on both nodes:
# vxclustadm -m vcs -t gab startnode    (on node0 and node1)

2. Offline Cluster Volume Manager on both nodes:
# vxclustadm stopnode    (on node0 and node1)

3. Online Cluster Volume Manager on node0:
# vxclustadm -m vcs -t gab startnode    (on node0)

4. Offline Cluster Volume Manager on node0:
# vxclustadm stopnode    (on node0)

5. Online Cluster Volume Manager on node1 -> this will fail to auto-import the shared disk group:
# vxclustadm -m vcs -t gab startnode    (on node1)

The result is that the online of the disk group fails, requiring administrator intervention to clear the fault.
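
While reproducing this sequence, the master or slave role of each node can be confirmed by running vxdctl on that node; the output below is illustrative:

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: node0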

Workaround:
============================
Before onlining a Cluster Volume Manager disk group using vxclustadm, or when onlining the cvm_clust resource under Veritas Cluster Server, run the following commands to rescan the disks so that the import can succeed:
 
1. Resync the Veritas Volume Manager objects on-disk and in-memory:
# vxdisk -o alldgs list
or
# vxdisk -a online
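
Illustrative output from vxdisk -o alldgs list (device names will vary by platform); a shared disk group that is not currently imported on the node is shown in parentheses:

# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP         STATUS
c1t0d0s2     auto:cdsdisk    -            (OraSrvm_dg)  online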
 

2. Import the disk group using the Cluster Volume Manager command vxclustadm or the Cluster Server command hares:
# vxclustadm -m vcs -t gab startnode
or
# hares -online cvm -sys node1
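
To confirm that the node has rejoined the cluster after running vxclustadm, the cluster node map can be listed; the output below is illustrative:

# vxclustadm nidmap
Name                             CVM Nid    CM Nid     State
node0                            0          0          Joined: Master
node1                            1          1          Joined: Slave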

Note: Under Veritas Cluster Server, if the Cluster Volume Manager service group has faulted after its default 60-second timeout period, this Cluster Server fault must be cleared and the associated group re-onlined.
Example:
# hagrp -clear cvm
# hagrp -online cvm -sys node1
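
The state of the cvm service group can then be verified; the output below is illustrative:

# hagrp -state cvm
#Group       Attribute             System     Value
cvm          State                 node0      |ONLINE|
cvm          State                 node1      |ONLINE|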
 
 

 
