Veritas InfoScale™ 7.3.1 Troubleshooting Guide - Solaris
- Introduction
- Section I. Troubleshooting Veritas File System
- Section II. Troubleshooting Veritas Volume Manager
- Recovering from hardware failure
- About recovery from hardware failure
- Listing unstartable volumes
- Displaying volume and plex states
- The plex state cycle
- Recovering an unstartable mirrored volume
- Recovering an unstartable volume with a disabled plex in the RECOVER state
- Forcibly restarting a disabled volume
- Clearing the failing flag on a disk
- Reattaching failed disks
- Recovering from a failed plex attach or synchronization operation
- Failures on RAID-5 volumes
- Recovering from an incomplete disk group move
- Restarting volumes after recovery when some nodes in the cluster become unavailable
- Recovery from failure of a DCO volume
 
- Recovering from instant snapshot failure
- Recovering from the failure of vxsnap prepare
- Recovering from the failure of vxsnap make for full-sized instant snapshots
- Recovering from the failure of vxsnap make for break-off instant snapshots
- Recovering from the failure of vxsnap make for space-optimized instant snapshots
- Recovering from the failure of vxsnap restore
- Recovering from the failure of vxsnap refresh
- Recovering from copy-on-write failure
- Recovering from I/O errors during resynchronization
- Recovering from I/O failure on a DCO volume
- Recovering from failure of vxsnap upgrade of instant snap data change objects (DCOs)
 
- Recovering from failed vxresize operation
- Recovering from boot disk failure
- VxVM and boot disk failure
- Possible root, swap, and usr configurations
- Booting from an alternate boot disk on Solaris SPARC systems
- The boot process on Solaris SPARC systems
- Hot-relocation and boot disk failure
- Recovery from boot failure
- Repair of root or /usr file systems on mirrored volumes
- Replacement of boot disks
- Recovery by reinstallation
 
- Managing commands, tasks, and transactions
- Backing up and restoring disk group configurations
- Troubleshooting issues with importing disk groups
- Recovering from CDS errors
- Logging and error messages
- Troubleshooting Veritas Volume Replicator
- Recovery from RLINK connect problems
- Recovery from configuration errors
- Errors during an RLINK attach
- Errors during modification of an RVG
 
- Recovery on the Primary or Secondary
- About recovery from a Primary-host crash
- Recovering from Primary data volume error
- Primary SRL volume error cleanup and restart
- Primary SRL volume error at reboot
- Primary SRL volume overflow recovery
- Primary SRL header error cleanup and recovery
- Secondary data volume error cleanup and recovery
- Secondary SRL volume error cleanup and recovery
- Secondary SRL header error cleanup and recovery
- Secondary SRL header error at reboot
 
 
- Troubleshooting issues in cloud deployments
 
- Section III. Troubleshooting Dynamic Multi-Pathing
- Section IV. Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting Storage Foundation Cluster File System High Availability
- About troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting CFS
- Troubleshooting fenced configurations
- Troubleshooting Cluster Volume Manager in Veritas InfoScale products clusters
- CVM group is not online after adding a node to the Veritas InfoScale products cluster
- Shared disk group cannot be imported in Veritas InfoScale products cluster
- Unable to start CVM in Veritas InfoScale products cluster
- Removing preexisting keys
- CVMVolDg not online even though CVMCluster is online in Veritas InfoScale products cluster
- Shared disks not visible in Veritas InfoScale products cluster
 
 
 
- Section V. Troubleshooting Cluster Server
- Troubleshooting and recovery for VCS
- VCS message logging
- Log unification of VCS agent's entry points
- Enhancing First Failure Data Capture (FFDC) to troubleshoot VCS resource's unexpected behavior
- GAB message logging
- Enabling debug logs for agents
- Enabling debug logs for IMF
- Enabling debug logs for the VCS engine
- About debug log tags usage
- Gathering VCS information for support analysis
- Gathering LLT and GAB information for support analysis
- Gathering IMF information for support analysis
- Message catalogs
 
- Troubleshooting the VCS engine
- Troubleshooting Low Latency Transport (LLT)
- Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
- Troubleshooting VCS startup
- Troubleshooting Intelligent Monitoring Framework (IMF)
- Troubleshooting service groups
- VCS does not automatically start service group
- System is not in RUNNING state
- Service group not configured to run on the system
- Service group not configured to autostart
- Service group is frozen
- Failover service group is online on another system
- A critical resource faulted
- Service group autodisabled
- Service group is waiting for the resource to be brought online/taken offline
- Service group is waiting for a dependency to be met
- Service group not fully probed
- Service group does not fail over to the forecasted system
- Service group does not fail over to the BiggestAvailable system even if FailOverPolicy is set to BiggestAvailable
- Restoring metering database from backup taken by VCS
- Initialization of metering database fails
 
- Troubleshooting resources
- Troubleshooting I/O fencing
- Node is unable to join cluster while another node is being ejected
- The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
- Manually removing existing keys from SCSI-3 disks
- System panics to prevent potential data corruption
- Cluster ID on the I/O fencing key of coordinator disk does not match the local cluster's ID
- Fencing startup reports preexisting split-brain
- Registered keys are lost on the coordinator disks
- Replacing defective disks when the cluster is offline
- The vxfenswap utility exits if rcp or scp commands are not functional
- Troubleshooting CP server
- Troubleshooting server-based fencing on the Veritas InfoScale products cluster nodes
- Issues during online migration of coordination points
 
- Troubleshooting notification
- Troubleshooting and recovery for global clusters
- Troubleshooting the steward process
- Troubleshooting licensing
- Validating license keys
- Licensing error messages
- [Licensing] Insufficient memory to perform operation
- [Licensing] No valid VCS license keys were found
- [Licensing] Unable to find a valid base VCS license key
- [Licensing] License key cannot be used on this OS platform
- [Licensing] VCS evaluation period has expired
- [Licensing] License key can not be used on this system
- [Licensing] Unable to initialize the licensing framework
- [Licensing] QuickStart is not supported in this release
- [Licensing] Your evaluation period for the feature has expired. This feature will not be enabled the next time VCS starts
 
 
- Verifying the metered or forecasted values for CPU, Mem, and Swap
 
 
- Section VI. Troubleshooting SFDB
Recovery from failure of a DCO volume
The procedure to recover from the failure of a data change object (DCO) volume depends on the DCO version number.
For information about DCO versioning, see the Storage Foundation Administrator's Guide.
Persistent FastResync uses a DCO volume to track changed regions in a volume. If an error occurs while reading or writing a DCO volume, the DCO volume is detached and the badlog flag is set on the DCO. From that point on, writes to the volume are no longer tracked by the DCO.
The following sample output from the vxprint command shows a complete volume with a detached DCO volume (the TUTIL0 and PUTIL0 fields are omitted for clarity):
TY NAME          ASSOC          KSTATE    LENGTH    PLOFFS  STATE ...
dg mydg          mydg           -         -         -       -
dm mydg01        c4t50d0s2      -         35521408  -       -
dm mydg02        c4t51d0s2      -         35521408  -       -
dm mydg03        c4t52d0s2      -         35521408  -       FAILING
dm mydg04        c4t53d0s2      -         35521408  -       FAILING
dm mydg05        c4t54d0s2      -         35521408  -       -
v  SNAP-vol1     fsgen          ENABLED   204800    -       ACTIVE
pl vol1-03       SNAP-vol1      ENABLED   204800    -       ACTIVE
sd mydg05-01     vol1-03        ENABLED   204800    0       -
dc SNAP-vol1_dco SNAP-vol1      -         -         -       -
v  SNAP-vol1_dcl gen            ENABLED   144       -       ACTIVE
pl vol1_dcl-03   SNAP-vol1_dcl  ENABLED   144       -       ACTIVE
sd mydg05-02     vol1_dcl-03    ENABLED   144       0       -
sp vol1_snp      SNAP-vol1      -         -         -       -
v  vol1          fsgen          ENABLED   204800    -       ACTIVE
pl vol1-01       vol1           ENABLED   204800    -       ACTIVE
sd mydg01-01     vol1-01        ENABLED   204800    0       -
pl vol1-02       vol1           ENABLED   204800    -       ACTIVE
sd mydg02-01     vol1-01        ENABLED   204800    0       -
dc vol1_dco      vol1           -         -         -       BADLOG
v  vol1_dcl      gen            DETACHED  144       -       DETACH
pl vol1_dcl-01   vol1_dcl       ENABLED   144       -       ACTIVE
sd mydg03-01     vol1_dcl-01    ENABLED   144       0       -
pl vol1_dcl-02   vol1_dcl       DETACHED  144       -       IOFAIL
sd mydg04-01     vol1_dcl-02    ENABLED   144       0       RELOCATE
sp SNAP-vol1_snp vol1           -         -         -       -
This output shows the mirrored volume, vol1, its snapshot volume, SNAP-vol1, and their respective DCOs, vol1_dco and SNAP-vol1_dco. The two disks, mydg03 and mydg04, that hold the DCO plexes for the DCO volume, vol1_dcl, of vol1 have failed. As a result, the DCO volume, vol1_dcl, of the volume, vol1, has been detached and the state of vol1_dco has been set to BADLOG. For future reference, note the entries for the snap objects, vol1_snp and SNAP-vol1_snp, that point to vol1 and SNAP-vol1 respectively.
You can use such output to deduce the name of a volume's DCO (in this example, vol1_dco), or you can use the following vxprint command to display the name of a volume's DCO:
# vxprint [-g diskgroup] -F%dco_name volume
You can use the vxprint command to check if the badlog flag is set for the DCO of a volume as shown here:
# vxprint [-g diskgroup] -F%badlog dco_name
For example:
# vxprint -g mydg -F%badlog vol1_dco
on
In this example, the command returns the value on, indicating that the badlog flag is set.
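If you are working from saved vxprint output rather than a live system, the same check can be scripted. The following sketch is not part of the product: it simply filters dc (DCO) records for the BADLOG state with awk, and the two sample lines are copied from the listing above. On a live system you would pipe the output of vxprint -g diskgroup into the same filter.

```shell
# Filter saved vxprint output for DCO (dc) records in the BADLOG state.
# Sample lines copied from the listing above; on a live system, pipe
# `vxprint -g mydg` into the same awk filter instead.
vxprint_out='dc SNAP-vol1_dco SNAP-vol1 - - - -
dc vol1_dco vol1 - - - BADLOG'

# Print the name (field 2) of any dc record whose last field is BADLOG.
printf '%s\n' "$vxprint_out" |
    awk '$1 == "dc" && $NF == "BADLOG" { print $2 }'
# prints: vol1_dco
```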
Use the following command to verify the version number of the DCO:
# vxprint [-g diskgroup] -F%version dco_name
For example:
# vxprint -g mydg -F%version vol1_dco
The command returns a value of 0, 20, or 30. The DCO version number determines the recovery procedure that you should use.
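Once the version number is known, a recovery script can branch on it. The following is a minimal sketch, not a product procedure: the version value is hard-coded for illustration, and on a live system it would come from the vxprint -F%version command shown above.

```shell
# Branch on the DCO version number returned by vxprint -F%version.
# The value is hard-coded here for illustration; on a live system use:
#   version=$(vxprint -g mydg -F%version vol1_dco)
version=20

case "$version" in
    0)     echo "version 0 DCO: follow the version 0 recovery procedure" ;;
    20|30) echo "instant snap DCO: follow the version $version recovery procedure" ;;
    *)     echo "unexpected DCO version: $version" >&2; exit 1 ;;
esac
# prints: instant snap DCO: follow the version 20 recovery procedure
```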