Storage Foundation for Sybase ASE CE 7.4 Administrator's Guide - Linux
- Overview of Storage Foundation for Sybase ASE CE
- About Storage Foundation for Sybase ASE CE
- How SF Sybase CE works (high-level perspective)
- About SF Sybase CE components
- About optional features in SF Sybase CE
- How the agent makes Sybase highly available
- About Veritas InfoScale Operations Manager
- Administering SF Sybase CE and its components
- Administering SF Sybase CE
- Setting the environment variables for SF Sybase CE
- Starting or stopping SF Sybase CE on each node
- Applying operating system updates on SF Sybase CE nodes
- Adding storage to an SF Sybase CE cluster
- Recovering from storage failure
- Enhancing the performance of SF Sybase CE clusters
- Verifying the nodes in an SF Sybase CE cluster
- Administering VCS
- Viewing available Veritas device drivers
- Starting and stopping VCS
- Environment variables to start and stop VCS modules
- Adding and removing LLT links
- Configuring aggregated interfaces under LLT
- Displaying the cluster details and LLT version for LLT links
- Configuring destination-based load balancing for LLT
- Enabling and disabling intelligent resource monitoring for agents manually
- Administering the AMF kernel driver
- Administering I/O fencing
- About administering I/O fencing
- About the vxfentsthdw utility
- General guidelines for using the vxfentsthdw utility
- About the vxfentsthdw command options
- Testing the coordinator disk group using the -c option of vxfentsthdw
- Performing non-destructive testing on the disks using the -r option
- Testing the shared disks using the vxfentsthdw -m option
- Testing the shared disks listed in a file using the vxfentsthdw -f option
- Testing all the disks in a disk group using the vxfentsthdw -g option
- Testing a disk with existing keys
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- Enabling or disabling the preferred fencing policy
- About I/O fencing log files
- Administering CVM
- Establishing CVM cluster membership manually
- Changing the CVM master manually
- Importing a shared disk group manually
- Deporting a shared disk group manually
- Verifying if CVM is running in an SF Sybase CE cluster
- Verifying CVM membership state
- Verifying the state of CVM shared disk groups
- Verifying the activation mode
- Administering CFS
- Administering the Sybase agent
- Sybase agent functions
- Monitoring options for the Sybase agent
- Using the IPC Cleanup feature for the Sybase agent
- Configuring the service group Sybase using the command line
- Bringing the Sybase service group online
- Taking the Sybase service group offline
- Modifying the Sybase service group configuration
- Viewing the agent log for Sybase
- Troubleshooting SF Sybase CE
- About troubleshooting SF Sybase CE
- Restarting the installer after a failed network connection
- Installer cannot create UUID for the cluster
- Troubleshooting I/O fencing
- The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
- Node is unable to join cluster while another node is being ejected
- System panics to prevent potential data corruption
- Cluster ID on the I/O fencing key of coordinator disk does not match the local cluster's ID
- Fencing startup reports preexisting split-brain
- Registered keys are lost on the coordinator disks
- Replacing defective disks when the cluster is offline
- Troubleshooting Cluster Volume Manager in SF Sybase CE clusters
- Restoring communication between host and disks after cable disconnection
- Shared disk group cannot be imported in SF Sybase CE cluster
- Error importing shared disk groups in SF Sybase CE cluster
- Unable to start CVM in SF Sybase CE cluster
- CVM group is not online after adding a node to the SF Sybase CE cluster
- CVMVolDg not online even though CVMCluster is online in SF Sybase CE cluster
- Shared disks not visible in SF Sybase CE cluster
- Troubleshooting interconnects
- Troubleshooting Sybase ASE CE
- Prevention and recovery strategies
- Prevention and recovery strategies
- Verification of GAB ports in SF Sybase CE cluster
- Examining GAB seed membership
- Manual GAB membership seeding
- Evaluating VCS I/O fencing ports
- Verifying normal functioning of VCS I/O fencing
- Managing SCSI-3 PR keys in SF Sybase CE cluster
- Identifying a faulty coordinator LUN
- Starting shared volumes manually
- Listing all the CVM shared disks
- I/O Fencing kernel logs
- Tunable parameters
- Appendix A. Error messages
Testing the shared disks using the vxfentsthdw -m option
Review the following procedure to test the shared disks. By default, the vxfentsthdw utility uses the -m option. The steps in this procedure use the disk diskpath_a as an example.
If the utility does not show a message stating that a disk is ready, the verification has failed. Verification can fail because the disk array is improperly configured, or because of a bad disk.
If the failure is due to a bad disk, remove and replace it. The vxfentsthdw utility indicates that a disk can be used for I/O fencing with a message resembling:
The disk diskpath_a is ready to be configured for I/O Fencing on node system1
Note:
For A/P arrays, run the vxfentsthdw command only on active enabled paths.
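If the disks are under Veritas Dynamic Multi-Pathing (DMP) control, one way to see which paths are active and enabled is the vxdmpadm getsubpaths command. The following is only an illustrative sketch; the DMP node name sdr matches the example disk used later in this procedure:
# vxdmpadm getsubpaths dmpnodename=sdr
In the output, paths shown in the ENABLED(A) state are the active enabled paths.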
To test disks using the vxfentsthdw script
- Make sure system-to-system communication is functioning properly.
- From one node, start the utility. Specify the -n option if you want the utility to use rsh instead of the default ssh for communication between the systems.
# vxfentsthdw [-n]
- After reviewing the overview and the warning that the tests overwrite data on the disks, confirm that you want to continue the process, and enter the node names.
******** WARNING!!!!!!!! ********
THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!

Do you still want to continue : [y/n] (default: n) y
Enter the first node of the cluster: system1
Enter the second node of the cluster: system2
Enter the names of the disks you are checking. For each node, the disk may be known by the same name:
Enter the disk name to be checked for SCSI-3 PGR on node
system1 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure it's the same disk as seen by nodes system1 and system2
/dev/sdr

Enter the disk name to be checked for SCSI-3 PGR on node
system2 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure it's the same disk as seen by nodes system1 and system2
/dev/sdr

If the serial numbers of the disks are not identical, then the test terminates.
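To confirm that both nodes see the same physical disk before the test runs, you can compare the SCSI inquiry data from each node. As a sketch, the vxfenadm utility (see "About the vxfenadm utility") can display this information; run it on each node against the path that node uses, for example:
# vxfenadm -i /dev/sdr
If the vendor, product ID, and serial number in the output match on both nodes, the two paths refer to the same LUN.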
- Review the output as the utility performs the checks and reports its activities.
- If a disk is ready for I/O fencing on each node, the utility reports success:
ALL tests on the disk diskpath_a have PASSED
The disk is now ready to be configured for I/O Fencing on node system1
...
Removing test keys and temporary files, if any ...
.
.
- Run the vxfentsthdw utility for each disk you intend to verify.
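If the disks contain data that you cannot afford to overwrite, the -r option performs non-destructive testing and can be combined with -m; see "Performing non-destructive testing on the disks using the -r option." A sketch of the combined invocation, assuming the same interactive prompts as above:
# vxfentsthdw -rm
With -r, the utility tests the disks for reads only, so the verification is less complete than a full destructive test.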