Storage Foundation for Sybase ASE CE 7.4 Administrator's Guide - Linux
- Overview of Storage Foundation for Sybase ASE CE
- About Storage Foundation for Sybase ASE CE
- How SF Sybase CE works (high-level perspective)
- About SF Sybase CE components
- About optional features in SF Sybase CE
- How the agent makes Sybase highly available
- About Veritas InfoScale Operations Manager
- Administering SF Sybase CE and its components
- Administering SF Sybase CE
- Setting the environment variables for SF Sybase CE
- Starting or stopping SF Sybase CE on each node
- Applying operating system updates on SF Sybase CE nodes
- Adding storage to an SF Sybase CE cluster
- Recovering from storage failure
- Enhancing the performance of SF Sybase CE clusters
- Verifying the nodes in an SF Sybase CE cluster
- Administering VCS
- Viewing available Veritas device drivers
- Starting and stopping VCS
- Environment variables to start and stop VCS modules
- Adding and removing LLT links
- Configuring aggregated interfaces under LLT
- Displaying the cluster details and LLT version for LLT links
- Configuring destination-based load balancing for LLT
- Enabling and disabling intelligent resource monitoring for agents manually
- Administering the AMF kernel driver
- Administering I/O fencing
- About administering I/O fencing
- About the vxfentsthdw utility
- General guidelines for using the vxfentsthdw utility
- About the vxfentsthdw command options
- Testing the coordinator disk group using the -c option of vxfentsthdw
- Performing non-destructive testing on the disks using the -r option
- Testing the shared disks using the vxfentsthdw -m option
- Testing the shared disks listed in a file using the vxfentsthdw -f option
- Testing all the disks in a disk group using the vxfentsthdw -g option
- Testing a disk with existing keys
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- Enabling or disabling the preferred fencing policy
- About I/O fencing log files
- Administering CVM
- Establishing CVM cluster membership manually
- Changing the CVM master manually
- Importing a shared disk group manually
- Deporting a shared disk group manually
- Verifying if CVM is running in an SF Sybase CE cluster
- Verifying CVM membership state
- Verifying the state of CVM shared disk groups
- Verifying the activation mode
- Administering CFS
- Administering the Sybase agent
- Sybase agent functions
- Monitoring options for the Sybase agent
- Using the IPC Cleanup feature for the Sybase agent
- Configuring the service group Sybase using the command line
- Bringing the Sybase service group online
- Taking the Sybase service group offline
- Modifying the Sybase service group configuration
- Viewing the agent log for Sybase
- Administering SF Sybase CE
- Troubleshooting SF Sybase CE
- About troubleshooting SF Sybase CE
- Restarting the installer after a failed network connection
- Installer cannot create UUID for the cluster
- Troubleshooting I/O fencing
- The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
- Node is unable to join cluster while another node is being ejected
- System panics to prevent potential data corruption
- Cluster ID on the I/O fencing key of coordinator disk does not match the local cluster's ID
- Fencing startup reports preexisting split-brain
- Registered keys are lost on the coordinator disks
- Replacing defective disks when the cluster is offline
- Troubleshooting Cluster Volume Manager in SF Sybase CE clusters
- Restoring communication between host and disks after cable disconnection
- Shared disk group cannot be imported in SF Sybase CE cluster
- Error importing shared disk groups in SF Sybase CE cluster
- Unable to start CVM in SF Sybase CE cluster
- CVM group is not online after adding a node to the SF Sybase CE cluster
- CVMVolDg not online even though CVMCluster is online in SF Sybase CE cluster
- Shared disks not visible in SF Sybase CE cluster
- Troubleshooting interconnects
- Troubleshooting Sybase ASE CE
- Prevention and recovery strategies
- Prevention and recovery strategies
- Verification of GAB ports in SF Sybase CE cluster
- Examining GAB seed membership
- Manual GAB membership seeding
- Evaluating VCS I/O fencing ports
- Verifying normal functioning of VCS I/O fencing
- Managing SCSI-3 PR keys in SF Sybase CE cluster
- Identifying a faulty coordinator LUN
- Starting shared volumes manually
- Listing all the CVM shared disks
- I/O Fencing kernel logs
- Prevention and recovery strategies
- Tunable parameters
- Appendix A. Error messages
Evaluating VCS I/O fencing ports
I/O fencing (VxFEN) uses a dedicated port that GAB provides for communication across the nodes in the cluster. This port appears as port "b" in the output of the gabconfig -a command on any node in the cluster. The entry for port "b" in this membership shows the current members of the cluster as viewed by I/O fencing.
GAB uses port "a" to maintain the cluster membership; port "a" must be active for I/O fencing to start.
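For example, on a healthy two-node cluster, the port memberships that gabconfig -a reports look similar to the following (the generation numbers vary, and additional ports, such as port "h" for VCS, may also be listed):
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 614601 membership 01
Port b gen 614604 membership 01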
To check whether fencing is enabled in a cluster, use the -d option of the vxfenadm(1M) utility to display the I/O fencing mode on each cluster node. Port "b" membership should be present in the output of gabconfig -a, and the output should list all the nodes in the cluster.
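For example, on a two-node cluster with the nodes system1 and system2, vxfenadm -d displays output similar to the following (a sample, assuming SCSI-3 disk-based fencing with the dmp disk policy; the exact fields vary by release and configuration):
# vxfenadm -d

I/O Fencing Cluster Information:
================================

 Fencing Protocol Version: 201
 Fencing Mode: SCSI3
 Fencing SCSI3 Disk Policy: dmp
 Cluster Members:

   * 0 (system1)
     1 (system2)

 RFSM State Information:
   node 0 in state 8 (running)
   node 1 in state 8 (running)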
If the GAB ports that are needed for I/O fencing are not up, that is, if port "a" is not visible in the output of the gabconfig -a command, start LLT and GAB on the node.
Use the following commands to start LLT and GAB, respectively:
To start LLT on each node:
For RHEL 7, SLES 12, and supported RHEL distributions:
# systemctl start llt
For earlier versions of RHEL, SLES, and supported RHEL distributions:
# /etc/init.d/llt start
If LLT is configured correctly on each node, the console output displays:
LLT INFO V-14-1-10009 LLT Protocol available
On a two-node cluster, for example one with the nodes system1 and system2, you can run the following checks, and the lltstat check shown after them, to make sure that LLT is configured and running properly:
# /sbin/lltconfig
The system displays the state of LLT.
# cat /etc/llthosts
0 system1
1 system2
Check the llttab on both nodes:
# cat /etc/llttab
set-node system1
set-cluster <cluster ID>
link private_nic1 eth-00:15:17:48:b4:98 - ether - -
link private_nic2 eth-00:15:17:48:b4:99 - ether - -
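You can also confirm that LLT is running and that both links are up on each node with the lltstat command; on a two-node cluster the output is similar to the following (the exact layout may vary by release):
# lltstat -n
LLT node information:
    Node           State    Links
   * 0 system1     OPEN        2
     1 system2     OPEN        2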
To start GAB on each node:
For RHEL 7, SLES 12, and supported RHEL distributions:
# systemctl start gab
For earlier versions of RHEL, SLES, and supported RHEL distributions:
# /etc/init.d/gab start
If GAB is configured correctly on each node, the console output displays:
GAB INFO V-15-1-20021 GAB available
GAB INFO V-15-1-20026 Port a registration waiting for seed port membership
Check to make sure that GAB is properly configured:
# gabconfig -a | grep 'Port b'
Port b gen 614604 membership 01
# cat /etc/gabtab
/sbin/gabconfig -c -n2
Here, the -n2 option indicates that the cluster consists of two nodes.
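Once LLT and GAB are running on all nodes and port "a" membership is complete, the fencing module can be started so that port "b" membership comes up as well. A minimal sketch, assuming the vxfen startup service is configured in the same way as the llt and gab services shown above:
For RHEL 7, SLES 12, and supported RHEL distributions:
# systemctl start vxfen
For earlier versions of RHEL, SLES, and supported RHEL distributions:
# /etc/init.d/vxfen start
After the fencing driver starts on each node, port "b" in the gabconfig -a output should again list all the nodes in the cluster, as described at the beginning of this section.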