InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Planning your CP server setup
- Installing the CP server using the installer
- Configuring the CP server cluster in secure mode
- Setting up shared storage for the CP server database
- Configuring the CP server using the installer program
- Configuring the CP server manually
- Verifying the CP server configuration
- Configuring SFCFSHA
- Overview of tasks to configure SFCFSHA using the product installer
- Starting the software configuration
- Specifying systems for configuration
- Configuring the cluster name
- Configuring private heartbeat links
- Configuring the virtual IP of the cluster
- Configuring SFCFSHA in secure mode
- Configuring a secure cluster node by node
- Adding VCS users
- Configuring SMTP email notification
- Configuring SNMP trap notification
- Configuring global clusters
- Completing the SFCFSHA configuration
- About the License Audit Tool
- Verifying and updating licenses on the system
- Configuring SFDB
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Setting up non-SCSI-3 I/O fencing in virtual environments using installer
- Setting up majority-based I/O fencing using installer
- Enabling or disabling the preferred fencing policy
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring I/O fencing using response files
- Response file variables to configure disk-based I/O fencing
- Sample response file for configuring disk-based I/O fencing
- Configuring CP server using response files
- Response file variables to configure server-based I/O fencing
- Sample response file for configuring server-based I/O fencing
- Response file variables to configure non-SCSI-3 I/O fencing
- Sample response file for configuring non-SCSI-3 I/O fencing
- Response file variables to configure majority-based I/O fencing
- Sample response file for configuring majority-based I/O fencing
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Preparing the CP servers manually for use by the SFCFSHA cluster
- Generating the client key and certificates manually on the client nodes
- Configuring server-based fencing on the SFCFSHA cluster manually
- Configuring CoordPoint agent to monitor coordination points
- Verifying server-based I/O fencing configuration
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- About the upgrade
- Supported upgrade paths
- Transitioning between the InfoScale products
- Considerations for upgrading SFCFSHA to 9.0 on systems configured with an Oracle resource
- Preparing to upgrade SFCFSHA
- Considerations for upgrading REST server
- Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Moving the service groups to the second subcluster
- Upgrading the operating system on the first subcluster
- Upgrading the SFCFSHA stack on the first subcluster
- Preparing the second subcluster
- Activating the first subcluster
- Upgrading the operating system on the second subcluster
- Upgrading the second subcluster
- Completing the phased upgrade
- Performing an automated SFCFSHA upgrade using response files
- Upgrading SFCFSHA using YUM
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Performing post-upgrade tasks
- Resetting DAS disk names to include host name in FSS environments
- Re-joining the backup boot disk group into the current disk group
- Reverting to the backup boot disk group after an unsuccessful upgrade
- CVM master node needs to assume the logowner role for VCS managed VVR resources
- Consideration when KMS is used for volume encryption
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- About adding a node to a cluster
- Before adding a node to a cluster
- Adding a node to a cluster using the Veritas InfoScale installer
- Adding the node to a cluster manually
- Starting Veritas Volume Manager (VxVM) on the new node
- Configuring cluster processes on the new node
- Setting up the node to run in secure mode
- Starting fencing on the new node
- After adding the new node
- Configuring Cluster Volume Manager (CVM) and Cluster File System (CFS) on the new node
- Configuring the ClusterService group for the new node
- Adding a node using response files
- Configuring server-based fencing on the new node
- Adding nodes to a cluster that is using authentication for SFDB tools
- Updating the Storage Foundation for Databases (SFDB) repository after adding a node
- Sample configuration file for adding a node to the cluster
- Removing a node from SFCFSHA clusters
- About removing a node from a cluster
- Removing a node from a cluster
- Modifying the VCS configuration files on existing nodes
- Modifying the Cluster Volume Manager (CVM) configuration on the existing nodes to remove references to the deleted node
- Removing the node configuration from the CP server
- Removing security credentials from the leaving node
- Updating the Storage Foundation for Databases (SFDB) repository after removing a node
- Sample configuration file for removing a node from the cluster
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- About configuring secure shell or remote shell communication modes before installing products
- Manually configuring passwordless ssh
- Setting up ssh and rsh connection using the installer -comsetup command
- Setting up ssh and rsh connection using the pwdutil.pl utility
- Restarting the ssh session
- Enabling rsh for Linux
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Broadcast address in the /etc/llttab file
- The link command in the /etc/llttab file
- The set-addr command in the /etc/llttab file
- Selecting UDP ports
- Configuring the netmask for LLT
- Configuring the broadcast address for LLT
- Sample configuration: direct-attached links
- Sample configuration: links crossing IP routers
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Using LLT over RDMA
- About RDMA over RoCE or InfiniBand networks in a clustering environment
- How LLT supports RDMA capability for faster interconnects between applications
- Using LLT over RDMA: supported use cases
- Configuring LLT over RDMA
- Choosing supported hardware for LLT over RDMA
- Installing RDMA, InfiniBand or Ethernet drivers and utilities
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- LLT over RDMA sample /etc/llttab
- Verifying LLT configuration
- Troubleshooting LLT over RDMA
- IP addresses associated to the RDMA NICs do not automatically plumb on node restart
- Ping test fails for the IP addresses configured over InfiniBand interfaces
- After a node restart, by default the Mellanox card with Virtual Protocol Interconnect (VPI) gets configured in InfiniBand mode
- The LLT module fails to start
Adding a node to a cluster using the Veritas InfoScale installer
You can add a node to a cluster using the -addnode option with the Veritas InfoScale installer.
The Veritas InfoScale installer performs the following tasks:
Verifies that the node and the existing cluster meet communication requirements.
Verifies the products and RPMs installed but not configured on the new node.
Discovers the network interfaces on the new node and checks the interface settings.
Creates the following files on the new node:
/etc/llttab
/etc/VRTSvcs/conf/sysname
Updates and copies the following files to the new node from the existing node:
/etc/llthosts
/etc/gabtab
/etc/VRTSvcs/conf/config/main.cf
Copies the following files from the existing cluster to the new node:
/etc/vxfenmode
/etc/vxfendg
/etc/vx/.uuids/clusuuid
/etc/sysconfig/llt
/etc/sysconfig/gab
/etc/sysconfig/vxfen
Configures disk-based or server-based fencing depending on the fencing mode in use on the existing cluster.
Adds the new node to the CVM and ClusterService service groups in the VCS configuration.
Note:
For other service groups configured under VCS, update the configuration for the new node manually.
Starts SFCFSHA processes and configures CVM and CFS on the new node.
At the end of the process, the new node joins the SFCFSHA cluster.
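For example, after the installer updates /etc/llthosts, the file on every node maps LLT node IDs to host names with the new node appended. A sketch for a two-node cluster (sys1, sys2) that sys5 joins, assuming these node IDs:
0 sys1
1 sys2
2 sys5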
Enable the required LLT ports on the firewall.
See Enabling LLT ports in firewall.
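If the cluster interconnect uses LLT over UDP, the ports to open are those defined in /etc/llttab. A minimal firewalld sketch, assuming the default LLT UDP ports 50000 and 50001 (verify the actual ports in your llttab):
# firewall-cmd --permanent --add-port=50000-50001/udp
# firewall-cmd --reload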
Note:
If you have configured server-based fencing on the existing cluster, make sure that the CP server does not contain entries for the new node. If the CP server already contains entries for the new node, remove them before adding the node to the cluster; otherwise, the process may fail with an error.
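To check for stale entries, you can query the CP server with the cpsadm command, for example (cps1 is a placeholder for your CP server name):
# cpsadm -s cps1 -a list_nodes
If the new node is listed, remove its entry with the rm_node action. The UUID and node ID below are placeholders; the cluster UUID is stored in /etc/vx/.uuids/clusuuid:
# cpsadm -s cps1 -a rm_node -u {f0735332-1dd1-11b2} -n 2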
To add the node to an existing cluster using the installer
- Log in as the root user on one of the nodes of the existing cluster.
- Run the Veritas InfoScale installer with the -addnode option.
# cd /opt/VRTS/install
# ./installer -addnode
The installer displays the copyright message and the location where it stores the temporary installation logs.
- Enter the name of a node in the existing SFCFSHA cluster.
The installer uses the node information to identify the existing cluster.
Enter the name of any one node of the InfoScale ENTERPRISE cluster where you would like to add one or more new nodes: sys1
- Review and confirm the cluster information.
- Enter the names of the systems that you want to add as new nodes to the cluster.
Enter the system names separated by spaces to add to the cluster: sys5
Confirm when the installer prompts whether you want to add the node to the cluster.
The installer checks the installed products and RPMs on the nodes and discovers the network interfaces.
- Enter the name of the network interface that you want to configure as the first private heartbeat link.
Enter the NIC for the first private heartbeat link on sys5: [b,q,?] eth1
Enter the NIC for the second private heartbeat link on sys5: [b,q,?] eth2
Note:
At least two private heartbeat links must be configured for high availability of the cluster.
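If you are not sure which interfaces on the new node are available for the heartbeat links, you can list them first; eth1 and eth2 above are only examples, and the links you choose should match the LLT interconnects used on the existing nodes:
# ip -br link show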
- Depending on the number of LLT links configured in the existing cluster, configure additional private heartbeat links for the new node.
The installer verifies the network interface settings and displays the information.
- Review and confirm the information.
- If you have configured SMTP, SNMP, or the global cluster option in the existing cluster, you are prompted for the NIC information for the new node.
Enter the NIC for VCS to use on sys5: eth3
- The installer prompts you with an option to mount the shared volumes on the new node. Select y to mount them.
When completed, the installer confirms that the volumes are mounted.
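To double-check from the new node, you can list the mounted VxFS file systems; CFS mounts typically include the cluster option in the mount output:
# mount -t vxfs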
- If the existing cluster uses server-based fencing, the installer configures server-based fencing on the new nodes.
The installer then starts all the required processes and joins the new node to the cluster.
The installer indicates the location of the log file, summary file, and response file with details of the actions performed.
If you have enabled security on the cluster, the installer displays the following message:
Since the cluster is in secure mode, check the main.cf whether you need to modify the usergroup that you would like to grant read access. If needed, use the following commands to modify:
# haconf -makerw
# hauser -addpriv <user group> GuestGroup
# haconf -dump -makero
- Confirm that the new node has joined the SFCFSHA cluster using the lltstat -n and gabconfig -a commands.
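The output should show the new node in the LLT node list and in the GAB port memberships. A sketch of what to expect for a cluster of sys1, sys2, and the new node sys5; node IDs, generation numbers, and ports vary by configuration:
# lltstat -n
LLT node information:
    Node   State    Links
   * 0 sys1  OPEN     2
     1 sys2  OPEN     2
     2 sys5  OPEN     2
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   d8f501 membership 012
Port b gen   d8f504 membership 012
Port h gen   d8f508 membership 012
If fencing is configured, you can also run vxfenadm -d on the new node to confirm that it appears in the I/O fencing cluster membership.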