InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Planning your CP server setup
- Installing the CP server using the installer
- Configuring the CP server cluster in secure mode
- Setting up shared storage for the CP server database
- Configuring the CP server using the installer program
- Configuring the CP server manually
- Verifying the CP server configuration
- Configuring SFCFSHA
- Overview of tasks to configure SFCFSHA using the product installer
- Starting the software configuration
- Specifying systems for configuration
- Configuring the cluster name
- Configuring private heartbeat links
- Configuring the virtual IP of the cluster
- Configuring SFCFSHA in secure mode
- Configuring a secure cluster node by node
- Adding VCS users
- Configuring SMTP email notification
- Configuring SNMP trap notification
- Configuring global clusters
- Completing the SFCFSHA configuration
- About the License Audit Tool
- Verifying and updating licenses on the system
- Configuring SFDB
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Setting up non-SCSI-3 I/O fencing in virtual environments using installer
- Setting up majority-based I/O fencing using installer
- Enabling or disabling the preferred fencing policy
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring I/O fencing using response files
- Response file variables to configure disk-based I/O fencing
- Sample response file for configuring disk-based I/O fencing
- Configuring CP server using response files
- Response file variables to configure server-based I/O fencing
- Sample response file for configuring server-based I/O fencing
- Response file variables to configure non-SCSI-3 I/O fencing
- Sample response file for configuring non-SCSI-3 I/O fencing
- Response file variables to configure majority-based I/O fencing
- Sample response file for configuring majority-based I/O fencing
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Preparing the CP servers manually for use by the SFCFSHA cluster
- Generating the client key and certificates manually on the client nodes
- Configuring server-based fencing on the SFCFSHA cluster manually
- Configuring CoordPoint agent to monitor coordination points
- Verifying server-based I/O fencing configuration
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- About the upgrade
- Supported upgrade paths
- Transitioning between the InfoScale products
- Considerations for upgrading SFCFSHA to 9.0 on systems configured with an Oracle resource
- Preparing to upgrade SFCFSHA
- Considerations for upgrading REST server
- Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Moving the service groups to the second subcluster
- Upgrading the operating system on the first subcluster
- Upgrading the SFCFSHA stack on the first subcluster
- Preparing the second subcluster
- Activating the first subcluster
- Upgrading the operating system on the second subcluster
- Upgrading the second subcluster
- Completing the phased upgrade
- Performing an automated SFCFSHA upgrade using response files
- Upgrading SFCFSHA using YUM
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Performing post-upgrade tasks
- Resetting DAS disk names to include host name in FSS environments
- Re-joining the backup boot disk group into the current disk group
- Reverting to the backup boot disk group after an unsuccessful upgrade
- CVM master node needs to assume the logowner role for VCS managed VVR resources
- Consideration when KMS is used for volume encryption
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- About adding a node to a cluster
- Before adding a node to a cluster
- Adding a node to a cluster using the Veritas InfoScale installer
- Adding the node to a cluster manually
- Starting Veritas Volume Manager (VxVM) on the new node
- Configuring cluster processes on the new node
- Setting up the node to run in secure mode
- Starting fencing on the new node
- After adding the new node
- Configuring Cluster Volume Manager (CVM) and Cluster File System (CFS) on the new node
- Configuring the ClusterService group for the new node
- Adding a node using response files
- Configuring server-based fencing on the new node
- Adding nodes to a cluster that is using authentication for SFDB tools
- Updating the Storage Foundation for Databases (SFDB) repository after adding a node
- Sample configuration file for adding a node to the cluster
- Removing a node from SFCFSHA clusters
- About removing a node from a cluster
- Removing a node from a cluster
- Modifying the VCS configuration files on existing nodes
- Modifying the Cluster Volume Manager (CVM) configuration on the existing nodes to remove references to the deleted node
- Removing the node configuration from the CP server
- Removing security credentials from the leaving node
- Updating the Storage Foundation for Databases (SFDB) repository after removing a node
- Sample configuration file for removing a node from the cluster
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- About configuring secure shell or remote shell communication modes before installing products
- Manually configuring passwordless ssh
- Setting up ssh and rsh connection using the installer -comsetup command
- Setting up ssh and rsh connection using the pwdutil.pl utility
- Restarting the ssh session
- Enabling rsh for Linux
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Broadcast address in the /etc/llttab file
- The link command in the /etc/llttab file
- The set-addr command in the /etc/llttab file
- Selecting UDP ports
- Configuring the netmask for LLT
- Configuring the broadcast address for LLT
- Sample configuration: direct-attached links
- Sample configuration: links crossing IP routers
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Using LLT over RDMA
- About RDMA over RoCE or InfiniBand networks in a clustering environment
- How LLT supports RDMA capability for faster interconnects between applications
- Using LLT over RDMA: supported use cases
- Configuring LLT over RDMA
- Choosing supported hardware for LLT over RDMA
- Installing RDMA, InfiniBand or Ethernet drivers and utilities
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- LLT over RDMA sample /etc/llttab
- Verifying LLT configuration
- Troubleshooting LLT over RDMA
- IP addresses associated to the RDMA NICs do not automatically plumb on node restart
- Ping test fails for the IP addresses configured over InfiniBand interfaces
- After a node restart, by default the Mellanox card with Virtual Protocol Interconnect (VPI) gets configured in InfiniBand mode
- The LLT module fails to start
Configuring the virtual IP of the cluster
You can configure a virtual IP for the cluster, which you can use to connect from the Cluster Manager (Java Console) or Arctera InfoScale Operations Manager, or specify in the RemoteGroup resource.
See the Cluster Server Administrator's Guide for information on the Cluster Manager.
See the Cluster Server Bundled Agents Reference Guide for information on the RemoteGroup agent.
To configure the virtual IP of the cluster
- Review the required information to configure the virtual IP of the cluster.
- When the system prompts whether you want to configure the virtual IP, enter y.
- Confirm whether you want to use the discovered public NIC on the first system.
Do one of the following:
- If the discovered NIC is the one that you want to use, press Enter.
- If you want to use a different NIC, type the name of the NIC to use and press Enter.
    Active NIC devices discovered on sys1: eth0
    Enter the NIC for Virtual IP of the Cluster to use on sys1: [b,q,?](eth0)
- Confirm whether you want to use the same public NIC on all nodes.
Do one of the following:
- If all nodes use the same public NIC, enter y.
- If unique NICs are used, enter n, and then enter a NIC for each node.
    Is eth0 to be the public NIC used by all systems [y,n,q,b,?] (y)
- Enter the virtual IP address for the cluster.
You can enter either an IPv4 address or an IPv6 address. (For a quick way to sanity-check the NIC and address values you enter, see the sketch at the end of this topic.)
For IPv4:
Enter the virtual IP address.
    Enter the Virtual IP address for the Cluster: [b,q,?] 192.168.1.16
Confirm the default netmask or enter another one:
    Enter the netmask for IP 192.168.1.16: [b,q,?] (255.255.240.0)
Verify and confirm the Cluster Virtual IP information.
    Cluster Virtual IP verification:
        NIC: eth0
        IP: 192.168.1.16
        Netmask: 255.255.240.0
    Is this information correct? [y,n,q] (y)
For IPv6:
Enter the virtual IP address.
    Enter the Virtual IP address for the Cluster: [b,q,?] 2001:454e:205a:110:203:baff:feee:10
Enter the prefix for the virtual IPv6 address you provided. For example:
    Enter the Prefix for IP 2001:454e:205a:110:203:baff:feee:10: [b,q,?] 64
Verify and confirm the Cluster Virtual IP information.
    Cluster Virtual IP verification:
        NIC: eth0
        IP: 2001:454e:205a:110:203:baff:feee:10
        Prefix: 64
    Is this information correct? [y,n,q] (y)
If you want to set up trust relationships for your secure cluster, refer to the topics Configuring SFCFSHA in secure mode and Configuring a secure cluster node by node.
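Before you run the installer, you may want to sanity-check the NIC and address values you plan to enter. The following sketch is a minimal, illustrative pre-check, not part of the InfoScale installer: it assumes a Linux sysfs tree at /sys/class/net, uses only the Python standard library, and takes its sample values from the prompts above. It lists the NICs that are currently up and prints the network that each address/netmask or address/prefix pair belongs to. For reference, the default netmask 255.255.240.0 is equivalent to a /20 prefix, so 192.168.1.16 falls in the network 192.168.0.0/20.

```python
#!/usr/bin/env python3
# Illustrative pre-check sketch for the values the installer prompts
# for above. Not part of the InfoScale installer; sample values are
# the ones shown in the prompts and should be replaced with your own.
import ipaddress
from pathlib import Path

def active_nics():
    """Return non-loopback NICs whose operational state is 'up',
    read from the Linux sysfs tree (/sys/class/net)."""
    nics = []
    for nic in sorted(Path("/sys/class/net").iterdir()):
        state_file = nic / "operstate"
        if (nic.name != "lo" and state_file.is_file()
                and state_file.read_text().strip() == "up"):
            nics.append(nic.name)
    return nics

def describe(addr_with_mask):
    """Print the network an address belongs to. ip_interface accepts
    'address/netmask' for IPv4 and 'address/prefixlen' for both
    IPv4 and IPv6."""
    iface = ipaddress.ip_interface(addr_with_mask)
    print(f"{iface.ip} -> network {iface.network}")

print("Active NICs:", ", ".join(active_nics()) or "none")

# IPv4 virtual IP with the default netmask from the prompts above;
# 255.255.240.0 is a /20, so the network is 192.168.0.0/20.
describe("192.168.1.16/255.255.240.0")

# IPv6 virtual IP with the example prefix length of 64.
describe("2001:454e:205a:110:203:baff:feee:10/64")
```

If the network that the sketch reports does not match the subnet configured on the public NIC you chose (compare against the output of ip addr show on each node), the virtual IP will not be reachable through that NIC.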