InfoScale™ 9.0 Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
Planning an upgrade from the previous VVR version
If you plan to upgrade VVR from the previous version, you can reduce application downtime by upgrading the hosts at separate times. While the Primary is being upgraded, the application can be migrated to the Secondary. Replication continues between the upgraded Primary and the Secondary even though they run different versions of VVR, so high availability is maintained before the upgrade is complete at both sites. Veritas recommends that you upgrade the Secondary hosts before the Primary host in a Replicated Data Set (RDS).
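For example, the Primary role can be migrated to an already-upgraded Secondary before the original Primary is taken down for its upgrade. The following is a minimal sketch, assuming a hypothetical disk group named hrdg, an RVG named hr_rvg, and a Secondary host named london; substitute the names used in your RDS.

```
# Check the health of the RDS before the migration.
vradmin -g hrdg repstatus hr_rvg

# Migrate the Primary role to the Secondary host "london".
# Stop the application on the current Primary before running this command.
vradmin -g hrdg migrate hr_rvg london

# Verify that "london" is now the Primary and that replication has resumed.
vradmin -g hrdg repstatus hr_rvg
```

After the migration, the application can be brought up on the new Primary while the original Primary is upgraded.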
For information regarding VVR support for replicating across Storage Foundation versions, refer to the Veritas InfoScale Release Notes.
Replicating between versions is intended to remove the restriction of upgrading the Primary and Secondary at the same time. VVR can continue to replicate an existing RDS with Replicated Volume Groups (RVGs) on the systems that you want to upgrade. When the Primary and Secondary are at different versions, VVR does not support changing the configuration with the vradmin command or creating a new RDS.
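To confirm whether the Primary and Secondary are still at different levels before attempting any configuration change with vradmin, you can compare the disk group versions on each host. A hedged sketch, again using the hypothetical disk group hrdg and RVG hr_rvg:

```
# Display disk group details, including the disk group version,
# on both the Primary and the Secondary.
vxdg list hrdg | grep -i version

# While the versions differ, limit vradmin usage to monitoring, for example:
vradmin -g hrdg repstatus hr_rvg
```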
Also, if you specify TCP as the network protocol, the VVR versions on the Primary and Secondary determine whether the checksum is calculated. As shown in Table: VVR versions and checksum calculations, if either the Primary or the Secondary is running a version of VVR prior to 9.0, and you use the TCP protocol, VVR calculates the checksum for every data packet it replicates. If the Primary and Secondary are both at VVR 9.0, VVR does not calculate the checksum. Instead, it relies on the TCP checksum mechanism.
Table: VVR versions and checksum calculations
| VVR prior to 9.0 (DG version <= 140) | VVR 9.0 (DG version >= 310) | VVR calculates checksum for TCP connections? |
|---|---|---|
| Primary | Secondary | Yes |
| Secondary | Primary | Yes |
| Primary and Secondary | | Yes |
| | Primary and Secondary | No |
Note:
When replicating between versions of VVR, avoid using commands associated with new features. The earlier version may not support those features, and problems could occur.
If you do not need to upgrade all the hosts in the RDS simultaneously, you can use replication between versions after you upgrade one host. You can then upgrade the other hosts in the RDS later at your convenience.
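Once every host in the RDS has been upgraded, you would typically upgrade the disk group version so that new features become available (see "Upgrading VxVM disk group versions"). A minimal sketch with the hypothetical disk group hrdg:

```
# Upgrade the disk group to the latest version supported by this release.
vxdg upgrade hrdg

# Confirm the new disk group version.
vxdg list hrdg | grep -i version
```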
Note:
If you have a cluster setup, you must upgrade all the nodes in the cluster at the same time.