Storage Foundation and High Availability 7.4.2 Configuration and Upgrade Guide - Linux
Last Published: 2020-07-30
Product(s): InfoScale & Storage Foundation (7.4.2)
Platform: Linux
Setting the order of existing coordination points using the installer
To set the order of existing coordination points
- Start the installer with the -fencing option.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of log files that you can access if there is a problem with the configuration process.
 - Confirm that you want to proceed with the I/O fencing configuration at the prompt.
The program checks that the local node running the script can communicate with remote nodes and checks whether SFHA 7.4.2 is configured properly.
- Review the I/O fencing configuration options that the program presents. Type the number corresponding to the option for setting the order of the existing coordination points.
For example:
Select the fencing mechanism to be configured in this Application Cluster [1-7,q] 7
The installer prompts you for the new order of the existing coordination points, and then calls the vxfenswap utility to commit the coordination point change.
Warning:
The cluster might panic if a node leaves membership before the coordination points change is complete.
 - Review the current order of coordination points.
Current coordination points order:
(Coordination disks/Coordination Point Server)
Example,
1) /dev/vx/rdmp/emc_clariion0_65,/dev/vx/rdmp/emc_clariion0_66,
/dev/vx/rdmp/emc_clariion0_62
2) [10.198.94.144]:443
3) [10.198.94.146]:443
b) Back to previous menu
- Enter the new order of the coordination points by the numbers and separate the order by space [1-3,b,q] 3 1 2.
New coordination points order:
(Coordination disks/Coordination Point Server)
Example,
1) [10.198.94.146]:443
2) /dev/vx/rdmp/emc_clariion0_65,/dev/vx/rdmp/emc_clariion0_66,
/dev/vx/rdmp/emc_clariion0_62
3) [10.198.94.144]:443
 - Is this information correct? [y,n,q] (y).
Preparing vxfenmode.test file on all systems...
Running vxfenswap...
Successfully completed the vxfenswap operation
 - Do you want to send the information about this installation to us to help improve installation in the future? [y,n,q,?] (y).
 - Do you want to view the summary file? [y,n,q] (n).
- Verify that the value of vxfen_honor_cp_order specified in the /etc/vxfenmode file is set to 1.
For example,
vxfen_mode=customized
vxfen_mechanism=cps
port=443
scsi3_disk_policy=dmp
cps1=[10.198.94.146]
vxfendg=vxfencoorddg
cps2=[10.198.94.144]
vxfen_honor_cp_order=1
- Verify that the coordination point order is updated in the output of the vxfenconfig -l command. (A combined check for this step and the previous one is sketched after this procedure.)
For example,
I/O Fencing Configuration Information:
======================================
single_cp=0
[10.198.94.146]:443 {e7823b24-1dd1-11b2-8814-2299557f1dc0}
/dev/vx/rdmp/emc_clariion0_65 60060160A38B1600386FD87CA8FDDD11
/dev/vx/rdmp/emc_clariion0_66 60060160A38B1600396FD87CA8FDDD11
/dev/vx/rdmp/emc_clariion0_62 60060160A38B16005AA00372A8FDDD11
[10.198.94.144]:443 {01f18460-1dd2-11b2-b818-659cbc6eb360}
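The last two verification steps can be spot-checked across the cluster from a single node with a small loop such as the one below. This is a minimal sketch, not installer output: it assumes passwordless ssh is configured between the cluster nodes, and the node names sys1 and sys2 are placeholders for your own host names.

for node in sys1 sys2
do
    echo "### $node"
    # vxfen_honor_cp_order=1 means fencing honors the configured order.
    ssh $node "grep vxfen_honor_cp_order /etc/vxfenmode"
    # The coordination points should be listed in the new order.
    ssh $node "vxfenconfig -l"
done

If any node reports vxfen_honor_cp_order=0 or still shows the old order, re-running this procedure with the installer -fencing option is the simplest way to bring the cluster back in sync.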