Storage Foundation and High Availability 8.0.2 Configuration and Upgrade Guide - Linux
Last Published: 2023-06-05
Product(s): InfoScale & Storage Foundation (8.0.2)
Platform: Linux
Enabling LDAP authentication for clusters that run in secure mode
The following procedure shows how to enable the plug-in module for LDAP authentication. This section provides examples for the OpenLDAP and Windows Active Directory LDAP distributions.
Before you enable LDAP authentication, complete the following steps:
- Make sure that the cluster runs in secure mode:

      # haclus -value SecureClus

  The output must return the value 1.
- Make sure that the AT version is 6.1.6.0 or later:

      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat showversion
      vssat version: 6.1.14.26
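If you script this prerequisite check, the version string reported by vssat can be compared against the 6.1.6.0 minimum. A minimal sketch, assuming the "vssat version: X.Y.Z.W" output format shown above (the helper name is hypothetical):

```shell
# Hypothetical helper: extract the version from a line such as
# "vssat version: 6.1.14.26" and check it meets the 6.1.6.0 minimum.
at_version_ok() {
  ver=$(printf '%s\n' "$1" | awk '{print $3}')
  min="6.1.6.0"
  # sort -V orders version strings numerically per component; the
  # minimum must sort first (or be equal) for the check to pass.
  first=$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)
  [ "$first" = "$min" ]
}

at_version_ok "vssat version: 6.1.14.26" && echo "AT version OK"
# prints "AT version OK"
```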
To enable OpenLDAP authentication for clusters that run in secure mode
- Run the LDAP configuration tool atldapconf using the -d option. The -d option discovers and retrieves an LDAP properties file, which is a prioritized attribute list.

      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
      -d -s domain_controller_name_or_ipaddress -u domain_user
      Attribute list file name not provided, using AttributeList.txt
      Attribute file created.

  You can use the cat command to view the entries in the attributes file.
- Run the LDAP configuration tool atldapconf using the -c option. The -c option creates a CLI file to add the LDAP domain.

      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
      -c -d LDAP_domain_name
      Attribute list file not provided, using default AttributeList.txt
      CLI file name not provided, using default CLI.txt
      CLI for addldapdomain generated.
- Run the LDAP configuration tool atldapconf using the -x option. The -x option reads the CLI file and executes the commands to add a domain to the AT.

      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf -x
      Using default broker port 14149
      CLI file not provided, using default CLI.txt
      Looking for AT installation...
      AT found installed at ./vssat
      Successfully added LDAP domain.
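The three atldapconf invocations above (-d, -c, -x) can be chained in a small wrapper script. A sketch only, using the paths and flags shown in the steps above; LDAP_HOST, LDAP_USER, and LDAP_DOMAIN are hypothetical placeholder variables, and DRYRUN=1 previews the commands instead of running them:

```shell
# Sketch: chain the -d, -c, and -x atldapconf steps shown above.
# LDAP_HOST, LDAP_USER, and LDAP_DOMAIN are placeholders you must
# supply; set DRYRUN=1 to print the commands without executing them.
ATLDAPCONF=/opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf

run() {
  if [ "${DRYRUN:-0}" = "1" ]; then
    echo "$*"          # preview mode: print the command line only
  else
    "$@"               # execute the real command
  fi
}

configure_ldap() {
  run "$ATLDAPCONF" -d -s "$LDAP_HOST" -u "$LDAP_USER" &&
    run "$ATLDAPCONF" -c -d "$LDAP_DOMAIN" &&
    run "$ATLDAPCONF" -x
}
```

Each step runs only if the previous one succeeds, mirroring the manual procedure's order.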
- Check the AT version and list the LDAP domains to verify that the Windows Active Directory server integration is complete.

      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat showversion
      vssat version: 6.1.14.26

      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat listldapdomains
      Domain Name : mydomain.com
      Server URL : ldap://192.168.20.32:389
      SSL Enabled : No
      User Base DN : CN=people,DC=mydomain,DC=com
      User Object Class : account
      User Attribute : cn
      User GID Attribute : gidNumber
      Group Base DN : CN=group,DC=domain,DC=com
      Group Object Class : group
      Group Attribute : cn
      Group GID Attribute : cn
      Auth Type : FLAT
      Admin User :
      Admin User Password :
      Search Scope : SUB
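This verification can also be scripted by grepping the listldapdomains output for the expected domain name. A small sketch, assuming the "Key : value" line layout of the sample output above (the helper name is hypothetical):

```shell
# Hypothetical verification helper: succeed if the given domain name
# appears in "vssat listldapdomains" output ("Key : value" layout
# assumed from the sample above).
domain_registered() {
  printf '%s\n' "$1" | grep -q "^Domain Name : $2\$"
}

sample_output='Domain Name : mydomain.com
Server URL : ldap://192.168.20.32:389
SSL Enabled : No'

domain_registered "$sample_output" mydomain.com && echo "domain registered"
# prints "domain registered"
```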
- Check the other domains in the cluster:

      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat showdomains -p vx

  The command output lists the number of domains that are found, with the domain names and domain types.
- Generate credentials for the user:

      # unset EAT_LOG
      # /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat authenticate \
      -d ldap:LDAP_domain_name -p user_name -s user_password -b \
      localhost:14149
- Add non-root users as applicable:

      # useradd user1
      # passwd user1
      Changing password for "user1"
      user1's New password:
      Re-enter user1's new password:
      # su user1
      # bash
      # id
      uid=204(user1) gid=1(staff)
      # pwd
      # mkdir /home/user1
      # chown user1 /home/user1
- Add the non-root user to the VCS configuration:

      # haconf -makerw
      # hauser -add user1
      # haconf -dump -makero
- Log in as the non-root user and run VCS commands as the LDAP user:

      # cd /home/user1
      # ls
      # cat .vcspwd
      101 localhost mpise LDAP_SERVER ldap
      # unset VCS_DOMAINTYPE
      # unset VCS_DOMAIN
      # /opt/VRTSvcs/bin/hasys -state
      #System        Attribute  Value
      cluster1:sysA  SysState   FAULTED
      cluster1:sysB  SysState   FAULTED
      cluster2:sysC  SysState   RUNNING
      cluster2:sysD  SysState   RUNNING
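When this final check is scripted, the hasys -state output can be scanned for systems that are not in the RUNNING state. A minimal sketch, assuming the three-column layout (System, Attribute, Value) shown in the sample above:

```shell
# Sketch: print systems whose SysState is not RUNNING, given
# "hasys -state" output in the three-column layout shown above.
# Skips the "#System Attribute Value" header row.
faulted_systems() {
  printf '%s\n' "$1" | awk 'NR > 1 && $3 != "RUNNING" { print $1 }'
}

state_output='#System Attribute Value
cluster1:sysA SysState FAULTED
cluster1:sysB SysState FAULTED
cluster2:sysC SysState RUNNING
cluster2:sysD SysState RUNNING'

faulted_systems "$state_output"
# prints cluster1:sysA and cluster1:sysB, one per line
```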