Storage Foundation and High Availability 7.4.2 Configuration and Upgrade Guide - AIX
Last Published: 2020-07-30
Product(s): InfoScale & Storage Foundation (7.4.2)
Platform: AIX
- Section I. Introduction to SFHA
- Introducing Storage Foundation and High Availability
- Section II. Configuration of SFHA
- Preparing to configure
- Preparing to configure SFHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Planning your CP server setup
- Installing the CP server using the installer
- Configuring the CP server cluster in secure mode
- Setting up shared storage for the CP server database
- Configuring the CP server using the installer program
- Configuring the CP server manually
- Configuring CP server using response files
- Verifying the CP server configuration
- Configuring SFHA
- Configuring Storage Foundation High Availability using the installer
- Overview of tasks to configure SFHA using the product installer
- Required information for configuring Storage Foundation and High Availability Solutions
- Starting the software configuration
- Specifying systems for configuration
- Configuring the cluster name
- Configuring private heartbeat links
- Configuring the virtual IP of the cluster
- Configuring SFHA in secure mode
- Configuring a secure cluster node by node
- Adding VCS users
- Configuring SMTP email notification
- Configuring SNMP trap notification
- Configuring global clusters
- Completing the SFHA configuration
- About Veritas License Audit Tool
- Verifying and updating licenses on the system
- Configuring SFDB
- Configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Setting up non-SCSI-3 I/O fencing in virtual environments using installer
- Setting up majority-based I/O fencing using installer
- Enabling or disabling the preferred fencing policy
- Manually configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Preparing the CP servers manually for use by the SFHA cluster
- Generating the client key and certificates manually on the client nodes
- Configuring server-based fencing on the SFHA cluster manually
- Configuring CoordPoint agent to monitor coordination points
- Verifying server-based I/O fencing configuration
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Performing an automated SFHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring I/O fencing using response files
- Response file variables to configure disk-based I/O fencing
- Sample response file for configuring disk-based I/O fencing
- Response file variables to configure server-based I/O fencing
- Sample response file for configuring non-SCSI-3 I/O fencing
- Response file variables to configure non-SCSI-3 I/O fencing
- Response file variables to configure majority-based I/O fencing
- Sample response file for configuring majority-based I/O fencing
- Section III. Upgrade of SFHA
- Planning to upgrade SFHA
- About the upgrade
- Supported upgrade paths
- Considerations for upgrading SFHA to 7.4.2 on systems configured with an Oracle resource
- Preparing to upgrade SFHA
- Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
- Upgrading Storage Foundation and High Availability
- Performing a rolling upgrade of SFHA
- Performing a phased upgrade of SFHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Moving the service groups to the second subcluster
- Upgrading the operating system on the first subcluster
- Upgrading the first subcluster
- Preparing the second subcluster
- Activating the first subcluster
- Upgrading the operating system on the second subcluster
- Upgrading the second subcluster
- Finishing the phased upgrade
- Performing an automated SFHA upgrade using response files
- Performing post-upgrade tasks
- Optional configuration steps
- Recovering VVR if automatic upgrade fails
- Post-upgrade tasks when VCS agents for VVR are configured
- Resetting DAS disk names to include host name in FSS environments
- Upgrading disk layout versions
- Upgrading VxVM disk group versions
- Updating variables
- Setting the default disk group
- About enabling LDAP authentication for clusters that run in secure mode
- Verifying the Storage Foundation and High Availability upgrade
- Section IV. Post-installation tasks
- Section V. Adding and removing nodes
- Adding a node to SFHA clusters
- About adding a node to a cluster
- Before adding a node to a cluster
- Adding a node to a cluster using the Veritas InfoScale installer
- Adding the node to a cluster manually
- Adding a node using response files
- Configuring server-based fencing on the new node
- After adding the new node
- Adding nodes to a cluster that is using authentication for SFDB tools
- Updating the Storage Foundation for Databases (SFDB) repository after adding a node
- Removing a node from SFHA clusters
- Removing a node from an SFHA cluster
- Verifying the status of nodes and service groups
- Deleting the departing node from SFHA configuration
- Modifying configuration files on each remaining node
- Removing the node configuration from the CP server
- Removing security credentials from the leaving node
- Unloading LLT and GAB and removing Veritas InfoScale Availability or Enterprise on the departing node
- Updating the Storage Foundation for Databases (SFDB) repository after removing a node
- Section VI. Configuration and upgrade reference
- Appendix A. Support for AIX Live Update
- Appendix B. Installation scripts
- Appendix C. SFHA services and ports
- Appendix D. Configuration files
- Appendix E. Configuring the secure shell or the remote shell for communications
- About configuring secure shell or remote shell communication modes before installing products
- Manually configuring passwordless ssh
- Setting up ssh and rsh connection using the installer -comsetup command
- Setting up ssh and rsh connection using the pwdutil.pl utility
- Restarting the ssh session
- Enabling rsh for AIX
- Appendix F. Sample SFHA cluster setup diagrams for CP server-based I/O fencing
- Appendix G. Changing NFS server major numbers for VxVM volumes
- Appendix H. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Broadcast address in the /etc/llttab file
- The link command in the /etc/llttab file
- The set-addr command in the /etc/llttab file
- Selecting UDP ports
- Configuring the netmask for LLT
- Configuring the broadcast address for LLT
- Sample configuration: direct-attached links
- Sample configuration: links crossing IP routers
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
Configuring cluster processes on the new node
Perform the following steps to configure cluster processes on the new node.
- Edit the /etc/llthosts file on the existing nodes. Using vi or another text editor, add the line for the new node to the file. The file resembles:
0 sys1
1 sys2
2 Sys5
- Copy the /etc/llthosts file from one of the existing systems over to the new system. The /etc/llthosts file must be identical on all nodes in the cluster.
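For example, if remote shell or secure shell communication is already set up between the nodes (see the appendix on configuring the secure shell or the remote shell), the copy from sys1 might look like this; rcp and scp are interchangeable here, depending on which communication mode you use:
# rcp /etc/llthosts Sys5:/etc/llthosts
or, over secure shell:
# scp /etc/llthosts Sys5:/etc/llthosts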
- Create an /etc/llttab file on the new system. For example:
set-node Sys5
set-cluster 101
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
Except for the set-node line, which refers to the node itself, the file resembles the /etc/llttab files on the existing nodes. The cluster ID on the set-cluster line must be the same as on the existing nodes.
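For comparison, a minimal sketch of the /etc/llttab file on an existing node such as sys1, assuming the same link devices as in the example above:
set-node sys1
set-cluster 101
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
Only the set-node line differs.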
- Use vi or another text editor to create the file /etc/gabtab on the new node. This file must contain a line that resembles the following example:
/sbin/gabconfig -c -nN
Where N represents the number of systems in the cluster including the new node. For a three-system cluster, N would equal 3.
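In this example, adding Sys5 to the existing nodes sys1 and sys2 results in a three-system cluster, so /etc/gabtab on every node would contain:
/sbin/gabconfig -c -n3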
- Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file on the new system.
- Use vi or another text editor to create the file /etc/VRTSvcs/conf/sysname on the new node. This file must contain the name of the new node added to the cluster. For example:
Sys5
- Create the Universally Unique Identifier (UUID) file /etc/vx/.uuids/clusuuid on the new node:
# /opt/VRTSvcs/bin/uuidconfig.pl -rsh -clus -copy \
-from_sys sys1 -to_sys Sys5
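To confirm that the UUID was copied correctly, compare the file contents on an existing node and on the new node; the values must be identical. For example, run the following on both sys1 and Sys5:
# cat /etc/vx/.uuids/clusuuid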
- Start the LLT, GAB, and ODM drivers on the new node:
# /etc/init.d/llt.rc start
# /etc/init.d/gab.rc start
# /etc/rc.d/rc2.d/S99odm start
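Before checking GAB, you can optionally confirm that LLT started and can see its peers. Running lltconfig with no arguments reports whether LLT is running, and lltstat -n lists the nodes that LLT recognizes (output shown is typical):
# lltconfig
LLT is running
# lltstat -n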
- On the new node, verify the GAB port memberships:
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen df204 membership 012
The membership string 012 shows that nodes 0, 1, and 2, including the new node, are all members of GAB port a.