InfoScale™ 9.0 Cluster Server Configuration and Upgrade Guide - AIX
Response file variables to upgrade VCS
Table: Response file variables specific to upgrading VCS lists the response file variables that you can define to upgrade VCS.
Table: Response file variables specific to upgrading VCS

| Variable | List or Scalar | Description |
|---|---|---|
| CFG{opt}{upgrade} | Scalar | Upgrades VCS filesets. (Required) |
| CFG{accepteula} | Scalar | Specifies whether you agree with EULA.pdf on the media. (Required) |
| CFG{systems} | List | List of systems on which the product is to be upgraded. (Required) |
| CFG{defaultaccess} | Scalar | Defines whether the user chooses to grant read access to the VCS cluster information to everyone. (Optional) |
| CFG{key} | Scalar | Stores the keyless key that you want to register. (Optional) |
| CFG{vcs_allowcomms} | Scalar | Indicates whether to start LLT and GAB when you set up a single-node cluster. The value can be 0 (do not start) or 1 (start). (Required) |
| CFG{opt}{keyfile} | Scalar | Defines the location of an ssh keyfile that is used to communicate with all remote systems. (Optional) |
| CFG{opt}{pkgpath} | Scalar | Defines a location, typically an NFS mount, from which all remote systems can install product filesets. The location must be accessible from all target systems. (Optional) |
| CFG{opt}{tmppath} | Scalar | Defines the location where a working directory is created to store temporary files and the filesets that are needed during the install. The default location is /opt/VRTStmp. (Optional) |
| CFG{secusrgrps} | List | Defines the user groups that get read access to the cluster. (Optional) |
| CFG{opt}{logpath} | Scalar | Specifies the location where the log files are copied. The default location is /opt/VRTS/install/logs. Note: The installer also copies the response files and summary files to the specified logpath location. (Optional) |
| CFG{opt}{rsh} | Scalar | Defines that rsh must be used instead of ssh as the communication method between systems. (Optional) |
| CFG{opt}{online_upgrade} | Scalar | Set the value to 1 for online upgrades. |
| CFG{subcluster}{#} | List | Defines the division of the cluster into subclusters. The index # indicates the order in which the rolling upgrade is performed, so you must define it if you are upgrading a cluster whose nodes run different versions of the VCS engine. The index # value starts from 0. Each subcluster entry in the response file defines a list of nodes to be upgraded in that subcluster. (Required) |
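
For reference, the following is a minimal sketch of an upgrade response file built from the variables above. The host names sys1 and sys2 are placeholders, and the subcluster entries show one assumed reading of the CFG{subcluster}{#} syntax described in the table; treat the values as illustrative rather than a complete, validated configuration.

```perl
# Minimal sketch of a VCS upgrade response file (Perl syntax).
# Host names sys1 and sys2 are placeholders for your cluster nodes.
our %CFG;

$CFG{opt}{upgrade}   = 1;                         # upgrade the VCS filesets
$CFG{accepteula}     = 1;                         # agree with EULA.pdf on the media
$CFG{systems}        = [ qw(sys1 sys2) ];         # systems to be upgraded
$CFG{vcs_allowcomms} = 1;                         # start LLT and GAB
$CFG{opt}{logpath}   = "/opt/VRTS/install/logs";  # where log, response, and summary files are copied

# Only needed when upgrading a cluster whose nodes run different versions of
# the VCS engine; the index defines the upgrade order (assumed form, per the
# table above).
$CFG{subcluster}{0} = [ qw(sys1) ];
$CFG{subcluster}{1} = [ qw(sys2) ];

1;
```

You would typically pass such a file to the installer with the -responsefile option. The file itself is plain Perl, which is why each assignment ends with a semicolon and the file ends with 1;.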