Cluster Server 7.4.1 Configuration and Upgrade Guide - Solaris
Last Published: 2019-06-18
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: Solaris
Sample /etc/vxfenmode file for non-SCSI-3 fencing
#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
# scsi3      - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# disabled   - run the driver but don't do any actual fencing
#
vxfen_mode=customized
# vxfen_mechanism determines the mechanism for customized I/O 
# fencing that should be used.
# 
# available options:
# cps      - use a coordination point server with optional script
#            controlled scsi3 disks
#
vxfen_mechanism=cps
#
# scsi3_disk_policy determines the way in which I/O fencing
# communicates with the coordination disks. This field is
# required only if customized coordinator disks are being used.
#
# available options:
# dmp - use dynamic multipathing
#
scsi3_disk_policy=dmp
# 
# Seconds for which the winning sub cluster waits to allow for the
# losing subcluster to panic & drain I/Os. Useful in the absence of
# SCSI3 based data disk fencing
# loser_exit_delay=55
#
# Seconds for which the vxfend process waits for a customized fencing
# script to complete. Used only with vxfen_mode=customized
# vxfen_script_timeout=25
#
# vxfen_honor_cp_order determines the order in which vxfen 
# should use the coordination points specified in this file.
#
# available options:
# 0 - vxfen uses a sorted list of the coordination points specified
#     in this file; the order in which they are specified does not
#     matter.
#     (default)
# 1 - vxfen uses the coordination points in the same order they are
#     specified in this file
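#
# For example, to make vxfen honor the order in which the coordination
# points appear below, you could set the following (an illustrative
# line, not part of the original sample file):
#
# vxfen_honor_cp_order=1
#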
# Specify an odd number (three or more) of coordination points in this
# file, each on its own line. They can be all CP servers, all SCSI-3
# compliant coordinator disks, or a combination of CP servers and
# SCSI-3 compliant coordinator disks.
# Please ensure that the CP server coordination points are 
# numbered sequentially and in the same order on all the cluster
# nodes.
#
# Coordination Point Server(CPS) is specified as follows:
#
# 	cps<number>=[<vip/vhn>]:<port>
#
# If a CPS supports multiple virtual IPs or virtual hostnames
# over different subnets, all of the IPs/names can be specified
# in a comma separated list as follows:
#
#	cps<number>=[<vip_1/vhn_1>]:<port_1>,[<vip_2/vhn_2>]:<port_2>,
#	...,[<vip_n/vhn_n>]:<port_n>
#
# Where,
#	<number>
#		is the serial number of the CPS as a coordination point; must
#		start with 1.
#	<vip>
#		is the virtual IP address of the CPS; it must be enclosed
#		in square brackets ("[]").
#	<vhn>
#		is the virtual hostname of the CPS; it must be enclosed
#		in square brackets ("[]").
#	<port>
#		is the port number bound to a particular <vip/vhn> of the CPS.
#		Specifying a <port> is optional. If it is specified, it must
#		follow a colon (":") after <vip/vhn>; if it is not specified,
#		there must be no colon (":") after <vip/vhn>.
#
# For all the <vip/vhn>s which do not have a specified <port>,
# a default port can be specified as follows:
#
#	port=<default_port>
#
# 	Where <default_port> is applicable to all the <vip/vhn>s for which a
# 	<port> is not specified. In other words, specifying <port> with a
# 	<vip/vhn> overrides the <default_port> for that <vip/vhn>.
#	If the <default_port> is not specified, and there are <vip/vhn>s for
#	which <port> is not specified, then port number 14250 will be used
#	for such <vip/vhn>s.
#
# Example of specifying CP Servers to be used as coordination points:
#	port=57777
#	cps1=[192.168.0.23],[192.168.0.24]:58888,[cps1.company.com]
# 	cps2=[192.168.0.25]
#	cps3=[cps2.company.com]:59999
#
#	In the above example,
#	- port 58888 will be used for vip [192.168.0.24]
#	- port 59999 will be used for vhn [cps2.company.com], and
#	- default port 57777 will be used for all remaining <vip/vhn>s:
#	   [192.168.0.23]
#	   [cps1.company.com]
#	   [192.168.0.25]
#	- if default port 57777 were not specified, port 14250 would be
#	  used for all remaining <vip/vhn>s:
#	   [192.168.0.23]
#	   [cps1.company.com]
#	   [192.168.0.25]
#
# SCSI-3 compliant coordinator disks are specified as:
#	  
# 	vxfendg=<coordinator disk group name>
#	Example:
#		vxfendg=vxfencoorddg
#
# Examples of different configurations:
# 	1. All CP server coordination points
#	cps1=
#	cps2=
#	cps3=
#
#	2. A combination of CP server and a disk group having two SCSI-3 
#	coordinator disks 
#	cps1=
#	vxfendg=
#	Note: The disk group specified in this case should have two disks
#
#	3. All SCSI-3 coordinator disks
#	vxfendg=
#	Note: The disk group specified in this case should have three disks
cps1=[cps1.company.com]
cps2=[cps2.company.com]
cps3=[cps3.company.com]
port=443
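
After you update /etc/vxfenmode on each cluster node and restart fencing, you can confirm that the module came up in the expected mode. A minimal check, assuming the standard VCS utilities are in your PATH (the exact output format varies by release):

# vxfenadm -d

The fencing mode reported in the output should be Customized with the cps mechanism, matching the vxfen_mode and vxfen_mechanism settings in the sample file above.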