Storage Foundation for Oracle® RAC 8.0.2 Configuration and Upgrade Guide - Linux
Performing rolling upgrades in CVR environments
In a CVR environment, you perform the rolling upgrade on the secondary site first and then on the primary site.
To perform a rolling upgrade in a CVR environment
- Perform a rolling upgrade on the secondary sites first, without stopping the replication.
Verify that the cluster nodes have rejoined the cluster after the upgrade; a verification sketch follows this step.
Continue to monitor the replication status:
# vradmin -g <disk_group_name> -l repstatus <RVG_name>
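For example, a minimal verification sketch using the standard GAB and VCS status commands (general-purpose checks, not specific to CVR):
# /sbin/gabconfig -a
# /opt/VRTS/bin/hastatus -sum
Confirm that all upgraded nodes appear in the GAB port memberships and that the systems show a RUNNING state before you continue.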
- Perform a rolling upgrade for all the nodes on the primary site by following these steps:
At the primary site, the VxFS file systems are always mounted. Therefore, if you directly begin the upgrade, the installer returns the following error:
CPI ERROR V-9-40-1480 Some VxFS file systems are mounted on mount points /<volume_mount_point> on node <node_name> and need to be unmounted before upgrade.
In this case, take the applications offline and unmount the file systems manually on the indicated nodes before you upgrade them.
Additionally, you may have to take the following actions on these nodes:
If any parallel service groups exist in the hierarchy, take them offline.
# /opt/VRTS/bin/hagrp -offline <global_application_name> -sys <node_name>
# /opt/VRTS/bin/cfsumount /<volume_mount_point> <node_name>
Make sure that CVM is the final child in the hierarchy, that it remains online, and that the rest of the hierarchy is taken offline on that node.
If any parent service groups exist in the hierarchy, take them offline.
# /opt/VRTS/bin/cfsumount /<volume_mount_point> <node_name>
# /opt/VRTS/bin/hagrp -dep cvm
Parent                      Child   Relationship
<global_application_name>   cvm     online local firm
Here, <global_application_name> is the parent of the cvm service group; take it offline.
# /opt/VRTS/bin/hagrp -offline <global_application_name> -sys <node_name>
If any local service groups exist in the hierarchy, switch them to another cluster node that is not being upgraded at the same time, for example as shown below.
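A sketch of switching a failover group to a peer node (the group and node names are placeholders; choose a node that is not part of the current upgrade phase):
# /opt/VRTS/bin/hagrp -switch <local_service_group> -to <other_node_name>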
After you upgrade these nodes, mount the file system again and bring the VCS application online on the targeted node.
# /opt/VRTS/bin/cfsmount /<volume_mount_point> <node_name>
# /opt/VRTS/bin/hagrp -online <global_application_name> -sys <node_name>
You may also have to take the following actions on these upgraded nodes (see the sketch after this list):
Bring the parallel service groups online.
Switch the local VCS service groups back.
Make sure that all the service groups that were taken offline before the upgrade are brought online again.
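A minimal sketch of these actions (the group and node names are placeholders for the parallel and failover groups that you took offline or switched away earlier):
# /opt/VRTS/bin/hagrp -online <parallel_service_group> -sys <node_name>
# /opt/VRTS/bin/hagrp -switch <local_service_group> -to <node_name>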
Check the replication status and the system and service group states, and only then proceed to upgrade the next node.
- Check the replication status on the primary site.
- Migrate the primary role to the upgraded secondary site, and then proceed with the upgrade on the new secondary site (the old primary). A migration sketch follows this step.
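A sketch of the role migration with vradmin (the disk group, RVG, and host names are placeholders; run the command only when replication is up to date):
# vradmin -g <disk_group_name> migrate <RVG_name> <new_primary_host_name>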
To upgrade disk group and disk layout versions on replication hosts
- Upgrade the disk group version on all the Secondaries for all the disk groups. (If there are many disk groups, see the loop sketch after this procedure.)
# /usr/sbin/vxdg upgrade <disk_group_name>
- Upgrade the disk group version on the Primary for all the disk groups.
# /usr/sbin/vxdg upgrade <disk_group_name>
- Upgrade the disk layout version (DLV) on the Primary for all the VxFS file systems.
# /opt/VRTS/bin/vxupgrade -n 17 <vxfs_mount_point_name>
Verify the new disk layout version of each file system:
# /opt/VRTS/bin/fstyp -v <disk_path_for_mount_point_volume>
The DLV upgrade is automatically replicated to the Secondaries.
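If there are many disk groups, a small shell loop such as the following sketch can upgrade them in one pass. It assumes that every imported disk group on the host should be upgraded; review the output of vxdg list before you run it:
# for dg in $(/usr/sbin/vxdg list | awk 'NR>1 {print $1}'); do /usr/sbin/vxdg upgrade $dg; done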