Storage Foundation for Oracle® RAC 7.4.1 Configuration and Upgrade Guide - Solaris
Creating Oracle Clusterware/Grid Infrastructure and Oracle database home directories manually
You can create the Oracle Clusterware/Grid Infrastructure and Oracle database home directories on a local file system (native or Veritas File System) or on a Veritas cluster file system. When the installer prompts for the home directories during the installation of Oracle Clusterware/Grid Infrastructure and the Oracle database, it creates the directories locally on each node if they do not already exist.
Note:
Veritas recommends that the Oracle Clusterware and Oracle database binaries be installed locally on each node in the cluster. Oracle requires that the Oracle Grid Infrastructure binaries be installed on a local file system only. Refer to the Oracle documentation for size requirements.
Table: List of directories lists the Oracle RAC directories you need to create:
Table: List of directories

| Directory | Description |
|---|---|
| Oracle Grid Infrastructure home directory (GRID_HOME) (for Oracle RAC 11g Release 2 and later versions) | The path to the home directory that stores the Oracle Grid Infrastructure binaries. The Oracle Universal Installer (OUI) installs Oracle Grid Infrastructure and Oracle ASM into this directory, also referred to as GRID_HOME. The directory must be owned by the installation owner of Oracle Grid Infrastructure (oracle or grid), with the permissions set to 755. The path to the Grid home directory must be the same on all nodes. Follow the Oracle Optimal Flexible Architecture (OFA) guidelines when you choose the path. |
| Oracle base directory (ORACLE_BASE) | The base directory that contains all the Oracle installations. For Oracle RAC 11g Release 2 and later versions, create separate Oracle base directories for the grid user and the Oracle user. It is recommended that installations of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration. The path to the Oracle base directory must be the same on all nodes. The permissions on the Oracle base directory must be at least 755. |
| Oracle home directory (ORACLE_HOME) | The directory in which the Oracle database software is installed. The path to the Oracle home directory must be the same on all nodes. The permissions on the Oracle home directory must be at least 755. |
Use one of the following options to create the directories:
| Option | Procedure |
|---|---|
| Local file system | See “To create the directories on the local file system”. |
| Cluster file system | See “To create the file system and directories on cluster file system for Oracle database”. |
To create the directories on the local file system
- Log in as the root user on each node.
- Create a local file system and mount it using one of the following methods:

Using native operating system commands
For instructions, see the operating system documentation.

Using Veritas File System (VxFS) commands
As the root user, create a local VxVM disk group on each node:

# vxdg init vxvm_dg device_name

Create separate volumes for the Oracle Clusterware/Oracle Grid Infrastructure binaries and the Oracle database binaries:

# vxassist -g vxvm_dg make clus_volname size
# vxassist -g vxvm_dg make ora_volname size

Create the file systems on the volumes:

# mkfs -F vxfs /dev/vx/rdsk/vxvm_dg/clus_volname
# mkfs -F vxfs /dev/vx/rdsk/vxvm_dg/ora_volname

Mount the file systems:

# mount -F vxfs /dev/vx/dsk/vxvm_dg/clus_volname clus_home
# mount -F vxfs /dev/vx/dsk/vxvm_dg/ora_volname oracle_home
- Create the directories for Oracle RAC.
# mkdir -p grid_base
# mkdir -p clus_home
# mkdir -p oracle_base
# mkdir -p oracle_home
- Set appropriate ownership and permissions for the directories.
# chown -R grid:oinstall grid_base
# chmod -R 775 grid_base
# chown -R grid:oinstall clus_home
# chmod -R 775 clus_home
# chown -R oracle:oinstall oracle_base
# chmod -R 775 oracle_base
# chown -R oracle:oinstall oracle_home
# chmod -R 775 oracle_home
- Add the resources to the VCS configuration.
See “To add the storage resources created on VxFS to the VCS configuration”.
- Repeat all the steps on each node of the cluster.
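The local file system steps above can be collected into a small script for review before running. The following is a minimal sketch, not a command sequence from this guide: all of the names in it (the bindg disk group, the c1t1d0 device, the volume names, sizes, and mount paths) are hypothetical examples to substitute with your own values. As written, the script only prints each command (a dry run); change the `run` helper to execute them as root.

```shell
#!/bin/sh
# Dry-run sketch of the local-file-system procedure.
# All names below are examples -- substitute real values before executing.
DG=bindg                 # local VxVM disk group (example name)
DEV=c1t1d0               # disk device to initialize into the group (example)
CLUS_VOL=crsbinvol       # volume for Clusterware/Grid Infrastructure binaries
ORA_VOL=orabinvol        # volume for Oracle database binaries
CLUS_HOME=/u01/app/grid                   # example Grid home mount point
ORA_HOME=/u01/app/oracle/product/dbhome   # example Oracle home mount point

run() { echo "# $*"; }   # dry run: print only; change body to "$@" to execute

run vxdg init $DG $DEV
run vxassist -g $DG make $CLUS_VOL 10g
run vxassist -g $DG make $ORA_VOL 10g
run mkfs -F vxfs /dev/vx/rdsk/$DG/$CLUS_VOL
run mkfs -F vxfs /dev/vx/rdsk/$DG/$ORA_VOL
run mkdir -p $CLUS_HOME $ORA_HOME
run mount -F vxfs /dev/vx/dsk/$DG/$CLUS_VOL $CLUS_HOME
run mount -F vxfs /dev/vx/dsk/$DG/$ORA_VOL $ORA_HOME
run chown -R grid:oinstall $CLUS_HOME
run chmod -R 775 $CLUS_HOME
run chown -R oracle:oinstall $ORA_HOME
run chmod -R 775 $ORA_HOME
```

Because the script must run on every node, reviewing the printed sequence once before execution helps catch a wrong device or path before it is repeated across the cluster.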
To add the storage resources created on VxFS to the VCS configuration
- Make the VCS configuration writable:
# haconf -makerw
- Configure the VxVM volumes under VCS:
# hares -add dg_resname DiskGroup cvm
# hares -modify dg_resname DiskGroup vxvm_dg -sys node_name
# hares -modify dg_resname Enabled 1
- Set up the file system under VCS:
# hares -add clusbin_mnt_resname Mount cvm
# hares -modify clusbin_mnt_resname MountPoint "clus_home"
# hares -modify clusbin_mnt_resname BlockDevice "/dev/vx/dsk/vxvm_dg/clus_volname" -sys node_name
# hares -modify clusbin_mnt_resname FSType vxfs
# hares -modify clusbin_mnt_resname FsckOpt "-n"
# hares -modify clusbin_mnt_resname Enabled 1
# hares -add orabin_mnt_resname Mount cvm
# hares -modify orabin_mnt_resname MountPoint "oracle_home"
# hares -modify orabin_mnt_resname BlockDevice "/dev/vx/dsk/vxvm_dg/ora_volname" -sys node_name
# hares -modify orabin_mnt_resname FSType vxfs
# hares -modify orabin_mnt_resname FsckOpt "-n"
# hares -modify orabin_mnt_resname Enabled 1
- Link the parent and child resources:
# hares -link clusbin_mnt_resname dg_resname
# hares -link orabin_mnt_resname dg_resname
- Repeat all the steps on each node of the cluster.
To create the file system and directories on cluster file system for Oracle database
Perform the following steps on the CVM master node in the cluster.
- As the root user, create a VxVM shared disk group:
# vxdg -s init cvm_dg device_name
- Create the volume for Oracle database:
# vxassist -g cvm_dg make ora_volname size
- Create the Oracle base directory and the Oracle home directory.
# mkdir -p oracle_base
# mkdir -p oracle_home
- Create the file system on the volume:
# mkfs -F vxfs /dev/vx/rdsk/cvm_dg/ora_volname
- Mount the file system. Perform this step on each node.
# mount -F vxfs -o cluster /dev/vx/dsk/cvm_dg/ora_volname \ oracle_home
- Change the ownership and permissions on all nodes of the cluster.
# chown -R oracle:oinstall oracle_base
# chmod -R 775 oracle_base
# chown -R oracle:oinstall oracle_home
# chmod -R 775 oracle_home
- Add the CVMVolDg and CFSMount resources to the VCS configuration.
See “To add the CFSMount and CVMVolDg resources to the VCS configuration using CLI”.
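As with the local procedure, the shared-storage steps can be sketched as a reviewable script to run on the CVM master node. This is a minimal sketch under stated assumptions: the names (the oradatadg shared disk group, the c1t2d0 device, the volume name, size, and paths) are hypothetical examples, and the script only prints the commands rather than executing them.

```shell
#!/bin/sh
# Dry-run sketch of the cluster-file-system procedure (CVM master node).
# All names below are examples -- substitute real values before executing.
DG=oradatadg             # shared VxVM disk group (example name)
DEV=c1t2d0               # disk device to initialize into the group (example)
VOL=orabinvol            # volume for the Oracle database binaries
ORA_BASE=/u02/app/oracle                  # example Oracle base directory
ORA_HOME=/u02/app/oracle/product/dbhome   # example Oracle home mount point

run() { echo "# $*"; }   # dry run: print only; change body to "$@" to execute

run vxdg -s init $DG $DEV                 # -s creates a shared disk group
run vxassist -g $DG make $VOL 10g
run mkdir -p $ORA_BASE $ORA_HOME
run mkfs -F vxfs /dev/vx/rdsk/$DG/$VOL
# The mount (with -o cluster) must be repeated on every node:
run mount -F vxfs -o cluster /dev/vx/dsk/$DG/$VOL $ORA_HOME
run chown -R oracle:oinstall $ORA_BASE $ORA_HOME
run chmod -R 775 $ORA_BASE $ORA_HOME
```

The key difference from the local procedure is the `-s` flag on `vxdg init` and the `-o cluster` mount option, which make the disk group and file system cluster-wide instead of node-local.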
To add the CFSMount and CVMVolDg resources to the VCS configuration using CLI
- Make the VCS configuration writable:
# haconf -makerw
- Configure the CVM volumes under VCS:
# hares -add dg_resname CVMVolDg cvm
# hares -modify dg_resname Critical 0
# hares -modify dg_resname CVMDiskGroup cvm_dg
# hares -modify dg_resname CVMVolume -add ora_volname
# hares -modify dg_resname CVMActivation sw
- Set up the file system under VCS:
# hares -add orabin_mnt_resname CFSMount cvm
# hares -modify orabin_mnt_resname Critical 0
# hares -modify orabin_mnt_resname MountPoint "oracle_home"
# hares -modify orabin_mnt_resname BlockDevice "/dev/vx/dsk/cvm_dg/ora_volname"
- Link the parent and child resources:
# hares -link dg_resname cvm_clus
# hares -link orabin_mnt_resname dg_resname
# hares -link orabin_mnt_resname vxfsckd
- Enable the resources:
# hares -modify dg_resname Enabled 1
# hares -modify orabin_mnt_resname Enabled 1
# haconf -dump -makero
- Verify the resource configuration in the main.cf file.
- Verify that the resources are online on all systems in the cluster.
# hares -state dg_resname
# hares -state orabin_mnt_resname
Note:
At this point, the CVMVolDg resource (dg_resname) is reported offline even though the underlying volumes are online. You must therefore bring the resource online manually on each node.
To bring the resource online manually:
# hares -online dg_resname -sys node_name
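The per-node online step can be wrapped in a loop so no node is missed. The sketch below assumes hypothetical node names (sys1, sys2) and keeps the dg_resname placeholder from the procedure; like the earlier sketches, it prints the commands instead of running them.

```shell
#!/bin/sh
# Bring the CVMVolDg resource online on each node, then check its state.
# Node names and the resource name are placeholders -- substitute your own.
RES=dg_resname
NODES="sys1 sys2"        # example node names

run() { echo "# $*"; }   # dry run: print only; change body to "$@" to execute

for node in $NODES; do
    run hares -online $RES -sys $node
    run hares -state $RES -sys $node   # confirm ONLINE before moving on
done
```

Checking the state immediately after each online attempt surfaces a node-local failure right away, instead of discovering it later when the dependent CFSMount resource fails to come up.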