Storage Foundation for Oracle® RAC 8.0.2 Configuration and Upgrade Guide - AIX
Configuring VCS service groups manually for container Oracle databases
This section describes the steps to configure the VCS service group manually for container Oracle databases.
See Figure: Service group configuration with the VCS Oracle agent.
The following procedure assumes that you have created the database.
To configure the Oracle service group manually for container Oracle databases
- Change the cluster configuration to read-write mode:
# haconf -makerw
- Add the service group to the VCS configuration:
# hagrp -add oradb_grpname
- Modify the attributes of the service group:
# hagrp -modify oradb_grpname Parallel 1
# hagrp -modify oradb_grpname SystemList node_name1 0 node_name2 1
# hagrp -modify oradb_grpname AutoStartList node_name1 node_name2
- Add the CVMVolDg resource for the service group:
# hares -add oradbdg_resname CVMVolDg oradb_grpname
- Modify the attributes of the CVMVolDg resource for the service group:
# hares -modify oradbdg_resname CVMDiskGroup oradb_dgname
# hares -modify oradbdg_resname CVMActivation sw
# hares -modify oradbdg_resname CVMVolume oradb_volname
- Add the CFSMount resource for the service group:
# hares -add oradbmnt_resname CFSMount oradb_grpname
- Modify the attributes of the CFSMount resource for the service group:
# hares -modify oradbmnt_resname MountPoint "oradb_mnt"
# hares -modify oradbmnt_resname BlockDevice \
"/dev/vx/dsk/oradb_dgname/oradb_volname"
- Add the container and pluggable Oracle RAC resources to the service group:
# hares -add cdb_resname Oracle oradb_grpname
# hares -add pdb_resname Oracle oradb_grpname
- Modify the attributes of the container and pluggable Oracle resources for the service group:
For container Oracle resource:
# hares -modify cdb_resname Owner oracle
# hares -modify cdb_resname Home "db_home"
# hares -modify cdb_resname StartUpOpt SRVCTLSTART
# hares -modify cdb_resname ShutDownOpt SRVCTLSTOP
For pluggable Oracle resource:
# hares -modify pdb_resname Owner oracle
# hares -modify pdb_resname Home "db_home"
# hares -modify pdb_resname StartUpOpt STARTUP
# hares -modify pdb_resname ShutDownOpt IMMEDIATE
For container databases that are administrator-managed, perform the following steps:
Localize the Sid attribute for the container Oracle resource:
# hares -local cdb_resname Sid
Set the Sid attributes for the container Oracle resource on each system:
# hares -modify cdb_resname Sid oradb_sid_node1 -sys node_name1
# hares -modify cdb_resname Sid oradb_sid_node2 -sys node_name2
For pluggable databases that reside in administrator-managed container databases, perform the following steps:
Localize the Sid attribute for the pluggable Oracle resource:
# hares -local pdb_resname Sid
Set the Sid attributes for the pluggable Oracle resource on each system:
# hares -modify pdb_resname Sid oradb_sid_node1 -sys node_name1
# hares -modify pdb_resname Sid oradb_sid_node2 -sys node_name2
Set the PDBName attribute for the pluggable Oracle database:
# hares -modify pdb_resname PDBName pdbname
For container databases that are policy-managed, perform the following steps:
Modify the attributes of the container Oracle resource for the service group:
# hares -modify cdb_resname DBName db_name
# hares -modify cdb_resname ManagedBy POLICY
Set the Sid attribute to the Sid prefix for the container Oracle resource on all systems:
# hares -modify cdb_resname Sid oradb_sid_prefix
Note:
The Sid prefix is displayed on the confirmation page during database creation. The prefix can also be determined by running the following command:
# grid_home/bin/crsctl status resource ora.db_name.db -f | grep GEN_USR_ORA_INST_NAME@ | tail -1 | sed 's/.*=//' | sed 's/_[0-9]$//'
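For example, if the last matching line of the crsctl output is (hypothetically):
GEN_USR_ORA_INST_NAME@SERVERNAME(node_name2)=oradb_2
the first sed expression strips everything up to and including the equal sign, and the second strips the trailing instance suffix _2, leaving the Sid prefix oradb.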
Set the IntentionalOffline attribute for the resource to 1 and make sure that the health check monitoring is disabled:
# hares -override cdb_resname IntentionalOffline
# hares -modify cdb_resname IntentionalOffline 1
# hares -modify cdb_resname MonitorOption 0
For pluggable databases that reside in policy-managed container databases, perform the following steps:
Modify the attributes of the pluggable Oracle resource for the service group:
# hares -modify pdb_resname DBName db_name
# hares -modify pdb_resname ManagedBy POLICY
Set the Sid attribute to the Sid prefix for the pluggable Oracle resource on all systems:
# hares -modify pdb_resname Sid oradb_sid_prefix
Note:
The Sid prefix is displayed on the confirmation page during database creation. The prefix can also be determined by running the following command:
# grid_home/bin/crsctl status resource ora.db_name.db -f | grep GEN_USR_ORA_INST_NAME@ | tail -1 | sed 's/.*=//' | sed 's/_[0-9]$//'
Set the IntentionalOffline attribute for the resource to 1 and make sure that the health check monitoring is disabled:
# hares -override pdb_resname IntentionalOffline
# hares -modify pdb_resname IntentionalOffline 1
# hares -modify pdb_resname MonitorOption 0
Set the PDBName attribute for the pluggable Oracle database:
# hares -modify pdb_resname PDBName pdbname
- Set the dependency between the pluggable database resources and the corresponding container database resource:
# hares -link pdb_resname cdb_resname
Repeat this step for each pluggable database resource in a container database.
- Repeat the preceding steps, from adding the container and pluggable Oracle resources through setting the dependencies between them, for each container database.
- Set the dependencies between the CFSMount resource and the CVMVolDg resource for the Oracle service group:
# hares -link oradbmnt_resname oradbdg_resname
- Set the dependencies between the container Oracle resource and the CFSMount resource for the Oracle service group:
# hares -link cdb_resname oradbmnt_resname
- Create an online local firm dependency between the oradb_grpname service group and the cvm_grpname service group:
# hagrp -link oradb_grpname cvm_grpname online local firm
- Enable the Oracle service group:
# hagrp -enableresources oradb_grpname
- Save the cluster configuration and change it to read-only mode:
# haconf -dump -makero
- Bring the Oracle service group online on all the nodes:
# hagrp -online oradb_grpname -any
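To confirm that the service group and its resources came online as expected, you can, for example, check their states (using the placeholder names from this procedure):
# hagrp -state oradb_grpname
# hares -state cdb_resname
# hares -state pdb_resname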
Note:
For policy-managed databases: When VCS starts, or when the administrator attempts to bring the Oracle resource online, the resource remains offline if the server is not part of the server pool that is associated with the database. If Oracle Grid Infrastructure moves the server out of the server pool, Oracle Grid Infrastructure takes the database offline and the Oracle resource moves to the offline state.
For more information and instructions on configuring the service groups using the CLI:
See the Cluster Server Administrator's Guide.
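For illustration, after you run the commands in this procedure, the main.cf file contains a service group definition along the following lines. This is a minimal sketch for an administrator-managed container database, using the placeholder names from this procedure; actual attribute values depend on your environment.

group oradb_grpname (
        SystemList = { node_name1 = 0, node_name2 = 1 }
        AutoStartList = { node_name1, node_name2 }
        Parallel = 1
        )

        CVMVolDg oradbdg_resname (
                CVMDiskGroup = oradb_dgname
                CVMVolume = { oradb_volname }
                CVMActivation = sw
                )

        CFSMount oradbmnt_resname (
                MountPoint = "oradb_mnt"
                BlockDevice = "/dev/vx/dsk/oradb_dgname/oradb_volname"
                )

        Oracle cdb_resname (
                Sid @node_name1 = oradb_sid_node1
                Sid @node_name2 = oradb_sid_node2
                Owner = oracle
                Home = "db_home"
                StartUpOpt = SRVCTLSTART
                ShutDownOpt = SRVCTLSTOP
                )

        Oracle pdb_resname (
                Sid @node_name1 = oradb_sid_node1
                Sid @node_name2 = oradb_sid_node2
                PDBName = pdbname
                Owner = oracle
                Home = "db_home"
                StartUpOpt = STARTUP
                ShutDownOpt = IMMEDIATE
                )

        requires group cvm_grpname online local firm
        pdb_resname requires cdb_resname
        cdb_resname requires oradbmnt_resname
        oradbmnt_resname requires oradbdg_resname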