Cluster Server 8.0.2 Agent for Oracle Installation and Configuration Guide - Linux
- Introducing the Cluster Server agent for Oracle
- About the Cluster Server agent for Oracle
- About the agent for Oracle ASM
- Supported software for VCS agent for Oracle
- How the agent makes Oracle highly available
- About Cluster Server agent functions for Oracle
- Oracle agent functions
- Startup and shutdown options for the Oracle agent
- Monitor options for the Oracle agent in traditional database and container database
- Startup and shutdown options for the pluggable database (PDB)
- Monitor for the pluggable database
- Recommended startup modes for pluggable database (PDB) based on container database (CDB) startup modes
- How the agent handles Oracle error codes during detail monitoring
- Info entry point for Cluster Server agent for Oracle
- Action entry point for Cluster Server agent for Oracle
- How the Oracle agent supports health check monitoring
- Netlsnr agent functions
- ASMInst agent functions
- ASMDG agent functions
- Typical Oracle configuration in a VCS cluster
- About setting up Oracle in a VCS cluster
- Installing and configuring Oracle
- About installing Oracle in a VCS environment
- Before you install Oracle in a VCS environment
- About VCS requirements for installing Oracle
- About Oracle installation tasks for VCS
- Installing ASM binaries in a VCS environment
- Configuring Oracle ASM on the first node of the cluster
- Configuring and starting up ASM on remaining nodes
- Installing Oracle binaries on the first node of the cluster
- Configuring the Oracle database
- Copying the $ORACLE_BASE/admin/SID directory
- Copying the Oracle ASM initialization parameter file
- Verifying access to the Oracle database
- Installing and removing the agent for Oracle
- Configuring VCS service groups for Oracle
- About configuring a service group for Oracle
- Configuring Oracle instances in VCS
- Before you configure the VCS service group for Oracle
- Configuring the VCS service group for Oracle
- Configuring VCS service groups for Oracle using the Veritas High Availability Configuration wizard
- Typical VCS cluster configuration in a virtual environment
- About configuring application monitoring using the Veritas High Availability solution for VMware
- Getting ready to configure VCS service groups using the wizard
- Before configuring application monitoring
- Launching the Veritas High Availability Configuration wizard
- Configuring the agent to monitor Oracle
- Understanding service group configurations
- Understanding configuration scenarios
- Veritas High Availability Configuration wizard limitations
- Troubleshooting
- Sample configurations
- Administering VCS service groups for Oracle
- Pluggable database (PDB) migration
- Troubleshooting Cluster Server agent for Oracle
- About troubleshooting Cluster Server agent for Oracle
- Error messages common to the Oracle and Netlsnr agents
- Error messages specific to the Oracle agent
- Error messages specific to the Netlsnr agent
- Error messages specific to the ASMInst agent
- Error messages specific to the ASMDG agent
- Troubleshooting issues specific to Oracle in a VCS environment
- Verifying the Oracle health check binaries and intentional offline for an instance of Oracle
- Disabling IMF for a PDB resource
- Appendix A. Resource type definitions
- About the resource type and attribute definitions
- Resource type definition for the Oracle agent
- Resource type definition for the Netlsnr agent
- Resource type definition for the ASMInst agent
- Resource type definition for the ASMDG agent
- Appendix B. Sample configurations
- About the sample configurations for Oracle enterprise agent
- Sample single Oracle instance configuration
- Sample multiple Oracle instances (single listener) configuration
- Sample multiple instance (multiple listeners) configuration
- Sample Oracle configuration with shared server support
- Sample Oracle ASM configurations
- Sample configuration of Oracle pluggable database (PDB) resource in main.cf
- Sample configuration of migratable Oracle pluggable database (PDB) resource in main.cf
- Sample Configuration of Oracle supported by systemD
- Sample configuration of ASMInst supported by systemD
- Appendix C. Best practices
- Appendix D. Using the SPFILE in a VCS cluster for Oracle
- Appendix E. OHASD in a single instance database environment
Before configuring application monitoring
Note the following points before configuring application monitoring on a virtual machine:
All the Oracle and Net Listener instances that you want to configure must be running on the system from which you invoke the Veritas High Availability Configuration wizard.
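For example, before you launch the wizard you can quickly confirm that the instances and listeners are up. The SID (orcl) and listener name (LISTENER) in this sketch are placeholders; substitute your own names and run the commands as the Oracle software owner with the Oracle environment (ORACLE_HOME, PATH) set.

    # Confirm that the database instance background processes are running.
    # 'orcl' is an example SID; replace it with your own.
    ps -ef | grep '[o]ra_pmon_orcl'

    # Confirm that the listener is running; 'LISTENER' is an example name.
    lsnrctl status LISTENER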
The wizard discovers only the disks that are attached and the storage that is currently available. Ensure that the shared storage that the application uses is available before you invoke the wizard.
All the required disks must be attached, and all the storage components must be available.
The Oracle Home directory owner must exist on all the failover nodes.
The Oracle UID must be the same across all the nodes in the cluster.
If the Oracle Database is installed on local disks, the Oracle Home directory must exist on all the failover targets.
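As an informal check of the three preceding points, you can compare the Oracle user and the Oracle Home directory across the cluster nodes. The node names (sys1, sys2), the user name (oracle), and the Oracle Home path in this sketch are assumptions; substitute the values from your environment.

    # Hypothetical node names, user, and Oracle Home path; adjust as needed.
    for node in sys1 sys2; do
        echo "--- $node ---"
        # Owner exists on the node, and the UID matches the other nodes?
        ssh "$node" id oracle
        # Oracle Home directory present on the failover target?
        ssh "$node" ls -d /u01/app/oracle/product/19.0.0/dbhome_1
    done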
If the Oracle Database is installed on shared disks, then the corresponding mount point must be selected when you configure the Oracle instance using the Veritas High Availability Configuration wizard.
Do not restore a snapshot on a virtual machine where the application is currently online if the snapshot was taken while the application was offline on that virtual machine. Doing so may cause an unwanted failover. The reverse also applies: do not restore a snapshot that was taken while the application was online onto a virtual machine where the application is currently offline. Doing so may result in a misconfiguration where the application is online on multiple systems simultaneously.
When you create a VCS cluster in a virtual environment, configure a cluster communication link over the public network in addition to the private adapters, and assign the link that uses the public adapter as a low-priority link. This link maintains cluster communication if the private network adapters fail; without it, the systems cannot reach each other, each system considers the other faulted, and each tries to gain access to the disks, which leads to an application fault.
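For illustration only, a low-priority link over the public network is normally defined in /etc/llttab with the link-lowpri directive; the installer or wizard typically generates this file for you. The node name, cluster ID, and interface names in this excerpt (eth1 and eth2 as private links, eth0 as the public adapter) are assumptions.

    set-node sys1
    set-cluster 7
    link eth1 eth1 - ether - -
    link eth2 eth2 - ether - -
    link-lowpri eth0 eth0 - ether - -

The link-lowpri link carries only heartbeat traffic at a reduced rate; LLT uses it for regular cluster communication only if the high-priority links fail.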
You must not select teamed network adapters for cluster communication. A teamed network adapter is a logical NIC formed by grouping several physical NICs together; all NICs in a team share an identical MAC address. If your configuration contains teamed network adapters, the wizard groups them as "NIC Group #N", where "N" is a number assigned to the team. Because the NICs share a MAC address, you may experience the following issues:
- SSO configuration failure.
- The wizard may fail to discover the specified network adapters.
- The wizard may fail to discover or validate the specified system name.
Verify that the boot sequence of the virtual machine places the boot disk (OS hard disk) before the removable disks. If the removable disks precede the boot disk, the virtual machine may not reboot after an application failover, and the reboot may halt with an "OS not found" error. This issue occurs because, during the application failover, the removable disks are detached from the current virtual machine and attached to the failover target system.
Verify that the disks used by the application that you want to monitor are attached to non-shared controllers so that they can be deported from the system and imported to another system.
If multiple types of SCSI controllers are attached to the virtual machines, then storage dependencies of the application cannot be determined and configured.
The term 'shared storage' refers to the removable disks attached to the virtual machine. It does not refer to disks attached to the shared controllers of the virtual machine.
If you want to configure the storage dependencies of the application through the wizard, the LVM volumes or VxVM volumes used by the application should not be mounted on more than one mount point path.
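For example, you can confirm that a volume appears at a single mount point path before you run the wizard. The device name below is a hypothetical LVM volume; a VxVM volume path such as /dev/vx/dsk/<diskgroup>/<volume> can be checked the same way.

    # Hypothetical LVM volume; the command lists every mount point that uses it.
    # Exactly one mount point path should be reported.
    findmnt -S /dev/mapper/vg_ora-lv_oradata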
The host name of the system must be resolvable, either through the DNS server or locally through entries in the /etc/hosts file.
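For example, you can verify resolution with getent, which consults DNS and /etc/hosts according to the system's name service configuration. The host name and address below are placeholders.

    # Verify that the host name resolves; 'sys1' is an example name.
    getent hosts sys1
    # If DNS is not available, a local entry in /etc/hosts such as the
    # following (example address) also satisfies the requirement:
    # 10.10.10.11   sys1.example.com   sys1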
By default, the controller ID and port must remain the same on all cluster nodes. If you do not want the resource to have the same controller ID and port, you should localize the attribute for all cluster nodes. Localization allows all cluster nodes to have different controller IDs and port numbers. For more information about localizing an attribute, refer to the Cluster Server Administrator's Guide.
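For illustration, localizing an attribute from the command line generally follows the pattern below. The resource name (ora_vmwaredisks_res), the attribute name (DiskPaths), and the per-node values are assumptions for this sketch; use the resource, attribute, and value format that apply to your configuration, as described in the Cluster Server Administrator's Guide.

    # Make the cluster configuration writable.
    haconf -makerw
    # Localize the attribute so that each node can hold its own value.
    # Resource and attribute names here are examples only.
    hares -local ora_vmwaredisks_res DiskPaths
    # Assign per-node values (illustrative; the value format depends on the agent).
    hares -modify ora_vmwaredisks_res DiskPaths "0:1" -sys sys1
    hares -modify ora_vmwaredisks_res DiskPaths "0:2" -sys sys2
    # Save the configuration and make it read-only again.
    haconf -dump -makero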