Cluster Server 7.4 Agent for Oracle Installation and Configuration Guide - Linux
- Introducing the Cluster Server agent for Oracle
- About the Cluster Server agent for Oracle
- How the agent makes Oracle highly available
- About Cluster Server agent functions for Oracle
- Oracle agent functions
- How the Oracle agent supports health check monitoring
- ASMInst agent functions
- Installing and configuring Oracle
- About VCS requirements for installing Oracle
- About Oracle installation tasks for VCS
- Installing ASM binaries for Oracle 11gR2 or 12c in a VCS environment
- Configuring Oracle ASM on the first node of the cluster
- Installing Oracle binaries on the first node of the cluster
- Installing and removing the agent for Oracle
- Configuring VCS service groups for Oracle
- Configuring Oracle instances in VCS
- Before you configure the VCS service group for Oracle
- Configuring the VCS service group for Oracle
- Setting up detail monitoring for VCS agents for Oracle
- Enabling and disabling intelligent resource monitoring for agents manually
- Configuring VCS service groups for Oracle using the Veritas High Availability Configuration wizard
- Understanding service group configurations
- Understanding configuration scenarios
- Troubleshooting
- Sample configurations
- Administering VCS service groups for Oracle
- Pluggable database (PDB) migration
- Troubleshooting Cluster Server agent for Oracle
- Verifying the Oracle health check binaries and intentional offline for an instance of Oracle
- Appendix A. Resource type definitions
- Resource type definition for the Oracle agent
- Resource type definition for the Netlsnr agent
- Resource type definition for the ASMInst agent
- Resource type definition for the ASMDG agent
- Appendix B. Sample configurations
- Sample single Oracle instance configuration
- Sample multiple Oracle instances (single listener) configuration
- Sample multiple instance (multiple listeners) configuration
- Sample Oracle configuration with shared server support
- Sample Oracle ASM configurations
- Appendix C. Best practices
- Appendix D. Using the SPFILE in a VCS cluster for Oracle
- Appendix E. OHASD in a single instance database environment
Before configuring application monitoring
Note the following points before configuring application monitoring on a virtual machine:
All the Oracle and Net Listener instances that you want to configure must be running on the system from which the High Availability Configuration wizard is invoked.
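For example, before you invoke the wizard, you can confirm on the local system that each database instance and the listener are running. This is only an illustrative manual check; run the lsnrctl command as the Oracle user.

    # ps -ef | grep ora_pmon | grep -v grep
    $ lsnrctl status

Each running Oracle instance has a PMON background process named ora_pmon_<SID>, so the ps output should show one entry for every instance that you plan to configure. Without an argument, lsnrctl status reports on the default listener named LISTENER; append a listener name to check a different listener.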
The wizard discovers the disks that are attached and the storage that is currently available. Ensure that the shared storage used by the application is available before you invoke the wizard: all the required disks must be attached and all the storage components must be available.
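As an illustrative pre-check (not a required procedure), you can list the attached disks and, if Veritas Volume Manager manages the shared storage, the disk groups that are visible to the system:

    # lsblk
    # vxdisk -o alldgs list

The vxdisk command applies only if VxVM is in use; for other storage stacks, verify availability with the corresponding operating system tools.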
The Oracle Home directory owner must exist on all the failover nodes.
The Oracle UID must be the same across all the nodes in the cluster.
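For example, run the id command for the Oracle Home directory owner on every node and compare the numeric IDs. The user name oracle and the output shown are placeholders for illustration.

    # id oracle
    uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba)

The uid value must be identical on all the cluster nodes.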
If the Oracle Database is installed on local disks, the Oracle Home directory must exist on all the failover targets.
If the Oracle Database is installed on shared disks, then the corresponding mount point must be selected when you configure the Oracle instance using the High Availability Configuration wizard.
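To check whether an Oracle Home resides on local or on shared storage, you can look at the file system that it is mounted from. This is only a quick illustrative check, run as the Oracle user with ORACLE_HOME set:

    $ df -h $ORACLE_HOME

If the file system shown corresponds to a mount point on a shared disk, select that mount point in the wizard when you configure the Oracle instance.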
You must not restore a snapshot on a virtual machine where the application is currently online, if the snapshot was taken when the application was offline on that virtual machine. Doing so may cause an unwanted failover. The reverse also applies: do not restore a snapshot that was taken while the application was online on a virtual machine where the application is currently offline. Doing so may lead to a misconfiguration where the application is online on multiple systems simultaneously.
While creating a VCS cluster in a virtual environment, you must configure a cluster communication link over a public network in addition to the links over the private adapters, and assign the link that uses the public adapter as a low-priority link. This protects the cluster if the private network adapters fail: without the low-priority link, the systems would be unable to communicate with each other, each would consider that the other system has faulted, and each would try to gain access to the disks, thereby leading to an application fault.
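A minimal /etc/llttab sketch with a low-priority link over the public adapter is shown below. The node name, cluster ID, device names, and MAC addresses are placeholders; use the values that are appropriate for your environment.

    set-node sys1
    set-cluster 100
    link eth1 eth-00:11:22:33:44:01 - ether - -
    link eth2 eth-00:11:22:33:44:02 - ether - -
    link-lowpri eth0 eth-00:11:22:33:44:03 - ether - -

The link directives define the private (high-priority) links, and the link-lowpri directive defines the low-priority link over the public adapter.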
You must not select teamed network adapters for cluster communication. If your configuration contains teamed network adapters, the wizard groups them as "NIC Group #N", where "N" is a number assigned to the teamed network adapters. A teamed network adapter is a logical NIC that is formed by grouping several physical NICs together. Because all NICs in a team have an identical MAC address, you may experience the following issues (one way to check for teamed adapters follows this list):
- SSO configuration failure.
- The wizard may fail to discover the specified network adapters.
- The wizard may fail to discover or validate the specified system name.
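On a Linux guest, teamed adapters are typically configured with the bonding or team drivers. As an illustrative check, you can list such interfaces with the ip command; empty output means no bonded or teamed interfaces are configured.

    # ip link show type bond
    # ip link show type team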
Verify that the boot sequence of the virtual machine is such that the boot disk (OS hard disk) is placed before the removable disks. If the sequence places the removable disks before the boot disk, the virtual machine may not reboot after an application failover. The reboot may halt with an "OS not found" error. This issue occurs because, during the application failover, the removable disks are detached from the current virtual machine and are attached to the failover target system.
Verify that the disks used by the application that you want to monitor are attached to non-shared controllers so that they can be deported from the system and imported to another system.
If multiple types of SCSI controllers are attached to the virtual machines, then storage dependencies of the application cannot be determined and configured.
The term 'shared storage' refers to the removable disks attached to the virtual machine. It does not refer to disks attached to the shared controllers of the virtual machine.
If you want to configure the storage dependencies of the application through the wizard, the LVM volumes or VxVM volumes used by the application should not be mounted on more than one mount point path.
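As an illustrative check, you can list where a particular volume is mounted; the device path below is a placeholder for a VxVM volume (use the corresponding /dev/<vg>/<lv> path for an LVM volume). The volume should appear under a single mount point only.

    # findmnt /dev/vx/dsk/oradatadg/oradatavol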
The host name of the system must be resolvable through the DNS server or, locally, using /etc/hosts file entries.
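For example, you can verify resolution from both sources; the host name sys1 is a placeholder. The getent command resolves the name through the configured sources (including /etc/hosts), and nslookup queries the DNS server directly.

    # getent hosts sys1
    # nslookup sys1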
By default, the controller ID and port must remain the same on all cluster nodes. If you do not want the resource to have the same controller ID and port, you should localize the attribute for all cluster nodes. Localization allows all cluster nodes to have different controller IDs and port numbers. For more information about localizing an attribute, refer to the Cluster Server Administrator's Guide.
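The following is a general sketch of attribute localization using VCS commands. The resource name, attribute name, and values are placeholders; substitute the actual resource and attribute that you want to localize.

    # haconf -makerw
    # hares -local <resource_name> <attribute_name>
    # hares -modify <resource_name> <attribute_name> <value_for_sys1> -sys sys1
    # hares -modify <resource_name> <attribute_name> <value_for_sys2> -sys sys2
    # haconf -dump -makero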