Veritas InfoScale™ 7.3.1 Installation Guide - AIX
- Section I. Introduction to Veritas InfoScale
- Introducing Veritas InfoScale
- Licensing Veritas InfoScale
- Section II. Planning and preparation
- System requirements
- Hardware requirements
- Preparing to install
- Setting up the private network
- Setting up shared storage
- Planning the installation setup for SF Oracle RAC systems
- Planning your network configuration
- Planning the storage
- Planning the storage for Oracle RAC
- System requirements
- Section III. Installation of Veritas InfoScale
- Installing Veritas InfoScale using the installer
- Installing Veritas InfoScale using response files
- Installing Veritas InfoScale using operating system-specific methods
- Installing Veritas InfoScale using NIM and the installer
- Completing the post installation tasks
- Section IV. Uninstallation of Veritas InfoScale
- Uninstalling Veritas InfoScale using the installer
- Uninstalling Veritas InfoScale using response files
- Section V. Installation reference
- Appendix A. Installation scripts
- Appendix B. Tunable files for installation
- Appendix C. Troubleshooting installation issues
Planning the private network configuration for Oracle RAC
Oracle RAC requires a minimum of one private IP address on each node for Oracle Clusterware heartbeat.
For Oracle RAC 11g and later versions, you must use UDP IPC for the database cache fusion traffic.
Veritas recommends using multiple private interconnects for load balancing the cache fusion traffic.
The private IP addresses of all nodes that are on the same physical network must be in the same IP subnet.
The following practices provide a resilient private network setup:
Configure Oracle Clusterware interconnects over LLT links to prevent data corruption.
In a Veritas InfoScale cluster, the Oracle Clusterware heartbeat link MUST be configured as an LLT link. If Oracle Clusterware and LLT use different links for their communication, membership changes between VCS and Oracle Clusterware are not coordinated correctly. For example, if only the Oracle Clusterware links are down, Oracle Clusterware kills one set of nodes after the css-misscount interval expires and initiates Oracle Clusterware and database recovery, even before CVM and CFS detect the node failures. This uncoordinated recovery may cause data corruption.
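As an illustration, a two-link LLT configuration on AIX might resemble the following /etc/llttab; the node name, cluster ID, and the en1/en2 interface names here are placeholders, not values from this guide:

```
set-node sys1
set-cluster 101
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
```

With both private interfaces defined as LLT links, configuring the Oracle Clusterware interconnect over these same interfaces keeps the two membership views aligned.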
Oracle Clusterware interconnects must be protected against NIC failures and link failures. For Oracle RAC 11.2.0.1 and earlier versions, the PrivNIC or MultiPrivNIC agent can be used to protect against NIC failures and link failures, if multiple links are available. Even if link aggregation solutions in the form of bonded NICs are implemented, the PrivNIC or MultiPrivNIC agent can provide additional protection against the failure of the aggregated link by failing over to available alternate links. These alternate links can be simple NIC interfaces or bonded NICs.
An alternative option is to configure the Oracle Clusterware interconnects over bonded NIC interfaces.
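As a sketch only, a MultiPrivNIC resource in the VCS main.cf might look like the following. The system names (sys1, sys2), interface names, and addresses are hypothetical; consult the SF Oracle RAC configuration documentation for the authoritative attribute list:

```
MultiPrivNIC multi_priv (
        Critical = 0
        Device @sys1 = { en1 = 0, en2 = 1 }
        Device @sys2 = { en1 = 0, en2 = 1 }
        Address @sys1 = { "192.168.12.1" = 0, "192.168.13.1" = 1 }
        Address @sys2 = { "192.168.12.2" = 0, "192.168.13.2" = 1 }
        NetMask = "255.255.255.0"
        )
```

Each managed IP address is tied to a link index, so on a link failure the agent can move that address to a surviving LLT link.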
The PrivNIC and MultiPrivNIC agents are no longer supported for managing cluster interconnects in Oracle RAC 11.2.0.2 and later versions.
For 11.2.0.2 and later versions, Veritas recommends the use of alternative solutions such as bonded NIC interfaces or Oracle High Availability IP (HAIP).
Configure Oracle Cache Fusion traffic to take place through the private network. Veritas also recommends that all UDP cache-fusion links be LLT links.
For Oracle RAC 11.2.0.1 and earlier versions, the PrivNIC and MultiPrivNIC agents provide a reliable alternative when operating system limitations prevent you from using NIC bonding to provide high availability and increased bandwidth over multiple network interfaces. In the event of a NIC failure or link failure, the agent fails over the private IP address from the failed link to a connected or available LLT link. To use multiple links for database cache fusion for increased bandwidth, configure the cluster_interconnects initialization parameter with multiple IP addresses for each database instance, and configure those IP addresses under MultiPrivNIC for high availability.
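For example, the cluster_interconnects initialization parameter can be set per instance from SQL*Plus along the following lines; the addresses and the db1 instance name are placeholders for your own private IP plan:

```
ALTER SYSTEM SET cluster_interconnects = '192.168.12.1:192.168.13.1'
    SCOPE=SPFILE SID='db1';
```

Multiple addresses are colon-separated, and the setting takes effect on the next instance restart because it is stored in the SPFILE.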
Oracle database clients use the public network for database services. When a node or network failure occurs, the client fails over existing and new connections to a surviving node in the cluster that it can reach. Client failover occurs as a result of Oracle Fast Application Notification, VIP failover, and client connection TCP timeout. Veritas strongly recommends that you do not send Oracle Cache Fusion traffic through the public network.
Use NIC bonding to provide redundancy for public networks so that Oracle RAC can fail over virtual IP addresses if there is a public link failure.