InfoScale™ 9.0 Storage Foundation for Oracle® RAC Configuration and Upgrade Guide - Linux
- Section I. Configuring SF Oracle RAC
- Preparing to configure SF Oracle RAC
- Configuring SF Oracle RAC using the script-based installer
- Configuring the SF Oracle RAC components using the script-based installer
- Configuring the SF Oracle RAC cluster
- Configuring SF Oracle RAC in secure mode
- Configuring a secure cluster node by node
- Configuring the SF Oracle RAC cluster
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Configuring the SF Oracle RAC components using the script-based installer
- Performing an automated SF Oracle RAC configuration
- Section II. Post-installation and configuration tasks
- Verifying the installation
- Performing additional post-installation and configuration tasks
- Section III. Upgrade of SF Oracle RAC
- Planning to upgrade SF Oracle RAC
- Performing a full upgrade of SF Oracle RAC using the product installer
- Performing an automated full upgrade of SF Oracle RAC using response files
- Performing a phased upgrade of SF Oracle RAC
- Performing a phased upgrade of SF Oracle RAC from version 7.4.1 and later release
- Performing a rolling upgrade of SF Oracle RAC
- Upgrading SF Oracle RAC using YUM
- Upgrading Volume Replicator
- Performing post-upgrade tasks
- Section IV. Installation of Oracle RAC
- Before installing Oracle RAC
- Preparing to install Oracle RAC using the SF Oracle RAC installer or manually
- Creating users and groups for Oracle RAC
- Creating storage for OCR and voting disk
- Configuring private IP addresses for Oracle RAC
- Installing Oracle RAC
- Performing an automated Oracle RAC installation
- Performing Oracle RAC post-installation tasks
- Configuring the CSSD resource
- Relinking the SF Oracle RAC libraries with Oracle RAC
- Configuring VCS service groups for Oracle RAC
- Upgrading Oracle RAC
- Before installing Oracle RAC
- Section V. Adding and removing nodes
- Adding a node to SF Oracle RAC clusters
- Adding a node to a cluster using the Veritas InfoScale installer
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Configuring server-based fencing on the new node
- Preparing the new node manually for installing Oracle RAC
- Adding a node to the cluster using the SF Oracle RAC response file
- Configuring private IP addresses for Oracle RAC on the new node
- Removing a node from SF Oracle RAC clusters
- Adding a node to SF Oracle RAC clusters
- Section VI. Configuration of disaster recovery environments
- Configuring disaster recovery environments
- Configuring disaster recovery environments
- Section VII. Installation reference
- Appendix A. Installation scripts
- Appendix B. Tunable files for installation
- Appendix C. Sample installation and configuration values
- SF Oracle RAC worksheet
- Appendix D. Configuration files
- Sample configuration files
- Sample configuration files for CP server
- Appendix E. Configuring the secure shell or the remote shell for communications
- Appendix F. Automatic Storage Management
- Appendix G. Creating a test database
- Appendix H. High availability agent information
- About agents
- CVMCluster agent
- CVMVxconfigd agent
- CVMVolDg agent
- CFSMount agent
- CFSfsckd agent
- CSSD agent
- VCS agents for Oracle
- Oracle agent functions
- Resource type definition for the Oracle agent
- Resource type definition for the Netlsnr agent
- Resource type definition for the ASMDG agent
- Oracle agent functions
- CRSResource agent
- Appendix I. SF Oracle RAC deployment scenarios
- Appendix J. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix K. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
About support for InfoScale upgrade using YUM
Starting with version 9.0, InfoScale provides an additional method to upgrade the product using the Yellowdog Updater, Modified (YUM) tool. This method does not require the use of the InfoScale Common Product Installer (CPI). The YUM upgrade method is designed to accommodate minor OS version upgrades as well as application upgrades.
This method employs a single-node reboot to finalize the upgrade process, thus eliminating application downtime and the need to evacuate Cluster Server (VCS) resources, if applicable.
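The per-node flow can be sketched as a staged package update followed by a reboot. This is a minimal sketch, not the documented procedure: the `VRTS*` package glob is an assumption (actual package names and repository setup vary by release), and the function is only defined here, not run.

```shell
# Sketch of a per-node YUM upgrade flow, wrapped in a function so it
# can be reviewed before running. The VRTS* glob is an assumption;
# consult the release notes for the exact repository configuration.
stage_infoscale_upgrade() {
    yum -y update 'VRTS*' || return 1   # stage the new packages in place
    echo "Upgrade staged; reboot this node to finalize"
}

# For a rolling upgrade, run on each node in turn:
# stage_infoscale_upgrade && reboot
```

Until the reboot completes, the node continues to run the previous InfoScale version, as described below.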
Consider the following requirements and limitations before you use YUM to upgrade InfoScale:
The YUM upgrade method is available on the RHEL platform only.
With InfoScale 9.0, the method supports upgrades from InfoScale 8.x to 9.x only.
With InfoScale 9.0.2, the method supports upgrades from InfoScale 7.4.x to 9.x. Specifically, you can upgrade from the 7.4.1.3100, 7.4.1.3300 (additional patch for RHEL 8.7), and 7.4.2.5600 patch levels to 9.x.
The method supports both rolling upgrades and full upgrades.
The method does not support rollback operations such as yum history rollback and yum history undo.
The method is compatible with both YUM and DNF commands. Dandified YUM (DNF) is the successor to YUM and employs a similar command structure.
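Because the two tools share the same update syntax, a script can select whichever is installed. A minimal sketch, assuming the `VRTS*` package glob used above:

```shell
# Prefer DNF when it is installed, otherwise fall back to YUM.
# Both tools accept the same `update` syntax on RHEL.
if command -v dnf >/dev/null 2>&1; then
    PKG_TOOL=dnf
else
    PKG_TOOL=yum
fi
echo "Package tool: $PKG_TOOL"

# "$PKG_TOOL" update 'VRTS*'   # hypothetical glob; actual names vary
```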
In the pre-reboot phase, where you have run the yum update command but have not yet rebooted the node, InfoScale continues to work as the previous version. The features and functionality of the upgraded InfoScale version are not available until the node has rebooted successfully.
The pre-reboot phase may also enforce other restrictions. For example, in InfoScale storage environments, you cannot update the VxVM tunables.
During the pre-reboot phase, if any of the following services are not running before you run the yum update command, make sure that you do not start them until after the node reboots:
veki
vxfs
vxglmservice
vxgmsservice
vxodm
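The state of the services listed above can be recorded with a short loop before you run yum update. This sketch assumes the names are systemd unit names and guards the systemctl call so it degrades gracefully on systems without systemd:

```shell
# Record the state of each listed InfoScale service before `yum update`.
# Any service that reports "stopped" here must stay stopped until the
# node reboots after the upgrade.
for svc in veki vxfs vxglmservice vxgmsservice vxodm; do
    if command -v systemctl >/dev/null 2>&1 \
        && systemctl is-active --quiet "$svc"; then
        state=running
    else
        state=stopped
    fi
    echo "$svc: $state"
done
```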
During the execution of the yum update command, any scheduled Secure File System (SecureFS) jobs and File Replicator (VFR) jobs are temporarily skipped. After the command completes successfully, the jobs resume and run according to the configured schedule.