Veritas Access 7.3 Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Important release information
- System requirements
- Linux requirements
- Operating system RPM installation requirements and operating system patching
- Kernel RPMs that are required to be installed with exact predefined RPM versions
- OL kernel RPMs that are required to be installed with exact predefined RPM versions
- Required operating system RPMs for OL 6.6
- Required operating system RPMs for OL 6.7
- Required operating system RPMs for OL 6.8
- Required operating system RPMs for RHEL 6.6
- Required operating system RPMs for RHEL 6.7
- Required operating system RPMs for RHEL 6.8
- Software requirements for installing Veritas Access in a VMware ESXi environment
- Hardware requirements for installing Veritas Access virtual machines
- Management Server Web browser support
- Supported NetBackup versions
- Supported OpenStack versions
- Supported Oracle versions and host operating systems
- Supported IP version 6 Internet standard protocol
- Network and firewall requirements
- Maximum configuration limits
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installation overview
- Summary of the installation steps
- Before you install
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About NIC bonding and NIC exclusion
- About VLAN Tagging
- Replacing an Ethernet interface card
- Configuring I/O fencing
- About configuring Veritas NetBackup
- About enabling kdump during a Veritas Access configuration
- Reconfiguring the Veritas Access cluster name and network
- Configuring a KMS server on the Veritas Access cluster
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access
- About types of Veritas Access patches
- Downloading the Veritas Access 7.3 release
- Upgrading to the Veritas Access 7.3 release
- Displaying the current version
- Displaying upgrade history of Veritas Access
- Downloading a Veritas Access patch release
- Displaying all Veritas Access releases that are available in the repository
- Installing Veritas Access patches
- Automatically executing your customized script before or after an upgrade
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
About rolling upgrades
This release of Veritas Access supports rolling upgrades from version 7.2.1.1 and later. Rolling upgrades are supported on RHEL 6.6, 6.7, and 6.8. A rolling upgrade minimizes service and application downtime for highly available clusters by limiting the upgrade time to the amount of time that it takes to perform a service group failover. During a rolling upgrade, nodes that run different product versions can coexist in the same cluster.
A rolling upgrade has two main phases: the installer upgrades the kernel RPMs in Phase 1 and the VCS agent-related non-kernel RPMs in Phase 2.
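Before you start a rolling upgrade, it is worth confirming that every node meets the version prerequisites. The following is a minimal sketch using standard Linux commands; the node names (access_01 through access_04) are placeholders for your actual cluster nodes, and VRTS* is the standard prefix for Veritas RPM package names:

    # Confirm the OS release and the installed Veritas (VRTS*) RPM
    # versions on each node; every node must run 7.2.1.1 or later.
    for node in access_01 access_02 access_03 access_04; do
        echo "=== $node ==="
        ssh "$node" "cat /etc/redhat-release; rpm -qa 'VRTS*' | sort"
    done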
The upgrade process divides the cluster into two subclusters: the first subcluster and the second subcluster.
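For example, in a four-node cluster the split might look like the following sketch. This is illustrative only: the installer chooses the actual grouping, and the node names are placeholders.

    # Illustrative split of a four-node cluster into two subclusters.
    nodes=(access_01 access_02 access_03 access_04)
    first_subcluster=("${nodes[@]:0:2}")     # access_01 access_02
    second_subcluster=("${nodes[@]:2}")      # access_03 access_04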
In Phase 1, the upgrade is performed on the second subcluster. The upgrade process stops all services and resources on the nodes of the second subcluster, and all services (including the VIP groups) fail over to the first subcluster. The parallel service groups on the second subcluster are taken offline.
During the failover, the clients that are connected to the VIP groups of the second-subcluster nodes are intermittently interrupted. For clients that do not time out, service resumes after the VIP groups come online on one of the nodes of the first subcluster.
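From the client side, the interruption appears as a short window in which the VIP does not respond. As a rough sketch, a client that retries (rather than timing out) resumes once the VIP group comes online on a first-subcluster node; the VIP address 192.0.2.10 below is a placeholder:

    # Wait for the placeholder VIP to respond again after failover.
    vip=192.0.2.10
    until ping -c 1 -W 2 "$vip" >/dev/null 2>&1; do
        echo "VIP $vip not reachable yet; retrying ..."
        sleep 5
    done
    echo "VIP $vip is back online"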
The installer then upgrades the kernel RPMs on the second subcluster. The nodes of the first subcluster continue to serve the clients.
Once Phase 1 of the rolling upgrade is complete on the second subcluster, Phase 1 is performed on the first subcluster. The applications fail over to the second subcluster: the parallel service groups are brought online on the second subcluster and are taken offline on the first subcluster.
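You can watch the failover progress with the standard VCS commands that ship in /opt/VRTSvcs/bin; which service groups appear depends on your configuration:

    hastatus -sum    # summary of systems and service group states
    hagrp -state     # per-group, per-system ONLINE/OFFLINE states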
After Phase 1 is complete, the nodes run with the new RPMs but with the old protocol version.
During Phase 2 of the rolling upgrade, all remaining RPMs are upgraded on all the nodes of the cluster simultaneously: the VCS and VCS agent packages are upgraded, and the kernel drivers are upgraded to the new protocol version. Applications stay online during Phase 2, although the High Availability Daemon (HAD) stops and restarts.
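After Phase 2 completes, a quick sanity check is to confirm that HAD is running again on every node and that all nodes report the same package versions. The sketch below uses standard VCS commands and the VRTSvcs package name; the node names are placeholders:

    hasys -state    # every node should report RUNNING
    # Confirm that all nodes now run the same VCS RPM version.
    for node in access_01 access_02 access_03 access_04; do
        ssh "$node" "rpm -q VRTSvcs"
    done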