Veritas Access Installation Guide
- Licensing in Veritas Access
- System requirements
- Important release information
- System requirements
- Linux requirements
- Software requirements for installing Veritas Access in a VMware ESXi environment
- Hardware requirements for installing Veritas Access virtual machines
- Management Server Web browser support
- Required NetBackup versions
- Required OpenStack versions
- Required Oracle versions and host operating systems
- Required IP version 6 Internet standard protocol
- Network and firewall requirements
- Maximum configuration limits
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installation overview
- Summary of the installation steps
- Before you install
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Replacing an Ethernet interface card
- Configuring I/O fencing
- About configuring Veritas NetBackup
- About enabling kdump during a Veritas Access configuration
- Reconfiguring the Veritas Access cluster name and network
- Configuring a KMS server on the Veritas Access cluster
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading the operating system and Veritas Access
- Performing a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
RDMA over InfiniBand networks in the Veritas Access clustering environment
Veritas Access uses Low Latency Transport (LLT) for data transfer between applications on the cluster nodes. LLT functions as a high-performance, low-latency replacement for the IP stack and is used for all cluster communications. It load balances internode communication across all available private network links (a maximum of eight), which provides both performance and fault resilience: if a link fails, traffic is redirected to the remaining links. LLT is also responsible for sending and receiving heartbeat traffic over the network links. Running LLT data transfer over an RDMA network boosts the performance of both file system data transfer and I/O transfer between nodes.
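To confirm that LLT sees all of the configured private links and peer nodes, you can run the lltstat utility that ships with the Veritas cluster components; a minimal invocation is shown below (the verbose per-link output varies by release).

```
# Verify LLT peer-node visibility and per-link status on any cluster node.
# -nvv prints verbose status for each node and each configured link.
lltstat -nvv
```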
RDMA-capable network interface cards (NICs) and network switches are required to enable this faster application data transfer between nodes. You also need to configure the operating system and LLT for RDMA.
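As an illustration of the LLT side of that configuration, the sketch below shows what an /etc/llttab file with RDMA links might look like. The node name, cluster ID, interface names, ports, and IP addresses are placeholders; verify the exact directives against the Veritas Access documentation for your release before using them.

```
# Sample /etc/llttab with two RDMA private links (illustrative values).
set-node node_01
set-cluster 100
# Each "link" line defines one RDMA link:
# link <tag> <device> - rdma <port> - <IP address> -
# LLT over RDMA requires an IP address assigned to each RDMA-capable NIC.
link link1 eth1 - rdma 50000 - 192.168.10.1 -
link link2 eth2 - rdma 50001 - 192.168.20.1 -
```

Configuring two or more independent RDMA links, each on its own subnet and switch, preserves LLT's load balancing and fault resilience described above.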