Veritas Access Installation Guide

Product(s): Access (7.4.2.400)
Platform: Linux
  1. Licensing in Veritas Access
    1. About Veritas Access product licensing
    2. Per-TB licensing model
    3. TB-Per-Core licensing model
    4. Per-Core licensing model
    5. Notes and functional enforcements for licensing
  2. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Required operating system RPMs and patches
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Required NetBackup versions
      6. Required OpenStack versions
      7. Required IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. CIFS protocols and firewall ports
    4. Maximum configuration limits
  3. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. About using LLT over the RDMA network for Veritas Access
      1. RDMA over InfiniBand networks in the Veritas Access clustering environment
      2. How LLT supports RDMA for faster interconnections between applications
      3. Configuring LLT over RDMA for Veritas Access
      4. How the Veritas Access installer configures LLT over RDMA
      5. LLT over RDMA sample /etc/llttab
    4. Connecting the network hardware
    5. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    6. About checking the storage configuration
  4. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  5. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the RHEL operating system on the target Veritas Access cluster
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN Tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Configuring a KMS server on the Veritas Access cluster
  6. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  7. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  8. Upgrading the operating system and Veritas Access
    1. Supported upgrade paths for upgrades on RHEL
    2. Upgrading the operating system and Veritas Access
  9. Migrating from scale-out and erasure-coded file systems
    1. Preparing for migration
    2. Migration of data
    3. Migration of file systems which are exported as shares
  10. Migrating LLT over Ethernet to LLT over UDP
    1. Overview of migrating LLT to UDP
    2. Migrating LLT to UDP
  11. Performing a rolling upgrade
    1. About rolling upgrade
    2. Performing a rolling upgrade using the installer
  12. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4.2.400 RPMs
      2. Running uninstall from the Veritas Access 7.4.2.400 disc
  13. Appendix A. Installation reference
    1. Installation script options
  14. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless secure shell (ssh)
    2. Setting up ssh and rsh connections using the pwdutil.pl utility
  15. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access
  16. Index

Migrating LLT to UDP

This section describes how to migrate LLT from Ethernet to UDP. For reference, this guide assumes that ens161 is the first private NIC, which already has an IP address assigned to it, and that ens256 is the second private NIC, which is initially unconfigured.

Note: The actual NIC names may vary depending on the naming scheme.
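
To list the NIC names on a node, one option is the brief mode of the ip command:

# ip -br link show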

Step 1: Back up files (to be done on each node separately)

Log in to the node with root privileges and back up the following files as shown below:

# cp /etc/llttab /etc/llttab.llteth
# cp /opt/VRTSnas/conf/net_priv_dev.conf /opt/VRTSnas/conf/net_priv_dev.conf.llteth
# cp /opt/VRTSnas/conf/net_priv_nic.conf /opt/VRTSnas/conf/net_priv_nic.conf.llteth
# cp /opt/VRTSnas/conf/net_priv_ip_list.conf /opt/VRTSnas/conf/net_priv_ip_list.conf.llteth
# cp /opt/VRTSnas/nodeconf/nasinstall.conf /opt/VRTSnas/nodeconf/nasinstall.conf.llteth
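
A quick way to confirm that all five backup copies exist (paths as above):

# ls -l /etc/llttab.llteth /opt/VRTSnas/conf/*.llteth /opt/VRTSnas/nodeconf/nasinstall.conf.llteth
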
Step 2: Set iptables (to be done on each node separately)
  1. List the first two private NICs from the /opt/VRTSnas/conf/net_priv_nic.conf file and add the NICs to the /opt/VRTSnas/conf/net_priv_dev.conf file.

    After adding the first two private NICs, the /opt/VRTSnas/conf/net_priv_dev.conf file includes both the private NICs as shown below:

    # cat /opt/VRTSnas/conf/net_priv_dev.conf
    ens161 ens256
    #
    

    The NIC that was already present in /opt/VRTSnas/conf/net_priv_dev.conf is referred to as the "first NIC", and the newly added NIC is referred to as the "second NIC".

  2. Identify six ports to be used for communication over UDP.

    For example: 50000, 50001, 50002, 50003, 50004, and 50005. Append them, along with PORT_LLTLINK1 and PORT_LLTLINK2, to /opt/VRTSnas/nodeconf/nasinstall.conf as shown below:

    PORT_LLTLINK1="ens161"
    PORT_LLTLINK2="ens256"
    LLT_UDPPORT_LIST="50000,50001,50002,50003,50004,50005"
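
    To verify the appended entries, a quick check using the same file path:

    # grep -E '^(PORT_LLTLINK1|PORT_LLTLINK2|LLT_UDPPORT_LIST)=' /opt/VRTSnas/nodeconf/nasinstall.conf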
    
  3. Create a new systemd service /etc/systemd/system/nas_pre_vx.service as shown below:
    # cat /etc/systemd/system/nas_pre_vx.service
    [Unit]
    Description=This service will start before veki, llt
    After=network.target network-online.target
    Requires=network-online.target network.target
    Before=vxvm-boot.service veki.service llt.service
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    TimeoutStartSec=300s
    ExecStart=/opt/VRTSnas/scripts/misc/nas_pre_vx.sh start
    ExecStop=/opt/VRTSnas/scripts/misc/nas_pre_vx.sh stop
    [Install]
    WantedBy=multi-user.target
    #
    
  4. Create the script /opt/VRTSnas/scripts/misc/nas_pre_vx.sh, which the systemd service runs, exactly as shown below:
    # cat /opt/VRTSnas/scripts/misc/nas_pre_vx.sh
    #!/bin/bash

    # Insert (start) or delete (stop) the iptables rules that allow
    # LLT-over-UDP traffic on the private NICs.
    op=$1
    if [[ $op == "start" ]]; then
        key="I"
    elif [[ $op == "stop" ]]; then
        key="D"
    elif [[ $op == "restart" ]]; then
        /opt/VRTSnas/scripts/misc/nas_pre_vx.sh stop
        /opt/VRTSnas/scripts/misc/nas_pre_vx.sh start
        exit 0
    fi

    # The private NICs and the UDP ports reserved for LLT.
    nics=`cat /opt/VRTSnas/conf/net_priv_dev.conf`
    ports=`grep LLT_UDPPORT_LIST /opt/VRTSnas/nodeconf/nasinstall.conf | cut -d '=' -f2 | tr -s ',' ' ' | tr -d '"'`

    for nic in $nics
    do
      for port in $ports
      do
        iptables -w -${key} INPUT -i $nic -p udp -m udp --dport $port -j ACCEPT 2>/dev/null
        iptables -w -${key} OUTPUT -p udp -m udp --sport $port -j ACCEPT 2>/dev/null
        ip6tables -w -${key} INPUT -i $nic -p udp -m udp --dport $port -j ACCEPT 2>/dev/null
        ip6tables -w -${key} OUTPUT -p udp -m udp --sport $port -j ACCEPT 2>/dev/null
      done
    done

    exit 0
    #
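
    Optionally, run a quick syntax check on the script before wiring it into systemd:

    # bash -n /opt/VRTSnas/scripts/misc/nas_pre_vx.sh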
    
  5. Add the line /opt/VRTSnas/scripts/misc/nas_pre_vx.sh start near the end of the script /opt/VRTSnas/scripts/net/net_iptables.sh, just before the closing esac, as shown below:
          #
          # unknown option
          #
          
          refresh
          /opt/VRTSnas/scripts/misc/nas_pre_vx.sh start
    esac
    set +x
    
    exit 0
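
    To confirm that the line is in place:

    # grep -n 'nas_pre_vx.sh start' /opt/VRTSnas/scripts/net/net_iptables.sh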
    
  6. Mark the script as executable:
    # chmod +x /opt/VRTSnas/scripts/misc/nas_pre_vx.sh
  7. Start and enable this newly created service:
    # systemctl start nas_pre_vx.service
    # systemctl enable nas_pre_vx.service
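
    If systemd does not find the new unit, reload the manager configuration first. Once the service is active, you can spot-check one of the rules (port 50000 from the example list):

    # systemctl daemon-reload
    # iptables -S INPUT | grep 50000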
    
Step 3: Configure the second private NIC
  1. From any one of the nodes, find the NLM IP from /opt/VRTSnas/nodeconf/nasinstall.conf. The entry should look as shown below:
    NLMMASTERIP="172.16.0.2"
    
  2. Derive a new, separate private subnet with netmask 255.255.255.0 from the NLM IP by choosing the next 255.255.255.0-based subnet. For the IP 172.16.0.2 above, the new subnet is 172.16.1.0/255.255.255.0, whose first IP is 172.16.1.1. A quick way to compute it is shown below.
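
    A minimal sketch that derives the next /24 subnet from the NLM IP (it simply increments the third octet, which assumes that octet is below 255):

    # grep NLMMASTERIP /opt/VRTSnas/nodeconf/nasinstall.conf | tr -d '"' | cut -d '=' -f2 | awk -F. '{printf "%s.%s.%d.0/255.255.255.0\n", $1, $2, $3+1}'
    172.16.1.0/255.255.255.0
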
  3. Create 32 entries using the first 32 IPs from this newly derived subnet (172.16.1.1 to 172.16.1.32) and add all 32 lines, each with the netmask, to /opt/VRTSnas/conf/net_priv_ip_list.conf on each node. These entries are exactly the same on each node; a loop that generates them follows the listings below.

    Before the changes, the file is as shown below:

    172.16.0.3 255.255.255.0 node1 ens161
    172.16.0.4 255.255.255.0 node2 ens161
    172.16.0.5 255.255.255.0
    172.16.0.6 255.255.255.0
    .
    .
    172.16.0.34 255.255.255.0
    

    After the changes, the file is as shown below:

    172.16.0.3 255.255.255.0 node1 ens161
    172.16.0.4 255.255.255.0 node2 ens161
    172.16.0.5 255.255.255.0
    172.16.0.6 255.255.255.0
    .
    .
    172.16.0.34 255.255.255.0
    172.16.1.1 255.255.255.0
    172.16.1.2 255.255.255.0
    172.16.1.3 255.255.255.0
    172.16.1.4 255.255.255.0
    .
    .
    172.16.1.31 255.255.255.0
    172.16.1.32 255.255.255.0
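
    Rather than typing the 32 entries by hand, a small loop can generate and append them (a sketch, using the subnet derived above):

    # for i in $(seq 1 32); do echo "172.16.1.$i 255.255.255.0"; done >> /opt/VRTSnas/conf/net_priv_ip_list.conf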
    
  4. Select each IP address from the above list of the newly derived subnet and configure the second NIC on each node by creating or modifying its /etc/sysconfig/network-scripts/ifcfg-<second NIC> file. After updating this file, it should look as shown below:

    On node 1:

    DEVICE=<second NIC>
    BOOTPROTO=none
    TYPE=Ethernet
    NM_CONTROLLED=no
    HWADDR=<MAC address of the second NIC on node1>
    ONBOOT=yes
    IPADDR=172.16.1.1
    NETMASK=255.255.255.0
    

    On node 2:

    DEVICE=<second NIC>
    BOOTPROTO=none
    TYPE=Ethernet
    NM_CONTROLLED=no
    HWADDR=<MAC address of the second NIC on node2>
    ONBOOT=yes
    IPADDR=172.16.1.2
    NETMASK=255.255.255.0
    
  5. Bring the newly configured second NIC online by running the following commands:
    # ifdown <second NIC>
    # ifup <second NIC>

    For example:

    # ifdown ens256
    # ifup ens256 
  6. Check that the IP address is assigned correctly to the interface by using the following command:
    # ip addr show <second NIC>

    For example, on node 1:

    # ip addr show ens256
    5: ens256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:50:56:9c:78:bc brd ff:ff:ff:ff:ff:ff
        inet 172.16.1.1/24 brd 172.16.1.255 scope global ens256
           valid_lft forever preferred_lft forever
        inet6 fe80::250:56ff:fe9c:78bc/64 scope link
           valid_lft forever preferred_lft forever
    #
    

    On node 2:

    [root@node2 ~]# ip addr show ens256
    5: ens256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:50:56:9c:61:79 brd ff:ff:ff:ff:ff:ff
        inet 172.16.1.2/24 brd 172.16.1.255 scope global ens256
           valid_lft forever preferred_lft forever
        inet6 fe80::250:56ff:fe9c:6179/64 scope link
           valid_lft forever preferred_lft forever
    [root@node2 ~]#
    
  7. Update /opt/VRTSnas/conf/net_priv_ip_list.conf with the correct details about the IP address, node name, and NIC name, as shown below:

    Before the change, the file is as shown below:

    172.16.0.3 255.255.255.0 node1 ens161
    172.16.0.4 255.255.255.0 node2 ens161
    172.16.0.5 255.255.255.0
    .
    .
    172.16.0.34 255.255.255.0
    172.16.1.1 255.255.255.0
    172.16.1.2 255.255.255.0
    172.16.1.3 255.255.255.0
    .
    .
    172.16.1.32 255.255.255.0
    
    

    After the changes, the file is as shown below:

    172.16.0.3 255.255.255.0 node1 ens161
    172.16.0.4 255.255.255.0 node2 ens161
    172.16.0.5 255.255.255.0
    .
    .
    172.16.0.34 255.255.255.0
    172.16.1.1 255.255.255.0 node1 ens256
    172.16.1.2 255.255.255.0 node2 ens256
    172.16.1.3 255.255.255.0
    172.16.1.4 255.255.255.0
    .
    .
    172.16.1.31 255.255.255.0
    172.16.1.32 255.255.255.0
Step 4: Set up the LLT configuration file
  1. Remove the lines containing "eth" from the /etc/llttab file on each node; a one-line sed for this follows the examples below.

    Before removing the lines, the file looks similar to the following:

    # cat /etc/llttab
    set-node node1
    set-cluster 14358
    link ens161 eth-00:50:56:9c:6e:c6 - ether - -
    link ens256 eth-00:50:56:9c:78:bc - ether - -
    set-flow highwater:10000
    set-flow lowwater:8000
    #
    

    After removing the lines:

    # cat /etc/llttab
    set-node node1
    set-cluster 14358
    set-flow highwater:10000
    set-flow lowwater:8000
    #
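
    A one-liner that removes these link lines (a sketch; it matches the eth-<MAC> tag on each link line and keeps a backup copy of the file):

    # sed -i.bak '/ eth-/d' /etc/llttab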
    
  2. Copy the following script to /tmp/llt.sh on each node:
    # cat /tmp/llt.sh
    #!/bin/bash
    # Generate the LLT-over-UDP configuration lines for this node:
    # "link" lines for the local private NICs and "set-addr" lines
    # for the private IPs of the peer nodes.
    declare -A nics
    set_addr_lines=""
    link_lines=""
    i=0

    # Map each private NIC to one of the reserved UDP ports.
    udp_ports_str=$(grep LLT_UDPPORT_LIST /opt/VRTSnas/nodeconf/nasinstall.conf | cut -d '=' -f2 | tr -d '"')
    IFS=',' read -ra udp_ports_arr <<< "$udp_ports_str"
    for nic in $(cat /opt/VRTSnas/conf/net_priv_nic.conf); do
        nics[$nic]=${udp_ports_arr[i]}
        ((i++))
    done

    # For every node in /etc/llthosts, look up its private IPs.
    # Entries for the local node become "link" lines; entries for
    # the peer nodes become "set-addr" lines.
    while read -r nodeid node; do
        while read -r ip netmask name nic; do
            if [[ "$node" = "$(hostname)" ]]; then
                link_lines=$link_lines"link $nic udp - udp ${nics[$nic]} - $ip -"$'\n'
            else
                set_addr_lines=$set_addr_lines"set-addr $nodeid $nic $ip"$'\n'
            fi
        done < <(grep "$node" /opt/VRTSnas/conf/net_priv_ip_list.conf)
    done < /etc/llthosts

    echo "$link_lines" | awk NF
    echo "$set_addr_lines" | awk NF
    #
  3. On each node, mark this script as executable and run it. The script produces output as shown below; the output is specific to the node on which the script is executed.

    On node 1:

    # chmod +x /tmp/llt.sh
    # /tmp/llt.sh
    link ens161 udp - udp 50000 - 172.16.0.3 -
    link ens256 udp - udp 50001 - 172.16.1.1 -
    set-addr 1 ens161 172.16.0.4
    set-addr 1 ens256 172.16.1.2
    #
    

    On node 2:

    [root@node2 ~]# chmod +x /tmp/llt.sh
    [root@node2 ~]# /tmp/llt.sh
    link ens161 udp - udp 50000 - 172.16.0.4 -
    link ens256 udp - udp 50001 - 172.16.1.2 -
    set-addr 0 ens161 172.16.0.3
    set-addr 0 ens256 172.16.1.1
    [root@node2 ~]#
    
  4. On each node, copy the output that the script produced on that node into /etc/llttab, immediately after the "set-cluster" line. After copying the output, the /etc/llttab file should look as shown below.

    On node 1:

    # cat /etc/llttab
    set-node node1
    set-cluster 14358
    link ens161 udp - udp 50000 - 172.16.0.3 -
    link ens256 udp - udp 50001 - 172.16.1.1 -
    set-addr 1 ens161 172.16.0.4
    set-addr 1 ens256 172.16.1.2
    set-flow highwater:10000
    set-flow lowwater:8000
    #
    

    On node 2:

    # cat /etc/llttab
    set-node node2
    set-cluster 14358
    link ens161 udp - udp 50000 - 172.16.0.4 -
    link ens256 udp - udp 50001 - 172.16.1.2 -
    set-addr 0 ens161 172.16.0.3
    set-addr 0 ens256 172.16.1.1
    set-flow highwater:10000
    set-flow lowwater:8000
    #
    
  5. On each node, append the tunables below to /etc/llttab. These lines are common to all nodes; they configure the UDP sockets and threads that LLT uses and disable broadcast heartbeats and ARP discovery, which LLT does not need once every peer address is listed explicitly with set-addr:
    set-udpsockets 4
    set-udpthreads 2
    set-bcasthb 0
    set-arp 0
    
Step 5: Reboot the cluster
  1. Reboot all the nodes, one at a time, using the reboot command:
    # reboot
    [root@node2 ~]# reboot
    

    Wait for the nodes to come up. After the nodes are up and the cluster is formed, validate as shown below.

  2. Verify that the status of all the NICs is UP:
    # lltstat -nvv active
    LLT node information:
        Node                 State    Link    Status   Address
      * 0 node1              OPEN
                                      ens161  UP       172.16.0.3
                                      ens256  UP       172.16.1.1
        1 node2              OPEN
                                      ens161  UP       172.16.0.4
                                      ens256  UP       172.16.1.2
    #
  3. Verify that the cluster is not in jeopardy at this stage. If the cluster is not in jeopardy, the output looks as shown below:
    # gabconfig -a
    GAB Port Memberships
    ===============================================================
    Port a gen 184402 membership 01
    Port b gen 184401 membership 01
    Port f gen 184410 membership 01
    Port h gen 184405 membership 01
    Port m gen 184407 membership 01
    Port u gen 18440e membership 01
    Port v gen 184409 membership 01
    Port w gen 18440b membership 01
    Port y gen 184408 membership 01
    #
    

    If the cluster is in jeopardy (which typically means that only one LLT link is up), the output looks as shown below. In that case, recheck the /etc/llttab entries and the link status with lltstat -nvv active before proceeding:

    # gabconfig -a
    GAB Port Memberships
    ===============================================================
    Port a gen 44c401 membership 01
    Port a gen 44c401 jeopardy ;1
    Port b gen 44c403 membership 01
    Port b gen 44c403 jeopardy ;1
    Port f gen 44c410 membership 01
    Port f gen 44c410 jeopardy ;1
    Port h gen 44c406 membership 01
    Port h gen 44c406 jeopardy ;1
    Port m gen 44c408 membership 01
    Port m gen 44c408 jeopardy ;1
    Port u gen 44c40e membership 01
    Port u gen 44c40e jeopardy ;1
    Port v gen 44c409 membership 01
    Port v gen 44c409 jeopardy ;1
    Port w gen 44c40c membership 01
    Port w gen 44c40c jeopardy ;1
    Port y gen 44c40a membership 01
    Port y gen 44c40a jeopardy ;1
    #