Veritas Access Installation Guide

Last Published:
Product(s): Access (7.4.2.400)
Platform: Linux
  1. Licensing in Veritas Access
    1. About Veritas Access product licensing
    2. Per-TB licensing model
    3. TB-Per-Core licensing model
    4. Per-Core licensing model
    5. Notes and functional enforcements for licensing
  2. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Required operating system RPMs and patches
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Required NetBackup versions
      6. Required OpenStack versions
      7. Required IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. CIFS protocols and firewall ports
    4. Maximum configuration limits
  3. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. About using LLT over the RDMA network for Veritas Access
      1. RDMA over InfiniBand networks in the Veritas Access clustering environment
      2. How LLT supports RDMA for faster interconnections between applications
      3. Configuring LLT over RDMA for Veritas Access
      4. How the Veritas Access installer configures LLT over RDMA
      5. LLT over RDMA sample /etc/llttab
    4. Connecting the network hardware
    5. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    6. About checking the storage configuration
  4. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  5. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the RHEL operating system on the target Veritas Access cluster
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Configuring a KMS server on the Veritas Access cluster
  6. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  7. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  8. Upgrading the operating system and Veritas Access
    1. Supported upgrade paths for upgrades on RHEL
    2. Upgrading the operating system and Veritas Access
  9. Migrating from scale-out and erasure-coded file systems
    1. Preparing for migration
    2. Migration of data
    3. Migration of file systems which are exported as shares
  10. Migrating LLT over Ethernet to LLT over UDP
    1. Overview of migrating LLT to UDP
    2. Migrating LLT to UDP
  11. Performing a rolling upgrade
    1. About rolling upgrade
    2. Performing a rolling upgrade using the installer
  12. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4.2.400 RPMs
      2. Running uninstall from the Veritas Access 7.4.2.400 disc
  13. Appendix A. Installation reference
    1. Installation script options
  14. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless secure shell (ssh)
    2. Setting up ssh and rsh connections using the pwdutil.pl utility
  15. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access
  16. Index

Deleting a node from the cluster

You can delete a node from the cluster. Use the node name exactly as it is displayed by the Cluster> show command.

If the deleted node was in the RUNNING state before deletion, then after you reboot it, the node is assigned its original IP address, which can be used to add the node back to the cluster. The original IP address is the IP address that the node used before it was added to the cluster.

If your cluster has an FSS pool configured, the delete node operation may result in permanent data loss for file systems that have simple or striped layouts and for which there are no backup copies. For such file systems, back up or evacuate the data before you delete the node.
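The guide does not prescribe a tool for backing up or evacuating the data. As a minimal sketch only, one generic approach is a tar pipe run from any host that mounts the affected file system; the source and destination below are hypothetical placeholders (temporary directories here, so the sketch is self-contained):

```shell
#!/bin/sh
# Hypothetical sketch: copy a file system's contents to a safe location
# before deleting the node. SRC stands in for the mounted Veritas Access
# file system (for example, an NFS mount of the share); DEST stands in
# for a backup target with enough free space.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "example data" > "$SRC/file1"

# Copy everything, preserving permissions and directory structure.
tar -C "$SRC" -cf - . | tar -C "$DEST" -xpf -
```

Verify the copy (for example, by comparing file counts or checksums) before you proceed with the node deletion.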

If your cluster has an FSS pool configured, you cannot use the installer to delete a node if the deletion would leave only a single node in the node group of the FSS pool.

Deleting a node from a two-node cluster that has writeback caching enabled changes the caching mode to read-only, because writeback caching is supported only for two nodes.

The IP addresses that the node used before it was deleted from the cluster remain accessible until you restart the node.

After the node is deleted from the cluster, the old IP configuration is restored on the node when you reboot it. To avoid address conflicts, make sure that you either remove the deleted node's IP addresses from Veritas Access or reconfigure them on the node.

To delete a node from the cluster

  1. To show the current state of all nodes in the cluster, enter the following:
    Cluster> show
  2. To delete a node from a cluster, enter the following:
    Cluster> del nodename

    where nodename is the node name that appeared in the listing from the Cluster> show command. You cannot specify a node by its IP address.

    Note:

    This command is not supported in a single-node cluster.

    For example:

    Cluster> del snas-01
  3. After a node is deleted from the cluster, the physical IP addresses that it used are marked as unused and become available if you add new nodes. The virtual IP addresses that the deleted node used are not removed; deleting a node moves them to the remaining nodes in the cluster.

    For example:

    Network> ip addr show
    IP            Netmask/Prefix  Device     Node            Type     Status
    --            --------------  ------     ----            ----     ------
    192.168.30.10 255.255.252.0   pubeth0    source-30a-01   Physical
    192.168.30.11 255.255.252.0   pubeth1    source-30a-01   Physical
    192.168.30.12 255.255.252.0              ( unused )      Physical
    192.168.30.13 255.255.252.0              ( unused )      Physical
    192.168.30.14 255.255.252.0   pubeth0    source-30a-01   Virtual  ONLINE (Con IP)
    192.168.30.15 255.255.252.0   pubeth0    source-30a-01   Virtual  ONLINE
    192.168.30.16 255.255.252.0   pubeth0    source-30a-01   Virtual  ONLINE
    192.168.30.17 255.255.252.0   pubeth1    source-30a-01   Virtual  ONLINE
    192.168.30.18 255.255.252.0   pubeth1    source-30a-01   Virtual  ONLINE
    

    If the physical or virtual IP addresses are no longer needed, you can remove them with the following command:

    Network> ip addr del ipaddr

    For example:

    Network> ip addr del 192.168.30.18
    ACCESS ip addr SUCCESS V-288-1031 ip addr del successful.
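The steps above can also be driven from an administration host. The following is a minimal dry-run sketch under assumptions the guide does not state: that the management console IP is reachable over SSH as the master user, and that the CLISH accepts one-line commands such as cluster del. The console address, user, and node name are placeholders; the script only prints the commands it would run unless you clear DRY_RUN.

```shell
#!/bin/sh
# Dry-run sketch of the node-deletion procedure (placeholder values).
CONSOLE="master@192.168.30.14"   # console (Con IP) from 'Network> ip addr show'
NODE="snas-01"                   # node name exactly as 'Cluster> show' lists it
DRY_RUN=1                        # set to empty to actually run the commands

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "ssh $CONSOLE \"$*\""
    else
        ssh "$CONSOLE" "$@"
    fi
}

run "cluster show"            # 1. confirm the node name and state
run "cluster del $NODE"       # 2. delete the node (not supported on a single-node cluster)
run "network ip addr show"    # 3. review physical IPs now marked as unused
```

After reviewing the output of the last command, any unneeded addresses can be removed with network ip addr del, as shown in step 3.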

Note:

If NIC bonding is configured on the cluster, you also need to delete the switch configuration that was created for the deleted node.