Veritas Access Installation Guide

Last Published:
Product(s): Access (7.4.2.400)
Platform: Linux
  1. Licensing in Veritas Access
    1. About Veritas Access product licensing
    2. Per-TB licensing model
    3. TB-Per-Core licensing model
    4. Per-Core licensing model
    5. Notes and functional enforcements for licensing
  2. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Required operating system RPMs and patches
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Required NetBackup versions
      6. Required OpenStack versions
      7. Required IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. CIFS protocols and firewall ports
    4. Maximum configuration limits
  3. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. About using LLT over the RDMA network for Veritas Access
      1. RDMA over InfiniBand networks in the Veritas Access clustering environment
      2. How LLT supports RDMA for faster interconnections between applications
      3. Configuring LLT over RDMA for Veritas Access
      4. How the Veritas Access installer configures LLT over RDMA
      5. LLT over RDMA sample /etc/llttab
    4. Connecting the network hardware
    5. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    6. About checking the storage configuration
  4. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  5. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the RHEL operating system on the target Veritas Access cluster
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Configuring a KMS server on the Veritas Access cluster
  6. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  7. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  8. Upgrading the operating system and Veritas Access
    1. Supported upgrade paths for upgrades on RHEL
    2. Upgrading the operating system and Veritas Access
  9. Migrating from scale-out and erasure-coded file systems
    1. Preparing for migration
    2. Migration of data
    3. Migration of file systems which are exported as shares
  10. Migrating LLT over Ethernet to LLT over UDP
    1. Overview of migrating LLT to UDP
    2. Migrating LLT to UDP
  11. Performing a rolling upgrade
    1. About rolling upgrade
    2. Performing a rolling upgrade using the installer
  12. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4.2.400 RPMs
      2. Running uninstall from the Veritas Access 7.4.2.400 disc
  13. Appendix A. Installation reference
    1. Installation script options
  14. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless secure shell (ssh)
    2. Setting up ssh and rsh connections using the pwdutil.pl utility
  15. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access
  16. Index

Migration of file systems which are exported as shares

Perform the following steps to migrate file systems which are exported as shares.

  • Check for the presence of shares for the given scale-out file system.

    storage> fs list
  • Save the share settings (see the example after this list).

  • Stop and delete the shares that are present on the scale-out file system.

  • After the migration is complete, recreate the shares with the same settings on the destination CFS.
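
One way to record the source configuration before you delete the shares is to capture the output of the relevant show commands and save it outside the cluster. This is only a sketch that covers the NFS, CIFS, and S3 cases; the commands are the same ones that the procedures below use:

    storage> fs list
    nfs> share show
    cifs> share show
    cifs> homedir show
    objectaccess> bucket show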

Migrating NFS shares

To migrate NFS shares

  1. Save the settings of the existing shares and use the same settings to recreate the shares on the destination file system.

    For NFS, a scale-out file system uses NFS Ganesha (GNFS). You have to switch to the kernel NFS (KNFS) server for the destination CFS.

    Use the following command to find share information such as the NFS options, the export directory, and the host name.

    nfs> share show
  2. Unexport the share using the following command:
    nfs> share delete <export_dir>
  3. Stop the NFS server using the following command:
    nfs> server stop
  4. Recreate the NFS shares on the destination CFS. Start the NFS server in KNFS mode: check the server status, and switch to KNFS if the server is running in GNFS mode.
    nfs> server status
    nfs> server switch
  5. Start the server.
    nfs> server start
  6. Add the share to the destination CFS using the same settings as in the source scale-out file system.
    nfs> share add <nfsoptions> <export_dir> [client]

    For example:

    nfs> share add rw,async /vx/scaleoutfs1
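
The steps above can be condensed into a minimal sketch. The source export /vx/scaleoutfs1 with the rw,async options is taken from the earlier example; the destination path /vx/cfs1 is a placeholder for the mount point of your destination CFS, so substitute the values that nfs> share show reports for your cluster:

    nfs> share show
    nfs> share delete /vx/scaleoutfs1
    nfs> server stop
    nfs> server status
    nfs> server switch
    nfs> server start
    nfs> share add rw,async /vx/cfs1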
Migrating CIFS shares

To migrate CIFS shares

  1. Save the existing configuration.
    cifs> show
    Name                    Value
    ----                    -----
    netbios name            vmclus1
    ntlm auth               yes
    allow trusted domains   no
    homedirfs
    aio size                0
    idmap backend           rid:10000-1000000
    workgroup               WORKGROUP
    security                user
    Domain
    Domain user
    Domain Controller
    Clustering Mode         normal
    Data Migration          no
  2. Make a note of the share name.
    cifs> share show
    ShareName    File System  Share Options
    ===========  ===========  ==============================================
    mycifsshare  lfs2         owner=root,group=root,fs_mode=1777,rw,full_acl
  3. Make a note of the homedir, if any.
    cifs> homedir show
  4. Get the list of local users.
    cifs> local user show
    List of Users
    -------------
    admin
    user1
  5. Get the local group.
    cifs> local group show
    List of groups
    -------------
    nogroup
    selftest
    mygrp
  6. Create a new CIFS share on the destination CFS. Check the status of the CIFS server.
    cifs> server status
  7. Start the CIFS server.
    cifs> server start
  8. Add the file system to CIFS by specifying the name of the CFS, the share name, and the CIFS options (modes).
    cifs> share add file_system sharename [@virtual_ip] [cifsoptions]

    For example:

    cifs> share add mycifs myshare ro
    cifs> share show
  9. Allow the user to access the share.
    cifs> share allow sharename @group1[,@group2,user1,user2,...]

    For example:

    cifs> share allow myshare user1
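
As with NFS, the CIFS sequence can be summarized in a short sketch. The destination CFS mycifs, the share name myshare, and the user user1 are placeholders taken from the examples above; reuse the share options that cifs> share show reported for the source share (owner=root,group=root,fs_mode=1777,rw,full_acl in the step 2 example):

    cifs> show
    cifs> share show
    cifs> local user show
    cifs> local group show
    cifs> server status
    cifs> server start
    cifs> share add mycifs myshare owner=root,group=root,fs_mode=1777,rw,full_acl
    cifs> share allow myshare user1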
Migrating S3 shares

To migrate the existing buckets on the scale-out file system

  1. Get the name of the file system on which the bucket was created.
    objectaccess> bucket show
    Bucket Name  FileSystem      Pool(s)  Owner
    ===========  ==============  =======  =====
    cloud1       S3fs1611925982  mypool   root
  2. Create the new CFS.
  3. Copy the data of the old file system on which the bucket was created to the new file system using the migration_tool.py script.
  4. When you migrate the S3 bucket from a scale-out file system to a CFS, you cannot map the existing bucket to the new file system because the bucket already exists in the S3 database. Run the unconfig_s3bucket.py script to remove the existing scale-out file system entry from the S3 database so that the bucket can be mapped to the new CFS directory path.
    ./unconfig_s3bucket.py fs_name

    where fs_name is the name of the file system to be removed from the S3 bucket mapping.

  5. After the entry is removed from the S3 database, use the objectaccess map command to map the new CFS directory.
    objectaccess> map fs_path user_name
  6. Delete the old scale-out file system bucket, if required. This operation does not delete the source file system.
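
For example, for the bucket cloud1 shown above, a minimal sketch of the remapping might look like the following. The path /vx/mycfs/cloud1 is a hypothetical directory on the new CFS that holds the migrated data, and root is the bucket owner reported by objectaccess> bucket show:

    ./unconfig_s3bucket.py S3fs1611925982
    objectaccess> map /vx/mycfs/cloud1 root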

To change the S3 configuration for future buckets if the fs_type was set to largefs

  1. Check the current settings of the S3 server.
    objectaccess> show
  2. If the fs_type is set to largefs, set the fs_type to the desired layout.
    objectaccess> set fs_type
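
For example, to move away from largefs you might set a simple layout. The simple layout is used here only as an illustration and assumes it is one of the layouts that your release supports; check the command help for the available values:

    objectaccess> set fs_type simple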

To recreate the bucket

  1. Set the default pool.
    objectaccess> set pools pool_name
  2. Enable the server.
    objectaccess> server enable
  3. Start the server.
    objectaccess> server start
  4. Set the file system size, as required.
    objectaccess> set fs_size size
  5. Set the file system type and layout for the CFS.
    objectaccess> set fs_type layout
  6. Create keys for user authentication and save the access key and secret key.
    /opt/VRTSnas/scripts/utils/objectaccess/objectaccess_client.py --create_key --server ADMIN_URL --username root --password P@ssw0rd --insecure

    The ADMIN_URL is admin.<cluster_name>:port, where the port is 8144. This URL should point to the Access Appliance management console IP address.

  7. Map the bucket to the existing file system.
    objectaccess> map fs_path user_name
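
Putting the steps together, a minimal sketch of recreating the bucket might look like the following. The pool mypool and the owner root come from the earlier bucket show output; the cluster name vmclus1 (which makes the ADMIN_URL admin.vmclus1:8144), the 2G size, the simple layout, and the path /vx/mycfs/cloud1 are illustrative assumptions, so substitute the values for your environment:

    objectaccess> set pools mypool
    objectaccess> server enable
    objectaccess> server start
    objectaccess> set fs_size 2G
    objectaccess> set fs_type simple
    /opt/VRTSnas/scripts/utils/objectaccess/objectaccess_client.py --create_key --server admin.vmclus1:8144 --username root --password P@ssw0rd --insecure
    objectaccess> map /vx/mycfs/cloud1 root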