Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- About configuring the Veritas Access network
- About bonding Ethernet interfaces
- Bonding Ethernet interfaces
- Configuring DNS settings
- About Ethernet interfaces
- Displaying current Ethernet interfaces and states
- Configuring IP addresses
- Configuring Veritas Access to use jumbo frames
- Configuring VLAN interfaces
- Configuring NIC devices
- Swapping network interfaces
- Excluding PCI IDs from the cluster
- About configuring routing tables
- Configuring routing tables
- Changing the firewall settings
- IP load balancing
- Configuring Veritas Access in IPv4 and IPv6 mixed mode
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- About storage provisioning and management
- About configuring disks
- About configuring storage pools
- Configuring storage pools
- About quotas for usage
- Enabling, disabling, and displaying the status of file system quotas
- Setting and displaying file system quotas
- Setting user quotas for users of specified groups
- About quotas for CIFS home directories
- About Flexible Storage Sharing
- Limitations of Flexible Storage Sharing
- Workflow for configuring and managing storage using the Veritas Access CLI
- Displaying information for all disk devices associated with the nodes in a cluster
- Displaying WWN information
- Importing new LUNs forcefully for new or existing pools
- Initiating host discovery of LUNs
- Increasing the storage capacity of a LUN
- Formatting or reinitializing a disk
- Removing a disk
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- About using the NFS server with Veritas Access
- Using the kernel-based NFS server
- Accessing the NFS server
- Displaying and resetting NFS statistics
- Configuring Veritas Access for ID mapping for NFS version 4
- Configuring the NFS client for ID mapping for NFS version 4
- About authenticating NFS clients
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About configuring Veritas Access for CIFS
- About configuring CIFS for standalone mode
- Configuring CIFS server status for standalone mode
- Changing security settings
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- Setting NTLM
- About setting trusted domains
- Specifying trusted domains that are allowed access to the CIFS server
- Allowing trusted domains access to CIFS when setting an IDMAP backend to rid
- Allowing trusted domains access to CIFS when setting an IDMAP backend to ldap
- Allowing trusted domains access to CIFS when setting an IDMAP backend to hash
- Allowing trusted domains access to CIFS when setting an IDMAP backend to ad
- About configuring Windows Active Directory as an IDMAP backend for CIFS
- Configuring the Active Directory schema with CIFS-schema extensions
- Configuring the LDAP client for authentication using the CLI
- Configuring the CIFS server with the LDAP backend
- Setting Active Directory trusted domains
- About storing account information
- Storing user and group accounts
- Reconfiguring the CIFS service
- About mapping user names for CIFS/NFS sharing
- About the mapuser commands
- Adding, removing, or displaying the mapping between CIFS and NFS users
- Automatically mapping UNIX users from LDAP to Windows users
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- Setting the CIFS aio_fork option
- About managing local users and groups
- Enabling CIFS data migration
- Configuring an FTP server
- About FTP
- Creating the FTP home directory
- Using the FTP server commands
- About FTP server options
- Customizing the FTP server options
- Administering the FTP sessions
- Uploading the FTP logs
- Administering the FTP local user accounts
- About the settings for the FTP local user accounts
- Configuring settings for the FTP local user accounts
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- About creating and maintaining file systems
- About encryption at rest
- Considerations for creating a file system
- Best practices for creating file systems
- Choosing a file system layout type
- Determining the initial extent size for a file system
- About striping file systems
- About creating a tuned file system for a specific workload
- About FastResync
- About fsck operation
- Setting retention in files
- Setting WORM over NFS
- Manually setting WORM-retention on a file over CIFS
- About managing application I/O workloads using maximum IOPS settings
- Creating a file system
- Bringing the file system online or offline
- Listing all file systems and associated information
- Modifying a file system
- Managing a file system
- Destroying a file system
- Upgrading disk layout versions
- Section VII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- About NFS file sharing
- Displaying file systems and snapshots that can be exported
- Exporting an NFS share
- Displaying exported directories
- About managing NFS shares using netgroups
- Unexporting a directory or deleting NFS options
- Exporting an NFS share for Kerberos authentication
- Mounting an NFS share with Kerberos security from the NFS client
- Exporting an NFS snapshot
- Creating and maintaining CIFS shares
- About managing CIFS shares
- Exporting a directory as a CIFS share
- Configuring a CIFS share as secondary storage for an Enterprise Vault store
- Exporting the same file system/directory as a different CIFS share
- About the CIFS export options
- Setting share properties
- Displaying CIFS share properties
- Hiding system files when adding a CIFS normal share
- Allowing specified users and groups access to the CIFS share
- Denying specified users and groups access to the CIFS share
- Exporting a CIFS snapshot
- Deleting a CIFS share
- Modifying a CIFS share
- Making a CIFS share shadow copy aware
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section VIII. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Use cases for compressing files
- Best practices for using compression
- Compression tasks
- Compressing files
- Showing the scheduled compression job
- Scheduling compression jobs
- Listing compressed files
- Uncompressing files
- Modifying the scheduled compression
- Removing the specified schedule
- Stopping the schedule for a file system
- Removing the pattern-related rule for a file system
- Removing the modified age related rule for a file system
- Configuring episodic replication
- About Veritas Access episodic replication
- How Veritas Access episodic replication works
- Starting Veritas Access episodic replication
- Setting up communication between the source and the destination clusters
- Setting up the file systems to replicate
- Setting up files to exclude from an episodic replication unit
- Scheduling the episodic replication
- Defining what to replicate
- About the maximum number of parallel episodic replication jobs
- Managing an episodic replication job
- Replicating compressed data
- Displaying episodic replication job information and status
- Synchronizing an episodic replication job
- Behavior of the file systems on the episodic replication destination target
- Accessing file systems configured as episodic replication destinations
- Episodic replication job failover and failback
- Configuring continuous replication
- About Veritas Access continuous replication
- How Veritas Access continuous replication works
- Starting Veritas Access continuous replication
- Setting up communication between the source and the target clusters
- Setting up the file system to replicate
- Managing continuous replication
- Displaying continuous replication information and status
- Unconfiguring continuous replication
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- About instant rollbacks
- Creating a space-optimized rollback
- Creating a full-sized rollback
- Listing Veritas Access instant rollbacks
- Restoring a file system from an instant rollback
- Refreshing an instant rollback from a file system
- Bringing an instant rollback online
- Taking an instant rollback offline
- Destroying an instant rollback
- Creating a shared cache object for Veritas Access instant rollbacks
- Listing cache objects
- Destroying a cache object of a Veritas Access instant rollback
- Section IX. Reference
- Index
Adding a new node to a Veritas Access cluster
This section describes the manual steps for adding a node to a cluster when SSH communication is disabled.
Prerequisites
The supported operating system version is RHEL 7.4.
It is assumed that the Veritas Access image is present on your local system at the /access_build_dir/rhel7_x86_64/ location. The cluster is named clus and the cluster nodes are named clus_01 and clus_02. Cluster node names must be unique across all nodes.
Install and run Veritas Access on a single node and then add a new node to create a two-node cluster.
The SSH service is stopped on all the nodes.
Assume that the public NICs are pubeth0 and pubeth1, and the private NICs are priveth0 and priveth1. NIC names must be consistent across all nodes: the public NIC names and the private NIC names must be the same on every node.
Use 172.16.0.3 as the private IP address for clus_01 and 172.16.0.4 as the private IP address for clus_02.
The new node is added to a freshly installed Veritas Access cluster.
To add a new node to a Veritas Access cluster
- Copy the Veritas Access image to the new node of the desired cluster.
- Stop the SSH daemon on all the nodes.
# systemctl stop sshd
- Verify that the following rpms are installed. If not, install them from the RHEL repository.
bash-4.2.46-28.el7.x86_64
lsscsi-0.27-6.el7.x86_64
initscripts-9.49.39-1.el7.x86_64
iproute-3.10.0-87.el7.x86_64
kmod-20-15.el7.x86_64
coreutils-8.22-18.el7.x86_64
binutils-2.25.1-31.base.el7.x86_64
python-requests-2.6.0-1.el7_1.noarch
python-urllib3-1.10.2-3.el7.noarch
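The following one-liner is a quick way to find which of these packages are missing. It is a sketch, not part of the documented procedure; it simply queries each package name from the list above with rpm -q:
# for pkg in bash lsscsi initscripts iproute kmod coreutils binutils python-requests python-urllib3; do rpm -q $pkg > /dev/null 2>&1 || echo "$pkg is not installed"; done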
- Install the required operating system rpms.
Create a repo file.
# cat /etc/yum.repos.d/os.repo
[veritas-access-os-rpms]
name=Veritas Access OS RPMS
baseurl=file:///access_build_dir/rhel7_x86_64/os_rpms/
enabled=1
gpgcheck=0
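If the file does not exist yet, one way to create it is with a here-document; this is a sketch that writes exactly the entries shown above:
# cat > /etc/yum.repos.d/os.repo << 'EOF'
[veritas-access-os-rpms]
name=Veritas Access OS RPMS
baseurl=file:///access_build_dir/rhel7_x86_64/os_rpms/
enabled=1
gpgcheck=0
EOF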
Run the following command:
# yum updateinfo
Run the following command:
# cd /access_build_dir/rhel7_x86_64/os_rpms/
Before running the following command, make sure that there is no RHEL subscription in the system. The yum repolist should point to veritas-access-os-rpms only.
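As a quick check (not part of the documented procedure), list the enabled repositories and confirm that veritas-access-os-rpms is the only one that appears:
# yum repolist
If any RHEL subscription repositories are listed, disable or remove them before you continue.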
# /usr/bin/yum -y install --setopt=protected_multilib=false perl-5.16.3-292.el7.x86_64.rpm nmap-ncat-6.40-7.el7.x86_64.rpm perl-LDAP-0.56-5.el7.noarch.rpm perl-Convert-ASN1-0.26-4.el7.noarch.rpm net-snmp-5.7.2-28.el7_4.1.x86_64.rpm net-snmp-utils-5.7.2-28.el7_4.1.x86_64.rpm openldap-2.4.44-5.el7.x86_64.rpm nss-pam-ldapd-0.8.13-8.el7.x86_64.rpm rrdtool-1.4.8-9.el7.x86_64.rpm wireshark-1.10.14-14.el7.x86_64.rpm vsftpd-3.0.2-22.el7.x86_64.rpm openssl-1.0.2k-12.el7.x86_64.rpm openssl-devel-1.0.2k-12.el7.x86_64.rpm iscsi-initiator-utils-6.2.0.874-4.el7.x86_64.rpm libpcap-1.5.3-9.el7.x86_64.rpm libtirpc-0.2.4-0.10.el7.x86_64.rpm nfs-utils-1.3.0-0.48.el7_4.2.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.el7.x86_64.rpm kernel-headers-3.10.0-693.el7.x86_64.rpm krb5-devel-1.15.1-8.el7.x86_64.rpm krb5-libs-1.15.1-8.el7.x86_64.rpm krb5-workstation-1.15.1-8.el7.x86_64.rpm perl-JSON-2.59-2.el7.noarch.rpm telnet-0.17-64.el7.x86_64.rpm apr-devel-1.4.8-3.el7_4.1.x86_64.rpm apr-util-devel-1.5.2-6.el7.x86_64.rpm glibc-common-2.17-196.el7_4.2.x86_64.rpm glibc-headers-2.17-196.el7_4.2.x86_64.rpm glibc-2.17-196.el7_4.2.x86_64.rpm glibc-2.17-196.el7_4.2.i686.rpm glibc-devel-2.17-196.el7_4.2.x86_64.rpm glibc-utils-2.17-196.el7_4.2.x86_64.rpm nscd-2.17-196.el7_4.2.x86_64.rpm sysstat-10.1.5-12.el7.x86_64.rpm libibverbs-utils-13-7.el7.x86_64.rpm libibumad-13-7.el7.x86_64.rpm opensm-3.3.19-1.el7.x86_64.rpm opensm-libs-3.3.19-1.el7.x86_64.rpm infiniband-diags-1.6.7-1.el7.x86_64.rpm sg3_utils-libs-1.37-12.el7.x86_64.rpm sg3_utils-1.37-12.el7.x86_64.rpm libyaml-0.1.4-11.el7_0.x86_64.rpm memcached-1.4.15-10.el7_3.1.x86_64.rpm python-memcached-1.59-1.noarch.rpm python-paramiko-2.1.1-4.el7.noarch.rpm python-backports-1.0-8.el7.x86_64.rpm python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch.rpm python-chardet-2.2.1-1.el7_1.noarch.rpm python-six-1.9.0-2.el7.noarch.rpm python-setuptools-0.9.8-7.el7.noarch.rpm python-ipaddress-1.0.16-2.el7.noarch.rpm targetcli-2.1.fb46-1.el7.noarch.rpm fuse-2.9.2-8.el7.x86_64.rpm fuse-devel-2.9.2-8.el7.x86_64.rpm fuse-libs-2.9.2-8.el7.x86_64.rpm PyYAML-3.10-11.el7.x86_64.rpm arptables-0.0.4-8.el7.x86_64.rpm ipvsadm-1.27-7.el7.x86_64.rpm ntpdate-4.2.6p5-25.el7_3.2.x86_64.rpm ntp-4.2.6p5-25.el7_3.2.x86_64.rpm autogen-libopts-5.18-5.el7.x86_64.rpm ethtool-4.8-1.el7.x86_64.rpm net-tools-2.0-0.22.20131004git.el7.x86_64.rpm cups-libs-1.6.3-29.el7.x86_64.rpm avahi-libs-0.6.31-17.el7.x86_64.rpm psmisc-22.20-15.el7.x86_64.rpm strace-4.12-4.el7.x86_64.rpm vim-enhanced-7.4.160-2.el7.x86_64.rpm at-3.1.13-22.el7_4.2.x86_64.rpm rsh-0.17-76.el7_1.1.x86_64.rpm unzip-6.0-16.el7.x86_64.rpm zip-3.0-11.el7.x86_64.rpm bzip2-1.0.6-13.el7.x86_64.rpm mlocate-0.26-6.el7.x86_64.rpm lshw-B.02.18-7.el7.x86_64.rpm jansson-2.10-1.el7.x86_64.rpm ypbind-1.37.1-9.el7.x86_64.rpm yp-tools-2.14-5.el7.x86_64.rpm perl-Net-Telnet-3.03-19.el7.noarch.rpm tzdata-java-2018d-1.el7.noarch.rpm perl-XML-Parser-2.41-10.el7.x86_64.rpm lsof-4.87-4.el7.x86_64.rpm cairo-1.14.8-2.el7.x86_64.rpm pango-1.40.4-1.el7.x86_64.rpm libjpeg-turbo-1.2.90-5.el7.x86_64.rpm sos-3.4-13.el7_4.noarch.rpm traceroute-2.0.22-2.el7.x86_64.rpm openldap-clients-2.4.44-5.el7.x86_64.rpm
- Install the third-party rpms:
# cd /access_build_dir/rhel7_x86_64/third_party_rpms/ # /bin/rpm -U -v --oldpackage --nodeps --replacefiles --replacepkgs ctdb-4.6.6-1.el7.x86_64.rpm perl-Template-Toolkit-2.24-5.el7.x86_64.rpm perl-Template-Extract-0.41-1.noarch.rpm perl-AppConfig-1.66-20.el7.noarch.rpm perl-File-HomeDir-1.00-4.el7.noarch.rpm samba-common-4.6.6-1.el7.x86_64.rpm samba-common-libs-4.6.6-1.el7.x86_64.rpm samba-client-4.6.6-1.el7.x86_64.rpm samba-client-libs-4.6.6-1.el7.x86_64.rpm samba-4.6.6-1.el7.x86_64.rpm samba-winbind-4.6.6-1.el7.x86_64.rpm samba-winbind-clients-4.6.6-1.el7.x86_64.rpm samba-winbind-krb5-locator-4.6.6-1.el7.x86_64.rpm libsmbclient-4.6.6-1.el7.x86_64.rpm samba-krb5-printing-4.6.6-1.el7.x86_64.rpm samba-libs-4.6.6-1.el7.x86_64.rpm libwbclient-4.6.6-1.el7.x86_64.rpm samba-winbind-modules-4.6.6-1.el7.x86_64.rpm libnet-1.1.6-7.el7.x86_64.rpm lmdb-libs-0.9.13-2.el7.x86_64.rpm python-msgpack-0.4.6-1.el7ost.x86_64.rpm python-flask-0.10.1-4.el7.noarch.rpm python-itsdangerous-0.23-2.el7.noarch.rpm libevent-libs-2.0.22-1.el7.x86_64.rpm python-werkzeug-0.9.1-2.el7.noarch.rpm python-jinja2-2.7.2-2.el7.noarch.rpm sdfs-7.4.0.0-1.x86_64.rpm psutil-4.3.0-1.x86_64.rpm python-crontab-2.2.4-1.noarch.rpm libuv-1.9.1-1.el7.x86_64.rpm
In this command, you can update the rpm version based on the rpms in the /access_build_dir/rhel7_x86_64/third_party_rpms/ directory.
- Install the Veritas Access rpms.
Run the following command:
# cd /access_build_dir/rhel7_x86_64/rpms/repodata/
# cat access73.repo > /etc/yum.repos.d/access73.repo
Update the baseurl and gpgkey entries in the /etc/yum.repos.d/access73.repo file so that they point to the yum repository directory.
baseurl=file:///access_build_dir/rhel7_x86_64/rpms/
gpgkey=file:///access_build_dir/rhel7_x86_64/rpms/RPM-GPG-KEY-veritas-access7
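A minimal non-interactive way to make both edits (a sketch, assuming the existing lines in access73.repo begin with baseurl= and gpgkey=):
# sed -i 's|^baseurl=.*|baseurl=file:///access_build_dir/rhel7_x86_64/rpms/|' /etc/yum.repos.d/access73.repo
# sed -i 's|^gpgkey=.*|gpgkey=file:///access_build_dir/rhel7_x86_64/rpms/RPM-GPG-KEY-veritas-access7|' /etc/yum.repos.d/access73.repo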
Run the following commands to refresh the yum repository.
# yum repolist
# yum grouplist
Run the following command.
# yum -y groupinstall ACCESS73
Run the following command.
# /opt/VRTS/install/bin/add_install_scripts
- Install the Veritas NetBackup client software.
# cd /access_build_dir/rhel7_x86_64
# /opt/VRTSnas/install/image_install/netbackup/install_netbackup.pl /access_build_dir/rhel7_x86_64/netbackup
- Create soft links for Veritas Access. Run the following command.
# /opt/VRTSnas/pysnas/install/install_tasks.py all_rpms_installed parallel
- License the product.
Register the permanent VLIC key.
# /opt/VRTSvlic/bin/vxlicinstupgrade -k <Key>
Verify that the VLIC key is installed properly:
# /opt/VRTSvlic/bin/vxlicrep
Register the SLIC key file:
# /opt/VRTSslic/bin/vxlicinstupgrade -k $keyfile
Verify that the SLIC key is installed properly:
# /opt/VRTSslic/bin/vxlicrep
- Take a backup of the following files:
/etc/sysconfig/network
/etc/sysconfig/network-scripts/ifcfg-*
/etc/resolv.conf
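One way to take the backup (a sketch; the /root/net-backup destination is an arbitrary choice, use any location you prefer):
# mkdir -p /root/net-backup
# cp -a /etc/sysconfig/network /etc/sysconfig/network-scripts/ifcfg-* /etc/resolv.conf /root/net-backup/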
- Configure the private NIC:
# cd /etc/sysconfig/network-scripts/
Configure the first private NIC.
Run the following command.
# ip link set down priveth0
Update the ifcfg-priveth0 file with the following:
DEVICE=priveth0
NAME=priveth0
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
Add entries in the ifcfg-priveth0 file.
HWADDR=<MAC address>
IPADDR=172.16.0.3 (use IPADDR=172.16.0.4 for the second node)
NETMASK=<netmask>
NM_CONTROLLED=no
For example:
HWADDR=00:0c:29:0c:8d:69
IPADDR=172.16.0.3
NETMASK=255.255.248.0
NM_CONTROLLED=no
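Putting both sets of entries together, the completed ifcfg-priveth0 file on the first node would look like the following sketch (the MAC address and netmask are the example values above and must match your own hardware and network):
DEVICE=priveth0
NAME=priveth0
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
HWADDR=00:0c:29:0c:8d:69
IPADDR=172.16.0.3
NETMASK=255.255.248.0
NM_CONTROLLED=no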
Run the following command.
# ip link set up priveth0
Configure the second private NIC.
You can configure the second private NIC in the same way. Instead of priveth0, use priveth1. You do not need to provide an IPADDR for priveth1, as shown in the sketch below.
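A sketch of the corresponding ifcfg-priveth1 file, following the same pattern but without an IPADDR entry (the MAC address is a placeholder for the actual address of priveth1):
DEVICE=priveth1
NAME=priveth1
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
HWADDR=<MAC address>
NM_CONTROLLED=no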
- Configure the public NIC.
# cd /etc/sysconfig/network-scripts/
Configure the second public NIC, pubeth1 (on which the host IP is not already configured).
Run the following command:
# ip link set down pubeth1
Update the ifcfg-pubeth1 file with the following:
DEVICE=pubeth1
NAME=pubeth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
Add entries in the ifcfg-pubeth1 file.
HWADDR=<MAC address>
IPADDR=<pubeth1_pub_ip>
NETMASK=<netmask>
NM_CONTROLLED=no
Run the following command.
# ip link set up pubeth1
Configure the first public NIC, pubeth0.
As the first public NIC will go down, make sure that you access the system directly from its console.
Run the following command:
# ip link set down pubeth0
Update the ifcfg-pubeth0 file with the following:
DEVICE=pubeth0
NAME=pubeth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
Add entries in the ifcfg-pubeth0 file.
HWADDR=<MAC address>
IPADDR=<pubeth0_pub_ip>
NETMASK=<netmask>
NM_CONTROLLED=no
Run the following command.
# ip link set up pubeth0
Verify the changes.
# ip a
Run the following command.
# service network restart
SSH to the above-mentioned IP should work if you start the sshd service.
- Configure the DNS.
Update the /etc/resolv.conf file by adding the following entries:
nameserver <DNS>
domain <master node name>
For example:
nameserver 10.182.128.134
domain clus_01
- Configure the gateway.
Update the /etc/sysconfig/network file.
GATEWAY=$gateway
NOZEROCONF=yes
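For example (the gateway address below is a placeholder and must be replaced with the default gateway of your public network):
GATEWAY=10.182.128.1
NOZEROCONF=yes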
- Update the configfileTemplate file.
Enter the following command:
# cd /access_build_dir/rhel7_x86_64/manual_install/network
Update the configfileTemplate file with the current system details:
Use master as the mode for the master node and slave as the mode for the other nodes.
This template file is used by the configuration utility script to create configuration files.
Provide the same name (current host name) in old_hostname and new_hostname.
If you install Veritas Access on a single node, then that node acts as the master node. Hence, you have to provide only the master node information in the template file.
- Generate the network configuration files.
The configuration utility script named configNetworkHelper.pl creates the required configuration files.
# cd /access_build_dir/rhel7_x86_64/manual_install/network
# chmod +x configNetworkHelper.pl
Run the configuration utility script.
# ./configNetworkHelper.pl -f configfileTemplate
# cat /opt/VRTSnas/scripts/net/network_options.conf > /opt/VRTSnas/conf/network_options.conf
# sed -i -e '$a\' /opt/VRTSnas/conf/net_console_ip.conf
Update the /etc/hosts file.
# echo "172.16.0.3 <master hostname>" >> /etc/hosts
# echo "172.16.0.4 <slave node name>" >> /etc/hosts
For example:
# echo "172.16.0.3 clus_01" >> /etc/hosts # echo "172.16.0.4 clus_02" >> /etc/hosts
- Create the S3 configuration file.
# cat /opt/VRTSnas/conf/ssnas.yml
ObjectAccess:
  config: {admin_port: 8144, s3_port: 8143, server_enable: 'no', ssl: 'no'}
  defaults:
    fs_blksize: '8192'
    fs_encrypt: 'off'
    fs_nmirrors: '2'
    fs_options: ''
    fs_pdirenable: 'yes'
    fs_protection: disk
    fs_sharing: 'no'
    fs_size: 20G
    fs_type: mirrored
  poollist: []
filesystems: {}
groups: {}
pools: {}
- Set up the Storage Foundation cluster.
# cd /access_build_dir/rhel7_x86_64/manual_install/network/SetupClusterScripts
# mkdir -p /opt/VRTSperl/lib/site_perl/UXRT72/CPIR/Module/veritas/
# cp sfcfsha_ctrl.sh /opt/VRTSperl/lib/site_perl/UXRT72/CPIR/Module/veritas/sfcfsha_ctrl.sh
# cp module_script.pl /tmp/
# chmod +x /tmp/module_script.pl
Update the cluster name, system name, and NIC name in the following command and execute it:
# /tmp/module_script.pl veritas::sfcfsha_config '{"cluster_name" => "<Provide cluster name here>","component" => "sfcfsha","state" => "present","vcs_users" => "admin:password:Administrators,user1:passwd1:Operators", "vcs_clusterid" => 14865,"cluster_uuid" => "1391a-443ab-2b34c","method" => "ethernet","systems" => "<Provide hostnames separated by comma>", "private_link" => "<Private NIC name separated by comma>"}'
For example, if the cluster name is clus and the host names are clus_01 and clus_02:
# /tmp/module_script.pl veritas::sfcfsha_config '{"cluster_name" => "clus","component" => "sfcfsha","state" => "present","vcs_users" => "admin:password:Administrators,user1:passwd1:Operators", "vcs_clusterid" => 14865,"cluster_uuid" => "1391a-443ab-2b34c", "method" => "ethernet","systems" => "clus_01,clus_02", "private_link" => "priveth0,priveth1"}'
Update and configure the following files:
# rpm -q --queryformat '%{VERSION}|%{BUILDTIME:date}|%{INSTALLTIME:date}|%{VERSION}\n' VRTSnas > /opt/VRTSnas/conf/version.conf
# echo NORMAL > /opt/VRTSnas/conf/cluster_type
# echo 'path /opt/VRTSsnas/core/kernel/' >> /etc/kdump.conf
# sed -i '/^core_collector\b/d;' /etc/kdump.conf
# echo 'core_collector makedumpfile -c --message-level 1 -d 31' >> /etc/kdump.conf
- Start the Veritas Access product processes.
Provide the current host name in the following command and execute it.
# /tmp/module_script.pl veritas::process '{"state" => "present", "seednode" => "<provide current hostname here>","component" => "sfcfsha"}'
For example, if the hostname of the new node is clus_02:
# /tmp/module_script.pl veritas::process '{"state" => "present","seednode" => "clus_02","component" => "sfcfsha"}'
Run the following command.
# /opt/VRTSnas/pysnas/install/install_tasks.py all_services_running serial
- Create the CVM group.
If the /etc/vx/reconfig.d/state.d/install-db file exists, then execute the following command.
# mv /etc/vx/reconfig.d/state.d/install-db /etc/vx/reconfig.d/state.d/install-db.a
If CVM is not already configured, then run the following command on the master node.
# /opt/VRTS/bin/cfscluster config -t 200 -s
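You can confirm that CVM is configured by querying the cluster status (a sketch; cfscluster status reports the configuration and run state on the nodes):
# /opt/VRTS/bin/cfscluster status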
- Enable hacli.
Verify the /etc/VRTSvcs/conf/config/main.cf file. If "HacliUserLevel = COMMANDROOT" exists, then move to the next step (verifying that the HAD daemon is running); otherwise, follow the steps below to enable hacli on your system.
# /opt/VRTS/bin/hastop -local
Update the /etc/VRTSvcs/conf/config/main.cf file. If the entry does not exist, then add the following line inside the cluster <cluster name> ( ) stanza:
HacliUserLevel = COMMANDROOT
For example:
cluster clus (
    UserNames = { admin = aHIaHChEIdIIgQIcHF, user1 = aHIaHChEIdIIgFEb }
    Administrators = { admin }
    Operators = { user1 }
    HacliUserLevel = COMMANDROOT
    )
# /opt/VRTS/bin/hastart
Verify that hacli is working.
# /opt/VRTS/bin/hacli -cmd "ls /" -sys clus_01
- Verify that the HAD daemon is running.
# /opt/VRTS/bin/hastatus -sum
- Indicate that SSH is disabled.
On all the nodes, create a communication.conf file to enable hacli instead of ssh.
vim /opt/VRTSnas/conf/communication.conf
{
    "WorkingVersion": "1",
    "Version": "1",
    "CommunicationType": "HACLI"
}
- Update the /etc/llthosts file for the new node. Run the following command on the master node:
# echo "1 clus_02" >> /etc/llthosts
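As a quick check (a sketch that assumes the master node was assigned LLT node ID 0 during the initial single-node installation), the file on the master node should now list both nodes:
# cat /etc/llthosts
0 clus_01
1 clus_02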
- Restart the LLT service. Run the following command on the master node:
# service llt restart
- Verify that the system is configured correctly.
Verify that LLT is configured correctly.
# lltconfig -a list
Verify that GAB is configured properly.
# gabconfig -a
Verify the LLT state.
# lltstat -nvv
The vxconfigd daemon should be online on both nodes.
# ps -ef | grep vxconfigd
For example:
# ps -ef | grep vxconfigd
root 13393 1 0 01:33 ? 00:00:00 vxconfigd -k -m disable -x syslog
- Run the join operation on the new node.
Ensure that HAD is running on all the nodes.
# /opt/VRTS/bin/hastatus
Run the following command:
# /opt/VRTSnas/install/image_install/installer -m join
- Update the groups lists with the new node.
Run the following commands on the master node.
# /opt/VRTS/bin/haconf -makerw
Update the sysname with the new node name.
max_pri = Number of nodes after adding the new node - 1
# sysname=<new node name>;max_pri=<max_pri_value>;for i in `/opt/VRTS/bin/hagrp -list | awk '{print $1}' | sort | uniq`; do /opt/VRTS/bin/hagrp -modify $i SystemList -add $sysname $max_pri; doneFor example:
# sysname=clus_02;max_pri=1;for i in `/opt/VRTS/bin/hagrp -list | awk '{print $1}' | sort | uniq`; do /opt/VRTS/bin/hagrp -modify $i SystemList -add $sysname $max_pri; done
Note:
If the command gives any warning for child dependency, then run the command again.
Verify that the system list is updated.
# for i in `/opt/VRTS/bin/hagrp -list | awk '{print $1}' | sort | uniq`; do /opt/VRTS/bin/hagrp -display $i | grep -i systemList; done
Enable the groups.
# for i in `/opt/VRTS/bin/hagrp -list | awk '{print $1}' | sort | uniq`; do /opt/VRTS/bin/hagrp -value $i Enabled; done
Update the AutoStartList.
# sysname=<new node name>;for i in `/opt/VRTS/bin/hagrp -list | awk '{print $1}' | sort | uniq`; do ret=`/opt/VRTS/bin/hagrp -value $i AutoStartList`;if [ ! -z "$ret" ]; then echo "updating group: $i"; /opt/VRTS/bin/hagrp -modify $i AutoStartList -add $sysname; fi;doneUpdate the preonline system list.
# sysname=<new node name>;for i in `/opt/VRTS/bin/hagrp -list | awk '{print $1}' | sort | uniq`; do ret=`/opt/VRT/opt/VRTS/bin/S/bin/hagrp -value $i PreOnline`;if [ $ret -eq 1 ]; then echo "updating group: $i"; /opt/VRTS/bin/hagrp -modify $i PreOnline 1 -sys $sysname; fi;done# /opt/VRTS/bin/haconf -dump -makero
# /opt/VRTS/bin/hastop -all
Run the following command on all the nodes.
# /opt/VRTS/bin/hastart
- Configure CVM and CFS. Run the following commands on the master node.
# /opt/VRTS/bin/haconf -makerw
# system=<master node>;new_node_name=<new node>; cvmres=`/opt/VRTS/bin/hares -list Type=CVMCluster -localclus | awk '{print $1}' | uniq`;n=`/opt/VRTS/bin/hasys -value $new_node_name LLTNodeId`;/opt/VRTS/bin/hares -modify $cvmres CVMNodeId -add $new_node_name $nFor example:
# system=clus_01;new_node_name=clus_02;cvmres=`/opt/VRTS/bin/hares -list Type=CVMCluster -localclus | awk '{print $1}' | uniq`;n=`/opt/VRTS/bin/hasys -value $new_node_name LLTNodeId`;/opt/VRTS/bin/hares -modify $cvmres CVMNodeId -add $new_node_name $n
The command makes the following updates.
Before the command is executed.
[root@clus_01 ~]# /opt/VRTS/bin/hares -value cvm_clus CVMNodeId
clus_01 0
After the command is executed.
[root@clus_01 ~]# /opt/VRTS/bin/hares -value cvm_clus CVMNodeId
clus_01 0 clus_02 1
Update the ActivationMode attribute with the new node.
You have to set the ActivationMode for the newly added node only if it is set for the master node.
Set the ActivationMode of the new node to be the same as that of the master node.
Run the following command on the master node.
# master_node=<master node name>;new_node_name=<new node name>; cvmsg_name=`/opt/VRTS/bin/hares -display -attribute Group -type CVMCluster -localclus | tail -1 | awk '{print $4}'`; vxfsckd_name=`/opt/VRTS/bin/hares -list Group=$cvmsg_name Type=CFSfsckd| awk 'NR==1{print $1}'`; vxfsckd_activation=`/opt/VRTS/bin/hares -value $vxfsckd_name ActivationMode $master_node`;if [ ! -z "$vxfsckd_activation" ]; then echo "new activation mode is $vxfsckd_activation"; /opt/VRTS/bin/hares -modify $vxfsckd_name ActivationMode $vxfsckd_activation -sys $new_node_name; fi;
You can verify if the ActivationMode is set using the /opt/VRTS/bin/hares -value vxfsckd ActivationMode command.
# /opt/VRTS/bin/haconf -dump -makero
Run vxclustadm on all nodes of the cluster except the new node.
# vxclustadm -m vcs -t gab reinit
If the output of the command says that the node is not in the cluster, then run the following command and then run vxclustadm again.
# vxclustadm -m vcs -t gab startnode
Verify the state of the node.
# vxclustadm -v nodestate
Set the asymmetry value for the storage_connectivity key. Run the following command on the new node.
# assymetric_value=`vxtune storage_connectivity | awk 'NR==3{print $2}'`;echo $assymetric_value | grep asymmetric; if [ $? -eq 0 ]; then vxtune storage_connectivity $assymetric_value; fi
- Copy the configuration files from the console node to the new node using the reconfig.sh script. Run the following command on the new node.
# /opt/VRTSnas/scripts/cluster/reconfig.sh
- Configure the NFS group. Run the following command on the master node.
If SystemList of NFS does not include the newly added node, then update it.
Verify if the newly added node is included in the SystemList.
# /opt/VRTS/bin/hagrp -display NFS | grep SystemList
If not, then execute the following command to include the new node.
# sysname=<new node name>;max_pri=<total no of nodes including new nodes - 1>; /opt/VRTS/bin/hagrp -modify N FS SystemList -add $sysname $max_pri
Update Nproc.
# /opt/VRTS/bin/haconf -makerw
# master_node=<master node name>; new_added_node=<newly added node name>; for res in `/opt/VRTS/bin/hares -list Type=NFS | awk '{print $1}' | sort -u`; do global=`/opt/VRTS/bin/hares -display $res | awk '/Nproc/ {print $3}'`;if [ "$global" != "global" ]; then nfsdcnt=`/opt/VRTS/bin/hares -value $res Nproc $master_node`;/opt/VRTS/bin/hares -modify $res Nproc $nfsdcnt -sys $new_added_node;fi;done
For example:
master_node=clus_01; new_added_node=clus_02;for res in `/opt/VRTS/bin/hares -list Type=NFS | awk '{print $1}' | sort -u`; do global=`/opt/VRTS/bin/hares -display $res | awk '/Nproc/ {print $3}'`; if [ "$global" != "global" ]; then nfsdcnt=`/opt/VRTS/bin/hares -value $res Nproc $master_node`;/opt/VRTS/bin/hares -modify $res Nproc $nfsdcnt -sys $new_added_node;fi;done
Verify that Nproc is updated correctly using:
# /opt/VRTS/bin/hares -display ssnas_nfs | grep Nproc
For example:
[root@clus_01 ~]# /opt/VRTS/bin/hares -display ssnas_nfs | grep Nproc
ssnas_nfs Nproc clus_01 96
ssnas_nfs Nproc clus_02 96
Enable resource:
# /opt/VRTS/bin/hares -modify ssnas_nfs Enabled 1
# /opt/VRTS/bin/haconf -dump -makero
- Enable snas services. Run the following command on the newly added node.
# /opt/VRTSnas/scripts/misc/nas_services.sh enable
- Create disk information on the new node. Run the following command on the newly added node.
# /opt/VRTSnas/scripts/storage/create_disks_info.sh
# service atd start
# /usr/bin/at -f /opt/VRTSnas/scripts/report/event_notify.sh now
- Run the following command on all the nodes.
# echo "<first private nic name>" > /opt/VRTSnas/conf/net_priv_dev.conf
For example:
# echo "priveth0" > /opt/VRTSnas/conf/net_priv_dev.conf
- If you want to configure the GUI, run the following command on the new node.
# /opt/VRTSnas/pysnas/bin/isaconfig --host <new node name> --ip <new node ip|new node hostname>
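For example, a sketch using the node names from this procedure (substitute the public IP address of the new node if you prefer to pass an IP):
# /opt/VRTSnas/pysnas/bin/isaconfig --host clus_02 --ip clus_02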
You can now use the Veritas Access cluster.