Veritas Access Installation Guide
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading the operating system and Veritas Access
- Migrating from scale-out and erasure-coded file systems
- Migrating LLT over Ethernet to LLT over UDP
- Performing a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
Migration of file systems which are exported as shares
Perform the following steps to migrate file systems that are exported as shares.
- Check for the presence of shares on the given scale-out file system (a combined example follows these steps).
storage> fs list
- Save the share settings.
- Stop and delete the shares present on the scale-out file system.
- Create the shares with the same settings on the destination CFS after the migration is complete.
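For example, to take stock of what has to be migrated, you can list the file systems and the exported shares before you begin. These are the same commands that are described in the procedures that follow:
storage> fs list
nfs> share show
cifs> share show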
To migrate NFS shares
- Save the settings of the existing shares and use the same settings to recreate the shares on the destination file system.
For NFS, the scale-out file system uses the NFS Ganesha (GNFS) server. You have to switch to the kernel NFS (KNFS) server for the destination CFS.
Use the following command to display share information such as the NFS options, export directory, and host name.
nfs> share show
- Unexport the share using the following command:
nfs> share delete <export_dir>
- Stop the NFS server using the following command:
nfs> server stop
- Recreate the NFS shares on the destination CFS. Start the NFS server in KNFS mode; if the server is running in GNFS mode, switch it to KNFS.
nfs> server status
nfs> server switch
- Start the server.
nfs> server start
- Add the share to the destination CFS using the same settings as in the source scale-out file system.
nfs> share add <nfsoptions> <export_dir> [client]
For example:
nfs> share add rw,async /vx/scaleoutfs1
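If access to the export must be restricted, the optional [client] argument shown in the syntax above can be appended. The client host name here is only a placeholder:
nfs> share add rw,async /vx/scaleoutfs1 client1.example.com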
To migrate CIFS shares
- Save the existing configuration.
cifs> show
Name                     Value
----                     -----
netbios name             vmclus1
ntlm auth                yes
allow trusted domains    no
homedirfs
aio size                 0
idmap backend            rid:10000-1000000
workgroup                WORKGROUP
security                 user
Domain
Domain user
Domain Controller
Clustering Mode          normal
Data Migration           no
- Make a note of the share name.
cifs> share show
ShareName    File System  Share Options
===========  ==========   ==============================================
mycifsshare  lfs2         owner=root,group=root,fs_mode=1777,rw,full_acl
- Make a note of the homedir, if any.
cifs> homedir show
- Get the list of local users.
cifs> local user show
List of Users
-------------
admin
user1
- Get the local group.
cifs> local group show
List of groups
-------------
nogroup
selftest
mygrp
- Create a new CIFS share on the destination CFS. Check the status of the CIFS server.
cifs> server status
- Start the CIFS server.
cifs> server start
- Add the file system as a CIFS share by specifying the name of the CFS, the share name, and the share modes.
cifs> share add file_system sharename [@virtual_ip] [cifsoptions]
For example:
cifs> share add mycifs myshare ro
cifs> share show
- Allow the user to access the share.
cifs> share allow sharename @group1 [,@group2,user1,user2,...]
For example:
cifs> share allow myshare user1
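Local groups can be allowed in the same way by prefixing the group name with @. For example, to also allow the mygrp group that was listed earlier (illustrative):
cifs> share allow myshare @mygrp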
To migrate the existing buckets on the scale-out file system
- Get the name of the file system on which the bucket was created.
objectaccess> bucket show
Bucket Name    FileSystem        Pool(s)    Owner
===========    ==============    =======    =====
cloud1         S3fs1611925982    mypool     root
- Create the new CFS.
- Copy the data of the old file system on which the bucket was created to the new file system using the migration_tool.py script.
- When you migrate the S3 bucket from a scale-out file system to CFS, the bucket cannot simply be mapped to the new file system because a mapping for it already exists in the S3 database. The unconfig_s3bucket.py script removes the existing entry for the scale-out file system from the S3 database so that the bucket can be mapped to the new CFS directory path (see the example after this procedure).
./unconfig_s3bucket.py fs_name
where fs_name is the name of the file system to be removed from the S3 bucket mapping.
- After the entry is removed from the S3 database, use the objectaccess map command to map the new CFS directory.
objectaccess> map fs_path user_name
- Delete the old scale-out file system bucket, if required. This operation does not delete the source file system.
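For example, for the bucket cloud1 shown above, which was created on the scale-out file system S3fs1611925982, the remapping could look as follows. The destination path /vx/cfs1/cloud1 is a placeholder for your own CFS directory:
./unconfig_s3bucket.py S3fs1611925982
objectaccess> map /vx/cfs1/cloud1 root
objectaccess> bucket show
The bucket show command can then be used to confirm that the bucket is listed against the new CFS.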
To change the S3 configuration for future buckets if the fs_type was set to largefs
- Check the current settings of the S3 server.
objectaccess> show
- If the fs_type is set to largefs, set the fs_type to the desired layout.
objectaccess> set fs_type
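For example, to stop creating future bucket file systems with the largefs layout and confirm the change (the layout value below is only an illustration; choose the layout that you want):
objectaccess> set fs_type simple
objectaccess> show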
To recreate the bucket
- Set the default pool.
objectaccess> set pools pool_name
- Enable the server.
objectaccess> server enable
- Start the server.
objectaccess> server start
- Set the file system size, as required.
objectaccess> set fs_size size
- Set the file system type and layout for the CFS.
objectaccess> set fs_type layout
- Create keys for user authentication and save the access key and secret key.
/opt/VRTSnas/scripts/utils/objectaccess/objectaccess_client.py --create_key --server ADMIN_URL --username root --password P@ssw0rd --insecure
The ADMIN_URL is admin.<cluster_name>:port, where the port is 8144. This URL should point to the Access Appliance management console IP address.
- Map the bucket to the existing file system.
objectaccess> map fs_path user_name
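As an end-to-end illustration, assume a cluster named vmclus1 (so the ADMIN_URL is admin.vmclus1:8144) and a destination CFS directory /vx/cfs1/mybucket that is mapped for the root user. All of these values are placeholders for your own environment:
/opt/VRTSnas/scripts/utils/objectaccess/objectaccess_client.py --create_key --server admin.vmclus1:8144 --username root --password P@ssw0rd --insecure
objectaccess> map /vx/cfs1/mybucket root
objectaccess> bucket show
The bucket show command can be used to confirm that the new bucket is listed against the CFS.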