Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- About configuring the Veritas Access network
- About bonding Ethernet interfaces
- Bonding Ethernet interfaces
- Configuring DNS settings
- About Ethernet interfaces
- Displaying current Ethernet interfaces and states
- Configuring IP addresses
- Configuring Veritas Access to use jumbo frames
- Configuring VLAN interfaces
- Configuring NIC devices
- Swapping network interfaces
- Excluding PCI IDs from the cluster
- About configuring routing tables
- Configuring routing tables
- Changing the firewall settings
- IP load balancing
- Configuring Veritas Access in IPv4 and IPv6 mixed mode
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- About storage provisioning and management
- About configuring disks
- About configuring storage pools
- Configuring storage pools
- About quotas for usage
- Enabling, disabling, and displaying the status of file system quotas
- Setting and displaying file system quotas
- Setting user quotas for users of specified groups
- About quotas for CIFS home directories
- About Flexible Storage Sharing
- Limitations of Flexible Storage Sharing
- Workflow for configuring and managing storage using the Veritas Access CLI
- Displaying information for all disk devices associated with the nodes in a cluster
- Displaying WWN information
- Importing new LUNs forcefully for new or existing pools
- Initiating host discovery of LUNs
- Increasing the storage capacity of a LUN
- Formatting or reinitializing a disk
- Removing a disk
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- About using the NFS server with Veritas Access
- Using the kernel-based NFS server
- Accessing the NFS server
- Displaying and resetting NFS statistics
- Configuring Veritas Access for ID mapping for NFS version 4
- Configuring the NFS client for ID mapping for NFS version 4
- About authenticating NFS clients
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About configuring Veritas Access for CIFS
- About configuring CIFS for standalone mode
- Configuring CIFS server status for standalone mode
- Changing security settings
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- Setting NTLM
- About setting trusted domains
- Specifying trusted domains that are allowed access to the CIFS server
- Allowing trusted domains access to CIFS when setting an IDMAP backend to rid
- Allowing trusted domains access to CIFS when setting an IDMAP backend to ldap
- Allowing trusted domains access to CIFS when setting an IDMAP backend to hash
- Allowing trusted domains access to CIFS when setting an IDMAP backend to ad
- About configuring Windows Active Directory as an IDMAP backend for CIFS
- Configuring the Active Directory schema with CIFS-schema extensions
- Configuring the LDAP client for authentication using the CLI
- Configuring the CIFS server with the LDAP backend
- Setting Active Directory trusted domains
- About storing account information
- Storing user and group accounts
- Reconfiguring the CIFS service
- About mapping user names for CIFS/NFS sharing
- About the mapuser commands
- Adding, removing, or displaying the mapping between CIFS and NFS users
- Automatically mapping UNIX users from LDAP to Windows users
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- Setting the CIFS aio_fork option
- About managing local users and groups
- Enabling CIFS data migration
- Configuring an FTP server
- About FTP
- Creating the FTP home directory
- Using the FTP server commands
- About FTP server options
- Customizing the FTP server options
- Administering the FTP sessions
- Uploading the FTP logs
- Administering the FTP local user accounts
- About the settings for the FTP local user accounts
- Configuring settings for the FTP local user accounts
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- About creating and maintaining file systems
- About encryption at rest
- Considerations for creating a file system
- Best practices for creating file systems
- Choosing a file system layout type
- Determining the initial extent size for a file system
- About striping file systems
- About creating a tuned file system for a specific workload
- About FastResync
- About fsck operation
- Setting retention in files
- Setting WORM over NFS
- Manually setting WORM-retention on a file over CIFS
- About managing application I/O workloads using maximum IOPS settings
- Creating a file system
- Bringing the file system online or offline
- Listing all file systems and associated information
- Modifying a file system
- Managing a file system
- Destroying a file system
- Upgrading disk layout versions
- Section VII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- About NFS file sharing
- Displaying file systems and snapshots that can be exported
- Exporting an NFS share
- Displaying exported directories
- About managing NFS shares using netgroups
- Unexporting a directory or deleting NFS options
- Exporting an NFS share for Kerberos authentication
- Mounting an NFS share with Kerberos security from the NFS client
- Exporting an NFS snapshot
- Creating and maintaining CIFS shares
- About managing CIFS shares
- Exporting a directory as a CIFS share
- Configuring a CIFS share as secondary storage for an Enterprise Vault store
- Exporting the same file system/directory as a different CIFS share
- About the CIFS export options
- Setting share properties
- Displaying CIFS share properties
- Hiding system files when adding a CIFS normal share
- Allowing specified users and groups access to the CIFS share
- Denying specified users and groups access to the CIFS share
- Exporting a CIFS snapshot
- Deleting a CIFS share
- Modifying a CIFS share
- Making a CIFS share shadow copy aware
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section VIII. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Use cases for compressing files
- Best practices for using compression
- Compression tasks
- Compressing files
- Showing the scheduled compression job
- Scheduling compression jobs
- Listing compressed files
- Uncompressing files
- Modifying the scheduled compression
- Removing the specified schedule
- Stopping the schedule for a file system
- Removing the pattern-related rule for a file system
- Removing the modified age related rule for a file system
- Configuring episodic replication
- About Veritas Access episodic replication
- How Veritas Access episodic replication works
- Starting Veritas Access episodic replication
- Setting up communication between the source and the destination clusters
- Setting up the file systems to replicate
- Setting up files to exclude from an episodic replication unit
- Scheduling the episodic replication
- Defining what to replicate
- About the maximum number of parallel episodic replication jobs
- Managing an episodic replication job
- Replicating compressed data
- Displaying episodic replication job information and status
- Synchronizing an episodic replication job
- Behavior of the file systems on the episodic replication destination target
- Accessing file systems configured as episodic replication destinations
- Episodic replication job failover and failback
- Configuring continuous replication
- About Veritas Access continuous replication
- How Veritas Access continuous replication works
- Starting Veritas Access continuous replication
- Setting up communication between the source and the target clusters
- Setting up the file system to replicate
- Managing continuous replication
- Displaying continuous replication information and status
- Unconfiguring continuous replication
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- About instant rollbacks
- Creating a space-optimized rollback
- Creating a full-sized rollback
- Listing Veritas Access instant rollbacks
- Restoring a file system from an instant rollback
- Refreshing an instant rollback from a file system
- Bringing an instant rollback online
- Taking an instant rollback offline
- Destroying an instant rollback
- Creating a shared cache object for Veritas Access instant rollbacks
- Listing cache objects
- Destroying a cache object of a Veritas Access instant rollback
- Section IX. Reference
- Index
Configuring Veritas Access with OpenStack Cinder
To show all your NFS shares
- To show all your NFS shares that are exported from Veritas Access, enter the following:
OPENSTACK> cinder share show
For example:
OPENSTACK> cinder share show
/vx/fs1 *(rw,no_root_squash)
/vx/o_fs 2001:21::/120 (rw,sync,no_root_squash)
To share and export a file system
- To share and export a file system, enter the following:
OPENSTACK> cinder share add export-dir world|client
After issuing this command, OpenStack Cinder will be able to mount the exported file system using NFS.
export-dir
Specifies the path of the directory that needs to be exported to the client.
The directory path should start with /vx and only the following characters are allowed:
'a-zA-Z0-9_/@+=.:-'
world
Specifies if the NFS export directory is intended for everyone.
client
Exports the directory with the specified options.
Clients may be specified in the following ways:
Single host
Specify a host either by an abbreviated name recognized by the resolver, the fully qualified domain name, or an IP address.
Netgroups
Netgroups may be given as @group. Only the host part of each netgroup member is considered when checking for membership.
IP networks
You can simultaneously export directories to all hosts on an IP (sub-network). This is done by specifying an IP address and netmask pair as address/netmask where the netmask can be specified as a contiguous mask length. IPv4 or IPv6 addresses can be used.
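The address/netmask semantics above can be checked programmatically. This Python sketch (an illustration using the standard ipaddress module, not part of Veritas Access) tests whether a host falls within an exported IP network, for both IPv4 and IPv6:

```python
import ipaddress

def host_in_export(host, network_spec):
    """Return True if host (an IPv4 or IPv6 address string) falls inside
    the exported network, given as address/prefix-length,
    e.g. '2001:21::/120' or '192.1.1.0/24'."""
    return ipaddress.ip_address(host) in ipaddress.ip_network(network_spec)

# A host inside the IPv6 subnet used in the examples above:
print(host_in_export("2001:21::5", "2001:21::/120"))   # True
# An IPv4 host outside a /24 subnet:
print(host_in_export("192.1.2.9", "192.1.1.0/24"))     # False
```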
If you run the command again on a directory that is already exported, the share is re-exported with the updated options.
For example:
OPENSTACK> cinder share add /vx/fs1 world
Exporting /vx/fs1 with options rw,no_root_squash
OPENSTACK> cinder share add /vx/o_fs 2001:21::/120
Exporting /vx/o_fs with options rw,sync,no_root_squash
Success.
To delete the exported file system
- To delete (or unshare) the exported file system, enter the following:
OPENSTACK> cinder share delete export-dir client
For example:
OPENSTACK> cinder share delete /vx/fs1 world
Removing export path *:/vx/fs1
Success.
To start or display the status of the OpenStack Cinder service
- To start the OpenStack Cinder service, enter the following:
OPENSTACK> cinder service start
The OPENSTACK> cinder service start command needs the NFS service to be up for exporting any mount point using NFS. The OPENSTACK> cinder service start command internally starts the NFS service by running the command NFS> server start if the NFS service has not been started. There is no OPENSTACK> cinder service stop command. If you need to stop NFS mounts from being exported, use the NFS> server stop command.
For example:
OPENSTACK> cinder service start
..Success.
- To display the status of the OpenStack Cinder service, enter the following:
OPENSTACK> cinder service status
For example:
OPENSTACK> cinder service status
NFS Status on access_01 : ONLINE
NFS Status on access_02 : ONLINE
To display configuration changes that need to be done on the OpenStack controller node
- To display all the configuration changes that need to be done on the OpenStack controller node, enter the following:
OPENSTACK> cinder configure export-dir
export-dir
Specifies the path of the directory that needs to be exported to the client.
The directory path should start with /vx and only the following characters are allowed:
'a-zA-Z0-9_/@+=.:-'
For example:
OPENSTACK> cinder configure /vx/fs1
To create a new volume backend named ACCESS_HDD in OpenStack Cinder
- Add the following configuration block in the /etc/cinder/cinder.conf file on your OpenStack controller node:

enabled_backends=access-1

[access-1]
volume_driver=cinder.volume.drivers.veritas_cnfs.VeritasCNFSDriver
volume_backend_name=ACCESS_HDD
nfs_shares_config=/etc/cinder/access_share_hdd
nfs_mount_point_base=/cinder/cnfs/cnfs_sata_hdd
nfs_sparsed_volumes=True
nfs_disk_util=df
nfs_mount_options=nfsvers=3
Add the lines from the configuration block at the bottom of the file.
volume_driver
Name of the Veritas Access Cinder driver.
volume_backend_name
For this example, ACCESS_HDD is used.
This name can be different for each NFS share.
If several backends have the same name, the OpenStack Cinder scheduler decides in which backend to create the volume.
nfs_shares_config
This file has the share details in the form of vip:/exported_dir.
nfs_mount_point_base
Mount point where the share is mounted on OpenStack Cinder.
If the directory does not exist, create it. Make sure that the Cinder user has write permission on this directory.
nfs_sparsed_volumes
Specifies whether volumes are created as sparse files or preallocated.
nfs_disk_util
Free space calculation.
nfs_mount_options
These are the mount options OpenStack Cinder uses to NFS mount.
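As a sanity check before restarting Cinder, the backend stanza described above can be parsed with Python's standard configparser module. The following sketch is illustrative only (it is not a Veritas Access tool); it verifies that the required keys are present in the [access-1] block:

```python
import configparser

# Sample of the [access-1] stanza described above (normally read
# from /etc/cinder/cinder.conf on the OpenStack controller node).
sample = """
[access-1]
volume_driver=cinder.volume.drivers.veritas_cnfs.VeritasCNFSDriver
volume_backend_name=ACCESS_HDD
nfs_shares_config=/etc/cinder/access_share_hdd
nfs_mount_point_base=/cinder/cnfs/cnfs_sata_hdd
nfs_sparsed_volumes=True
nfs_mount_options=nfsvers=3
"""

# Keys the driver configuration above relies on.
required = {"volume_driver", "volume_backend_name",
            "nfs_shares_config", "nfs_mount_point_base"}

parser = configparser.ConfigParser()
parser.read_string(sample)
missing = required - set(parser["access-1"])
print("missing keys:", missing or "none")   # missing keys: none
```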
This same configuration information for adding to the /etc/cinder/cinder.conf file can be obtained by running the OPENSTACK> cinder configure export-dir command.
- Append the following in the /etc/cinder/access_share_hdd file on your OpenStack controller node:
vip:/vx/fs1
Use one of the virtual IPs for vip:
192.1.1.190
192.1.1.191
192.1.1.192
192.1.1.193
192.1.1.199
You can obtain Veritas Access virtual IPs using the OPENSTACK> cinder configure export-dir option.
- Create the /etc/cinder/access_share_hdd file at the root prompt, and update it with the NFS share details.
# cnfs_sata_hdd(keystone_admin)]# cat /etc/cinder/access_share_hdd
192.1.1.190:/vx/fs1
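Each line of the shares file is a vip:/exported_dir pair whose path must start with /vx and use only the allowed characters listed earlier. A small Python check (a hypothetical helper shown only to illustrate the expected format, assuming IPv4 virtual IPs) could look like:

```python
import re

# One vip:/exported_dir entry per line; the path must start with /vx
# and contain only the characters a-z A-Z 0-9 _ / @ + = . : -
# Note: this simple pattern assumes an IPv4 vip (IPv6 vips contain colons).
LINE_RE = re.compile(r"^[^\s:]+:(/vx[a-zA-Z0-9_/@+=.:-]*)$")

def valid_share_line(line):
    """Return True if a shares-file line matches the vip:/vx/... form."""
    return bool(LINE_RE.match(line.strip()))

print(valid_share_line("192.1.1.190:/vx/fs1"))   # True
print(valid_share_line("192.1.1.190:/tmp/fs1"))  # False (path not under /vx)
```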
- The Veritas Access package includes the Veritas Access OpenStack Cinder driver, which is a Python script. The OpenStack Cinder driver is located at /opt/VRTSnas/scripts/OpenStack/veritas_cnfs.py on the Veritas Access node. Copy the veritas_cnfs.py file to /usr/lib/python2.6/site-packages/cinder/volume/drivers/veritas_cnfs.py if you are using the Python 2.6 release.
If you are using the OpenStack Kilo version of RDO, the file is located at:
/usr/lib/python2.7/site-packages/cinder/volume/drivers/veritas_cnfs.py
- Make sure that the NFS mount point on the OpenStack controller node has the right permission for the cinder user. The cinder user should have write permission on the NFS mount point. Set the permission using the following command.
# setfacl -m u:cinder:rwx /cinder/cnfs/cnfs_sata_hdd
# sudo chmod -R 777 /cinder/cnfs/cnfs_sata_hdd
- Give required permissions to the /etc/cinder/access_share_hdd file.
# sudo chmod -R 777 /etc/cinder/access_share_hdd
- Restart the OpenStack Cinder driver.
# cnfs_sata_hdd(keystone_admin)]# /etc/init.d/openstack-cinder-volume restart
Stopping openstack-cinder-volume: [ OK ]
Starting openstack-cinder-volume: [ OK ]
Restarting the OpenStack Cinder driver picks up the latest configuration file changes.
After restarting the OpenStack Cinder driver, /vx/fs1 is NFS-mounted as per the instructions provided in the /etc/cinder/access_share_hdd file.
# cnfs_sata_hdd(keystone_admin)]# mount | grep /vx/fs1
192.1.1.190:/vx/fs1 on cnfs_sata_hdd/e6c0baa5fb02d5c6f05f964423feca1f type nfs (rw,nfsvers=3,addr=10.182.98.20)
You can obtain OpenStack Cinder log files by navigating to:
/var/log/cinder/volume.log
- If you are using OpenStack RDO, use these steps to restart the OpenStack Cinder driver.
Log in to the OpenStack controller node and source the admin credentials.
For example:
source /root/keystonerc_admin
Restart the services using the following command:
(keystone_admin)]# openstack-service restart openstack-cinder-volume
For more information, refer to the OpenStack Administration Guide.
- On the OpenStack controller node, create a volume type named va_vol_type.
This volume type is used to link to the volume backend.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder type-create va_vol_type
+--------------------------------------+-------------+
| ID                                   | Name        |
+--------------------------------------+-------------+
| d854a6ad-63bd-42fa-8458-a1a4fadd04b7 | va_vol_type |
+--------------------------------------+-------------+
- Link the volume type with the ACCESS_HDD back end.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder type-key va_vol_type set volume_backend_name=ACCESS_HDD
- Create a volume of size 1 GB.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder create --volume-type va_vol_type --display-name va_vol1 1
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2014-02-08T01:47:25.726803           |
| display_description | None                                 |
| display_name        | va_vol1                              |
| id                  | disk ID 1                            |
| metadata            | {}                                   |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | va_vol_type                          |
+---------------------+--------------------------------------+
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder list
+-----------+-----------+--------------+------+-------------+----------+-------------+
| ID        | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------+-----------+--------------+------+-------------+----------+-------------+
| disk ID 1 | available | va_vol1      | 1    | va_vol_type | false    |             |
+-----------+-----------+--------------+------+-------------+----------+-------------+
- Extend the volume to 2 GB.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder extend va_vol1 2
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder list
+-----------+-----------+--------------+------+-------------+----------+-------------+
| ID        | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------+-----------+--------------+------+-------------+----------+-------------+
| disk ID 1 | available | va_vol1      | 2    | va_vol_type | false    |             |
+-----------+-----------+--------------+------+-------------+----------+-------------+
- Create a snapshot.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder snapshot-create --display-name va_vol1-snap va_vol1
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2014-02-08T01:51:17.362501           |
| display_description | None                                 |
| display_name        | va_vol1-snap                         |
| id                  | disk ID 1                            |
| metadata            | {}                                   |
| size                | 2                                    |
| status              | creating                             |
| volume_id           | 52145a91-77e5-4a68-b5e0-df66353c0591 |
+---------------------+--------------------------------------+
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder snapshot-list
+-----------+--------------------------------------+-----------+--------------+------+
| ID        | Volume ID                            | Status    | Display Name | Size |
+-----------+--------------------------------------+-----------+--------------+------+
| disk ID 1 | 52145a91-77e5-4a68-b5e0-df66353c0591 | available | va_vol1-snap | 2    |
+-----------+--------------------------------------+-----------+--------------+------+
- Create a volume from a snapshot.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder create --snapshot-id e9dda50f-1075-407a-9cb1-3ab0697d274a --display-name va-vol2 2
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2014-02-08T01:57:11.558339           |