Veritas Access 7.3.0.1 Release Notes
- Overview of Veritas Access
- Fixed issues
- Software limitations
 - Limitations on using shared LUNs
 - Flexible Storage Sharing limitations
 - Limitations related to installation and upgrade
 - Limitations in the Backup mode
 - Veritas Access IPv6 limitations
 - FTP limitations
 - Samba ACL performance-related issues
 - Veritas Access language support
 - Limitations on using InfiniBand NICs in the Veritas Access cluster
 - Limitation on using Veritas Access in a virtual machine environment
 - NFS-Ganesha limitations
 - Kernel-based NFS v4 limitations
 - File system limitation
 - Veritas Access S3 server limitation
 - LTR limitations
 - Limitation related to replication authentication
 
- Known issues
 - Veritas Access known issues
  - AWS issues
  - Backup issues
  - CIFS issues
   - Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
   - Cannot enable the quota on a file system that is appended or added to the list of homedir
   - Default CIFS share has owner other than root
   - Listing of CIFS shares created on a Veritas Access cluster fails on Windows server or client
   - Modifying or adding a CIFS share with non-default fs_mode/owner/user options results in these settings getting applied to all the existing shares on the same path
   - CIFS> mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
 
  - Deduplication issues
  - Enterprise Vault Attach known issues
  - FTP issues
   - If a file system is used as homedir or anonymous_login_dir for FTP, this file system cannot be destroyed
   - The FTP> server start command reports the FTP server to be online even when it is not online
   - The FTP> session showdetails user=<AD username> command does not work
   - If there are more than 1000 FTP sessions, the FTP> session showdetails command takes a very long time to respond or hangs
 
  - GUI issues
   - When both volume-level and file system replication links are set up in Veritas Access 7.3, provisioning of storage using High Availability and Data Protection policies does not work
   - When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
 
  - Installation and configuration issues
   - Running individual Veritas Access scripts may return inconsistent return codes
   - Excluding PCIs from the configuration fails when you configure Veritas Access using a response file
   - Configuring Veritas Access with the installer fails when the SSH connection is lost
   - Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
   - After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
   - If the same driver node is used for two installations at the same time, then the second installation shows the progress of the first installation
   - If the same driver node is used for two or more installations at the same time, then the first installation session is terminated
   - If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
   - If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
   - Installer appears to hang when you use the installaccess command to install and configure the product from a node of the cluster
   - Phantomgroup for the VLAN device does not come online if you create another VLAN device from CLISH after cluster configuration is done
   - During installation, detection of network interfaces may fail in some cases
   - After the installation and configuration of Veritas Access 7.3.0.1, one of the nodes fails to restart
   - The nslcd.service does not restart after failure
 
  - Networking issues
   - In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
   - The netgroup search does not continue to search in NIS if the entry is not found in LDAP
   - VIP and PIP hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
   - CVM service group goes into faulted state unexpectedly
   - After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
 
 - NFS issues
- For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time
 - Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha
 - NFS client application may fail with the stale file handle error on node reboot
 - Slow performance with Solaris 10 clients with NFS-Ganesha version 4
 - Random-write performance drop of NFS-Ganesha with Linux clients
 - Latest directory content of server is not visible to the client if time is not synchronized across the nodes
 - NFS-Ganesha shares faults after the NFS configuration is imported
 - NFS> share show may list the shares as faulted for some time if you restart the cluster node
 - NFS-Ganesha shares may not come online when the number of shares are more than 500
 - NFS> share show command does not distinguish offline versus online shares
 - Difference in output between NFS> share show and Linux showmount commands
 - NFS mount on client is stalled after you switch the NFS server
 - Kernel NFS v4 lock failover does not happen correctly in case of a node crash
 - Kernel NFS v4 export mount for Netgroup does not work correctly
 
  - ObjectAccess issues
   - ObjectAccess server goes into faulted state while doing a multi-part upload of a 10-GB file with a chunk size of 5 MB
   - When trying to connect to the S3 server over SSL, the client application may give a warning like "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"
   - If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
   - An erasure-coded file system may report the file system as full even if free space is available in the file system
   - ObjectAccess operations do not work correctly in virtual hosted-style addressing when SSL is enabled
   - ObjectAccess server enable operation fails on a single node
   - ObjectAccess (S3) service goes OFFLINE when the node is restarted
   - Bucket creation may fail with "Timed out Error"
   - Bucket deletion may fail with "No such bucket" or "No such key" error
   - Temporary objects may be present in the bucket in case of multi-part upload
   - Bucket CreationDate is incorrect if the bucket is created by mapping the file system path
   - Group configuration does not work in ObjectAccess if the group name contains a space
   - An erasure-coded file system may show mirrored layout in the Storage> fs list command
   - Accessing a bucket or object in the S3 server fails with S3 internal errors
 
  - OpenDedup issues
  - OpenStack issues
  - Replication issues
   - Replication job with encryption fails with an SSL certificate error after job remove and link add
   - When replication and dedup are run over the same source, the replication file system fails in certain scenarios
   - The System> config import command does not import replication keys and jobs
   - Replication job status shows the entry for a link that was removed
   - The job uses the schedule on the target after replication failover
   - Replication fails with error "connection reset by peer" if the target node fails over
   - Replication job modification fails
   - Replication failover does not work
   - Synchronous replication shows file system layout as mirrored in case of simple and striped file systems
   - Synchronous replication is unable to come into the replicating state if the Storage Replicated Log becomes full
   - If you restart any node in the primary or secondary cluster, replication may go into PAUSED state
   - Sync replication failback does not work
   - Replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after an upgrade to version 7.3
   - Setting the bandwidth through the GUI is not enabled for replication
   - Sync replication fails when the 'had' daemon is restarted on the target manually
 
  - SmartIO issues
  - Storage issues
   - Destroying the file system may not remove the /etc/mtab entry for the mount point
   - The Storage> fs online command returns an error, but the file system is online after several minutes
   - Removing disks from the pool fails if a DCO exists
   - Snapshot mount can fail if the snapshot quota is set
   - Sometimes the Storage> pool rmdisk command does not print a message
   - The Storage> pool rmdisk command sometimes gives an error in which the file system name is not printed
   - Not able to enable quota for a file system that is newly added to the list of CIFS home directories
   - Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system
   - Rollback refresh fails when it is run after the Storage> fs growby or growto commands
   - If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
   - Inconsistent cluster state with management service down when disabling I/O fencing
   - Storage> tier move command: failover of node is not working
   - Storage> scanbus operation hangs at the time of I/O fencing operation
   - Rollback service group goes into faulted state when the respective cache object is full and there is no way to clear the state
   - Event messages are not generated when cache objects get full
   - Veritas Access CLISH interface should not allow uncompress and compress operations to run on the same file at the same time
   - Storage device fails with SIGBUS signal causing the abnormal termination of the scale-out file system daemon
   - Storage> tier move list command fails if one of the cluster nodes is rebooted
   - Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
   - When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
   - The Storage> fs growto and Storage> fs growby commands give an error with isolated disks
   - Storage> fs addcolumn operation fails but an error notification is not sent
   - Unable to create space-optimized rollback when tiering is present
   - Enabling fencing on a setup with volume manager objects present fails to import the disk group
   - For the rollback cache growto and growby operations, the cache size values cannot be specified in terms of g/G, m/M, or k/K
   - File system creation fails when the pool contains only one disk
   - After starting the backup service, BackupGrp goes into FAULTED state on some nodes
   - A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command
   - A file system created with a largefs-striped or largefs-mirrored-stripe layout may show an incorrect number of columns in the Storage> fs list command
   - File system creation fails with SSD pool
   - Storage> dedup commands are not supported on RHEL 7.x
   - A scale-out file system may be in offline state after the execution of the Storage> fencing off/on command
 
  - System issues

- Getting help
 
Enabling fencing on a setup with volume manager objects present fails to import the disk group
If you enable fencing on a setup that has volume manager objects present, the disk group import fails and you see the following error message:
Disk <diskname> does not support SCSI-3 PR, Skipping PGR operations for this disk
If volume manager objects such as volumes and volume sets are present when you enable fencing, the shared disk group is not imported as part of the cluster join.
Even manual import of the disk group using the vxdg -s import <dgname> command fails with the following error message:
SCSI-3 PR operation failed
This issue occurs because the export flag is missing on a disk that was implicitly exported using the disk map command. It happens when the disk group contains disks that do not support SCSI-3 PR.
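To check whether a disk carries the export flag, you can inspect its detailed disk status. This is a quick check that assumes standard Veritas Volume Manager output; <diskname> is a placeholder for a disk from the affected disk group:
# vxdisk list <diskname>
For a disk that has been explicitly exported, the flags: line of the output typically includes the exported keyword; on a disk that was only implicitly exported through the disk map command, the flag is missing.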
Workaround:
Explicitly export all the DAS disks from all the nodes of the cluster using the following command before you enable majority-based fencing:
# vxdisk -f export <DAS disk name>
You can now enable fencing.
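For example, on a cluster where each node contributes two DAS disks, the full sequence might look like the following. The disk names are illustrative, and the majority argument shown for the Storage> fencing on command is an assumption; confirm the exact syntax for your release. Run the exports on every node of the cluster:
# vxdisk -f export das_disk1
# vxdisk -f export das_disk2
Then enable majority-based fencing from the Veritas Access command-line interface:
Storage> fencing on majority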