Veritas Access 7.3 Release Notes
Last Published: 2019-04-04
Product(s): Access (7.3)
Platform: Linux
- Overview of Veritas Access
- About this release
- Important release information
- Changes in this release
- Changes to the GUI
- Additional cloud providers
- Scale-out file system enhancements
- Installer enhancements
- Multi-protocol support for NFS with S3
- Support for kernel-based NFS version 4
- Managing application I/O workloads using maximum IOPS settings
- Veritas Access sync replication
- WORM storage for Enterprise Vault archiving
- Creation of Partition Secure Notification (PSN) file for Enterprise Vault archiving
- Changing firewall settings
- Setting retention in files
- Not supported in this release
- Technical preview features
- Fixed issues
- Software limitations
- Limitations on using shared LUNs
- Flexible Storage Sharing limitations
- Limitations related to installation and upgrade
- Limitations in the Backup mode
- Veritas Access IPv6 limitations
- FTP create_homedirs limitation
- Samba ACL performance-related issues
- Veritas Access language support
- Limitations on using InfiniBand NICs in the Veritas Access cluster
- Limitation on using Veritas Access in a virtual machine environment
- NFS-Ganesha limitations
- Kernel-based NFS v4 limitations
- File system limitation
- Veritas Access S3 server limitation
- LTR limitations
- Known issues
- Veritas Access known issues
- AWS issues
- Backup issues
- CIFS issues
- Cannot enable the quota on a file system that is appended or added to the list of homedir (3853674)
- Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system (3824576, 3836861)
- Default CIFS share has owner other than root (IA-4771)
- Listing of CIFS shares created on a Veritas Access cluster fails on Windows server or client
- Deduplication issues
- Enterprise Vault Attach known issues
- FTP issues
- GUI issues
- When both volume-level and file system replication links are set up in Veritas Access 7.3, provisioning of storage using High Availability and Data Protection policies does not work (IA-7646)
- When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
- When an earlier version of the Veritas Access cluster is upgraded, the GUI shows stale and incomplete data (IA-7127)
- Installation and configuration issues
- After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state (IA-1796)
- Running individual Veritas Access scripts may return inconsistent return codes (3796864)
- Configuring Veritas Access with the installer fails when the SSH connection is lost (3794964)
- Excluding PCIs from the configuration fails when you configure Veritas Access using a response file (3686704)
- Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration (3659716)
- If the same driver node is used for two installations at the same time, then the second installation shows the progress status of the first installation (IA-3446)
- If the same driver node is used for two or more installations at the same time, then the first installation session is terminated (IA-3436)
- If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception (IA-900)
- If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails (IA-1850)
- If installation using a response file is started from the cluster node, the installation session is terminated after the configuring NICs section (IA-3570)
- After finishing system verification checks, the installer displays a warning message about missing third-party RPMs (IA-3611)
- Installer appears to hang when you use the installaccess command to install and configure the product from a node of the cluster (IA-5300)
- Phantomgroup for the VLAN device does not come online if you create another VLAN device from CLISH after cluster configuration is done (IA-6671)
- Rolling upgrade fails to bring the cfsmount resources online when you perform an upgrade from 7.2.1 version (IA-7388)
- Rolling upgrade is not allowed if the scale-out file system is online and being used by the NFS or S3 server
- Veritas Access fails to install if LDAP or the autofs home directories are preconfigured on the system
- Networking issues
- CVM service group goes into faulted state unexpectedly (3793413)
- In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type (3616561)
- The netgroup search does not continue to search in NIS if the entry is not found in LDAP (3559219)
- VIP and PIP hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet (3596284)
- NFS issues
- Slow performance with Solaris 10 clients with NFS-Ganesha version 4 (IA-1302)
- Random-write performance drop of NFS-Ganesha with Linux clients (IA-1304)
- The latest directory content of the server is not visible to the client if the time is not synchronized across the nodes (IA-1002)
- NFS> share show may list the shares as faulted for some time if you restart the cluster node (IA-1838)
- NFS-Ganesha shares fault after the NFS configuration is imported (IA-849)
- NFS-Ganesha shares may not come online when the number of shares is more than 500 (IA-1844)
- Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha (3816074, 3819836)
- For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time (3847271)
- NFS client application may fail with the stale file handle error on node reboot (3828442)
- NFS> share show command does not distinguish offline versus online shares (IA-2758)
- Difference in output between NFS> share show and Linux showmount commands (IA-1938)
- NFS mount on client is stalled after you switch the NFS server (IA-6629)
- Kernel NFS v4 lock failover does not happen correctly in case of a node crash (IA-5083)
- Kernel NFS v4 export mount for Netgroup does not work correctly (IA-6672)
- ObjectAccess issues
- ObjectAccess server goes into a faulted state while doing a multi-part upload of a 10-GB file with a chunk size of 5 MB (IA-1943)
- When trying to connect to the S3 server over SSL, the client application may give a warning like "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed" (IA-5378)
- If you have upgraded to Veritas Access 7.3 from an earlier release, access to the S3 server fails if the cluster name has uppercase letters (IA-5628)
- If the cluster name does not follow the DNS hostname restrictions, you cannot work with ObjectAccess service in Veritas Access (IA-5631)
- ObjectAccess operations do not work correctly in virtual hosted-style addressing when SSL is enabled (IA-5737)
- ObjectAccess server enable operation fails on a single node (IA-5704)
- ObjectAccess (S3) service goes OFFLINE when the node is restarted (IA-6282)
- Bucket creation may fail with "Timed out Error" (IA-7432)
- Temporary objects may be present in the bucket in case of multi-part upload (IA-7434)
- Bucket CreationDate is incorrect if the bucket is created by mapping the file system path (IA-7227)
- Group configuration does not work in ObjectAccess if the group name contains a space (IA-7407)
- An erasure coded file system may show mirrored layout in the Storage> fs list command (IA-7266)
- Accessing a bucket or object in the S3 server fails with S3 internal errors
- OpenDedup issues
- OpenStack issues
- Replication issues
- When replication and dedup run over the same source, the replication file system fails in certain scenarios (3804751)
- The System> config import command does not import replication keys and jobs (3822515)
- The job uses the schedule on the target after replication failover (3668957)
- Replication fails with error "connection reset by peer" if the target node fails over (IA-3290)
- Synchronous replication shows file system layout as mirrored in case of simple and striped file system (IA-7308)
- Synchronous replication is unable to enter the replicating state if the Storage Replicator Log becomes full
- If you restart any node in the primary or secondary cluster, replication may go into PAUSED state (IA-7567)
- Sync replication failback does not work (IA-7524)
- Replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after upgrade to 7.3 version (IA-7597)
- Setting the bandwidth through the GUI is not enabled for replication (IA-7295)
- Sync replication fails when the 'had' daemon is restarted on the target manually (IA-7357)
- SmartIO issues
- Storage issues
- Snapshot mount can fail if the snapshot quota is set (IA-1542)
- Sometimes the Storage> pool rmdisk command does not print a message (IA-1733)
- The Storage> pool rmdisk command can sometimes give an error in which the file system name is not printed (IA-1639)
- Unable to enable quota for a file system that is newly added to the list of CIFS home directories (IA-1851)
- Destroying the file system may not remove the /etc/mtab entry for the mount point (3801216)
- The Storage> fs online command returns an error, but the file system is online after several minutes (3650635)
- Removing disks from the pool fails if a DCO exists (3452098)
- Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system (IA-3545)
- Rollback refresh fails when it is run after the Storage> fs growby or growto commands (3588248)
- If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list (IA-3269)
- Inconsistent cluster state with management service down when disabling I/O fencing (IA-3427)
- Failover of a node during a Storage> tier move operation does not work (IA-3091)
- The rollback service group goes into a faulted state when the respective cache object is full, and there is no way to clear the state (IA-3251)
- Event messages are not generated when cache objects get full (IA-3239)
- Storage device fails with a SIGBUS signal, causing abnormal termination of the scale-out file system daemon (IA-2915)
- Storage> tier move list command fails if one of the cluster nodes is rebooted (IA-3241)
- When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status (IA-3398)
- The Storage> fs addcolumn operation fails but an error notification is not sent (IA-5434)
- Unable to create space-optimized rollback when tiering is present (IA-5690)
- Enabling fencing on a setup with volume manager objects present fails to import the disk group (IA-7219)
- For the rollback cache growto and growby operations, the cache size values cannot be specified in terms of g/G, m/M or k/K (IA-7473)
- File system creation fails when the pool contains only one disk (IA-7515)
- After starting the backup service, BackupGrp goes into FAULTED state on some nodes (IA-7174)
- A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command (IA-7604)
- A file system created with a largefs-striped or largefs-mirrored-stripe layout may show incorrect number of columns in the Storage> fs list command (IA-7628)
- Getting help
Slow performance with Solaris 10 clients with NFS-Ganesha version 4 (IA-1302)
The NFS-Ganesha server directory operations mkdir, rmdir, and open are slow when performed from Solaris clients.
Workaround:
For performance-critical workloads using the Solaris platform, use the kernel-based NFS version 3 server.
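A minimal sketch of this workaround, assuming the Veritas Access CLISH NFS> server switch command for changing the NFS server type and standard Solaris mount syntax; the cluster name, file system path, and mount point below are hypothetical:

  # On the Veritas Access cluster, switch from NFS-Ganesha to the
  # kernel-based NFS server (assumes the NFS> server switch command;
  # the NFS service is restarted, so expect a brief interruption).
  NFS> server switch

  # On the Solaris 10 client, mount the share explicitly with NFS
  # version 3. "accesscluster" and "/vx/fs1" are placeholder names.
  mount -F nfs -o vers=3 accesscluster:/vx/fs1 /mnt/fs1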