Veritas Access 7.3.0.1 Release Notes

Last Published:
Product(s): Access (7.3.0.1)
  1. Overview of Veritas Access
    1. About this release
    2. Important release information
    3. Changes in this release
      1. Support for creating CIFS shares for a scale-out file system
      2. File sharing for a scale-out file system using FTP
      3. WORM support over NFS
      4. Support for RHEL 7.3
      5. GUI enhancements
    4. Not supported in this release
    5. Technical preview features
      1. IP load balancing
      2. Erasure coding for Object Store buckets
      3. Veritas Access Streamer as a storage type for Enterprise Vault
  2. Fixed issues
    1. Fixed issues since the last release
  3. Software limitations
    1. Limitations on using shared LUNs
    2. Flexible Storage Sharing limitations
      1. If your cluster has DAS disks, you must limit the cluster name to ten characters at installation time
    3. Limitations related to installation and upgrade
      1. If required VIPs are not configured, then services like NFS, CIFS, and S3 do not function properly
    4. Limitations in the Backup mode
    5. Veritas Access IPv6 limitations
    6. FTP limitations
    7. Samba ACL performance-related issues
    8. Veritas Access language support
      1. Veritas Access does not support non-English characters when using the CLISH (3595280)
    9. Limitations on using InfiniBand NICs in the Veritas Access cluster
    10. Limitation on using Veritas Access in a virtual machine environment
    11. NFS-Ganesha limitations
    12. Kernel-based NFS v4 limitations
    13. File system limitation
      1. Any direct NLM operations from CLISH can lead to system instability (IA-1640)
    14. Veritas Access S3 server limitation
    15. LTR limitations
    16. Limitation related to replication authentication
  4. Known issues
    1. Veritas Access known issues
      1. AWS issues
        1. The CLISH storage commands appear to hang when EBS disks are forcibly detached from the AWS console
        2. CIFS server start command fails on one of the nodes if the clustering mode is set to CTDB
      2. Backup issues
        1. Backup or restore status may show invalid status after the BackupGrp is switched or failed over to the other node when the SAN client is enabled
      3. CIFS issues
        1. Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
        2. Cannot enable the quota on a file system that is appended or added to the list of homedir
        3. Default CIFS share has owner other than root
        4. Listing of CIFS shares created on a Veritas Access cluster fails on Windows server or client
        5. Modifying or adding a CIFS share with non-default fs_mode/owner/user options results in these settings getting applied to all the existing shares on the same path
        6. CIFS> mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
      4. Deduplication issues
        1. Removing lost+found files for a mount point that has deduplication enabled may cause issues with deduplication
      5. Enterprise Vault Attach known issues
        1. Error while setting full access permission to Enterprise Vault user for archival directory
      6. FTP issues
        1. If a file system is used as homedir or anonymous_login_dir for FTP, this file system cannot be destroyed
        2. The FTP> server start command reports the FTP server to be online even when it is not online
        3. The FTP> session showdetails user=<AD username> command does not work
        4. If there are more than 1000 FTP sessions, the FTP> session showdetails command takes a very long time to respond or hangs
      7. GUI issues
        1. When both volume-level and file system replication links are set up in Veritas Access 7.3, provisioning of storage using High Availability and Data Protection policies does not work
        2. When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
      8. Installation and configuration issues
        1. Running individual Veritas Access scripts may return inconsistent return codes
        2. Excluding PCIs from the configuration fails when you configure Veritas Access using a response file
        3. Configuring Veritas Access with the installer fails when the SSH connection is lost
        4. Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
        5. After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
        6. If the same driver node is used for two installations at the same time, then the second installation shows the progress of the first installation
        7. If the same driver node is used for two or more installations at the same time, then the first installation session is terminated
        8. If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
        9. If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
        10. Installer appears to hang when you use the installaccess command to install and configure the product from a node of the cluster
        11. Phantomgroup for the VLAN device does not come online if you create another VLAN device from CLISH after cluster configuration is done
        12. During installation, detection of network interfaces may fail in some cases
        13. After the installation and configuration of Veritas Access 7.3.0.1, one of the nodes fails to restart
        14. The nslcd.service does not restart after failure
      9. Networking issues
        1. In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
        2. The netgroup search does not continue to search in NIS if the entry is not found in LDAP
        3. VIP and PIP hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
        4. CVM service group goes into faulted state unexpectedly
        5. After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
      10. NFS issues
        1. For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time
        2. Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha
        3. NFS client application may fail with the stale file handle error on node reboot
        4. Slow performance with Solaris 10 clients with NFS-Ganesha version 4
        5. Random-write performance drop of NFS-Ganesha with Linux clients
        6. Latest directory content of server is not visible to the client if time is not synchronized across the nodes
        7. NFS-Ganesha shares fault after the NFS configuration is imported
        8. NFS> share show may list the shares as faulted for some time if you restart the cluster node
        9. NFS-Ganesha shares may not come online when the number of shares is more than 500
        10. NFS> share show command does not distinguish offline versus online shares
        11. Difference in output between NFS> share show and Linux showmount commands
        12. NFS mount on client is stalled after you switch the NFS server
        13. Kernel NFS v4 lock failover does not happen correctly in case of a node crash
        14. Kernel NFS v4 export mount for Netgroup does not work correctly
      11. ObjectAccess issues
        1. ObjectAccess server goes into faulted state while doing multi-part upload of a 10-GB file with a chunk size of 5 MB
        2. When trying to connect to the S3 server over SSL, the client application may give a warning like "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"
        3. If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
        4. An erasure coded file system may report the file system as full even if free space is available in the file system
        5. ObjectAccess operations do not work correctly in virtual hosted-style addressing when SSL is enabled
        6. ObjectAccess server enable operation fails on a single node
        7. ObjectAccess (S3) service goes OFFLINE when the node is restarted
        8. Bucket creation may fail with "Timed out Error"
        9. Bucket deletion may fail with "No such bucket" or "No such key" error
        10. Temporary objects may be present in the bucket in case of multi-part upload
        11. Bucket CreationDate is incorrect if the bucket is created by mapping the file system path
        12. Group configuration does not work in ObjectAccess if the group name contains a space
        13. An erasure coded file system may show mirrored layout in the Storage> fs list command
        14. Accessing a bucket or object in the S3 server fails with S3 internal errors
      12. OpenDedup issues
        1. OpenDedup is not highly available
      13. OpenStack issues
        1. Cinder and Manila shares cannot be distinguished from the CLISH
      14. Replication issues
        1. Replication job with encryption fails after job remove and add link with SSL certificate error
        2. When replication and dedup run over the same source, the replication file system fails in certain scenarios
        3. The System> config import command does not import replication keys and jobs
        4. Replication job status shows the entry for a link that was removed
        5. The job uses the schedule on the target after replication failover
        6. Replication fails with error "connection reset by peer" if the target node fails over
        7. Replication job modification fails
        8. Replication failover does not work
        9. Synchronous replication shows file system layout as mirrored in case of simple and striped file systems
        10. Synchronous replication is unable to come into the replicating state if the Storage Replicated Log becomes full
        11. If you restart any node in the primary or secondary cluster, replication may go into PAUSED state
        12. Sync replication failback does not work
        13. Replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after upgrade to the 7.3 version
        14. Setting the bandwidth through the GUI is not enabled for replication
        15. Sync replication fails when the 'had' daemon is restarted manually on the target
      15. SmartIO issues
        1. SmartIO writeback cachemode for a file system changes to read mode after taking the file system offline and then online
      16. Storage issues
        1. Destroying the file system may not remove the /etc/mtab entry for the mount point
        2. The Storage> fs online command returns an error, but the file system is online after several minutes
        3. Removing disks from the pool fails if a DCO exists
        4. Snapshot mount can fail if the snapshot quota is set
        5. Sometimes the Storage> pool rmdisk command does not print a message
        6. The Storage> pool rmdisk command sometimes can give an error where the file system name is not printed
        7. Not able to enable quota for a file system that is newly added to the list of CIFS home directories
        8. Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system
        9. Rollback refresh fails when running it after running the Storage> fs growby or growto commands
        10. If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
        11. Inconsistent cluster state with management service down when disabling I/O fencing
        12. Storage> tier move command failover of node is not working
        13. Storage> scanbus operation hangs at the time of I/O fencing operation
        14. Rollback service group goes into faulted state when the respective cache object is full and there is no way to clear the state
        15. Event messages are not generated when cache objects get full
        16. Veritas Access CLISH interface should not allow uncompress and compress operations to run on the same file at the same time
        17. Storage device fails with SIGBUS signal causing the abnormal termination of the scale-out file system daemon
        18. Storage> tier move list command fails if one of the cluster nodes is rebooted
        19. Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
        20. When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
        21. Storage> fs growto and Storage> fs growby commands give an error with isolated disks
        22. Storage> fs addcolumn operation fails but error notification is not sent
        23. Unable to create space-optimized rollback when tiering is present
        24. Enabling fencing on a setup with volume manager objects present fails to import the disk group
        25. For the rollback cache growto and growby operations, the cache size values cannot be specified in terms of g/G, m/M, or k/K
        26. File system creation fails when the pool contains only one disk
        27. After starting the backup service, BackupGrp goes into FAULTED state on some nodes
        28. A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command
        29. A file system created with a largefs-striped or largefs-mirrored-stripe layout may show an incorrect number of columns in the Storage> fs list command
        30. File system creation fails with SSD pool
        31. Storage> dedup commands are not supported on RHEL 7.x
        32. A scale-out file system may be in offline state after the execution of the Storage> fencing off/on command
      17. System issues
        1. The System> ntp sync command without any argument does not appear to work correctly
  5. Getting help
    1. Displaying the online Help
    2. Displaying the man pages
    3. Using the Veritas Access product documentation

Any direct NLM operations from CLISH can lead to system instability (IA-1640)

Do not perform any file system-related operations on the Network Lock Manager (NLM) through the CLISH, because NLM is reserved for internal use. If NLM is used directly, Veritas Access cannot guarantee the stability of the cluster.