Veritas Access Appliance Release Notes

Product(s): Appliances (7.4.2)
Platform: 3340
  1. Overview of Veritas Access
    1. About this release
    2. Changes in this release
    3. Supported NetBackup client versions
    4. Technical preview features
      1. Veritas Access Streamer as a storage type for Enterprise Vault
      2. Support for erasure coding in a scale-out file system for an LTR use case over the S3 protocol
    5. Veritas Access simple storage service (S3) APIs
    6. Required OS and third-party RPMs
  2. Software limitations
    1. Limitations on using shared LUNs
    2. Limitations related to installation and upgrade
      1. If the required virtual IPs are not configured, then services like NFS, CIFS, and S3 do not function properly
    3. Limitations in the Backup mode
    4. Veritas Access IPv6 limitations
    5. FTP limitations
    6. Limitations related to commands in a non-SSH environment
    7. Limitations related to Veritas Data Deduplication
    8. NFS-Ganesha limitations
    9. Kernel-based NFS v4 limitations
    10. File system limitation
    11. Veritas Access S3 server limitation
    12. Long-term data retention (LTR) limitations
    13. Cloud tiering limitation
    14. Limitation related to replication
      1. Limitation related to episodic replication authentication
      2. Limitation related to continuous replication
  3. Known issues
    1. Veritas Access known issues
      1. Access Appliance issues
        1. Mongo service does not start after a new node is added successfully
        2. File systems that are already created cannot be mapped as S3 buckets for local users using the GUI
        3. The Veritas Access management console is not available after a node is deleted and the remaining node is restarted
        4. When provisioning the Veritas Access GUI, the option to generate S3 keys is not available after the LTR policy is activated
        5. Unable to add an Appliance node to the cluster again after the Appliance node is turned off and removed from the Veritas Access cluster
        6. Setting retention on a directory path does not work from the Veritas Access command-line interface
        7. During the Access Appliance upgrade, I/O gets paused with an error message
        8. When provisioning storage, the Access web interface or the command-line interface displays storage capacity in MB, GB, TB, or PB
        9. Access Appliance operational notes
          1. Access services do not restart properly after storage shelf restart
      2. Admin issues
        1. The user password gets displayed in the logs for the Admin> user add username system-admin|storage-admin|master command
      3. CIFS issues
        1. Cannot enable the quota on a file system that is appended or added to the list of homedir
        2. Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
        3. Default CIFS share has owner other than root
        4. CIFS mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
        5. CIFS share may become unavailable when the CIFS server is in normal mode
        6. CIFS share creation does not authenticate AD users
      4. General issues
        1. A functionality of Veritas Access works from the master node but does not work from the slave node
      5. GUI issues
        1. When both continuous and episodic replication links are set up, provisioning of storage using High Availability and Data Protection policies does not work
        2. When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
        3. When an earlier version of the Veritas Access cluster is upgraded, the GUI shows stale and incomplete data
        4. Restarting the server as part of the command to add and remove certificates gives an error on RHEL 7
        5. While provisioning an S3 bucket for NetBackup, the bucket creation fails if the device protection is selected as erasurecoded and the failure domain is selected as disk
        6. Client certificate validation using OpenSSL ocsp does not work on RHEL 7
        7. When you perform the set LDAP operation using the GUI, the operation fails with an error
        8. GUI does not support segregated IPv6 addresses while creating CIFS shares using the Enterprise Vault policy
      6. Installation and configuration issues
        1. After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
        2. Running individual Veritas Access scripts may return inconsistent return codes
        3. Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
        4. If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
        5. If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
        6. Phantomgroup for the VLAN device does not come online if you create another VLAN device from the Veritas Access command-line interface after cluster configuration is done
        7. Veritas Access fails to install if LDAP or the autofs home directories are preconfigured on the system
        8. Configuring Veritas Access with a preconfigured VLAN and a preconfigured bond fails
        9. In a mixed mode Veritas Access cluster, after the execution of the Cluster> add node command, one type of unused IP does not get assigned as a physical IP to public NICs
        10. NLMGroup service goes into a FAULTED state when the private IP (x.x.x.2) is not free
        11. The Cluster> show command does not detect all the nodes of the cluster
      7. Internationalization (I18N) issues
        1. The Veritas Access command-line interface prompt disappears when characters in a foreign language are present in a command
      8. Networking issues
        1. CVM service group goes into faulted state unexpectedly
        2. In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
        3. The netgroup search does not continue to search in NIS if the entry is not found in LDAP
        4. The IPs hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
        5. After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
        6. Unable to import the network module after an operating system upgrade
        7. LDAP with SSL on option does not work if you upgrade Veritas Access
        8. Network load balancer does not get configured with IPv6
        9. Unable to add an IPv6-default gateway on an IPv4-installed cluster
        10. LDAP over SSL may not work in Veritas Access 7.4.2
        11. The Network> swap command hangs if any node other than the console node is specified
      9. NFS issues
        1. Slow performance with Solaris 10 clients with NFS-Ganesha version 4
        2. Random-write performance drop of NFS-Ganesha with Linux clients
        3. Latest directory content of server is not visible to the client if time is not synchronized across the nodes
        4. NFS> share show may list the shares as faulted for some time if you restart the cluster node
        5. NFS-Ganesha shares fault after the NFS configuration is imported
        6. NFS-Ganesha shares may not come online when the number of shares is more than 500
        7. Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha
        8. For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time
        9. NFS client application may fail with the stale file handle error on node reboot
        10. NFS> share show command does not distinguish offline versus online shares
        11. Difference in output between NFS> share show and Linux showmount commands
        12. NFS mount on client is stalled after you switch the NFS server
        13. Kernel-NFS v4 lock failover does not happen correctly in case of a node crash
        14. Kernel-NFS v4 export mount for Netgroup does not work correctly
        15. NFS-Ganesha share for IPv6 subnet does not work and NFS share becomes faulted
        16. When a file system goes into the FAULTED or OFFLINE state, the NFS share groups associated with the file system do not become offline on all the nodes
      10. ObjectAccess issues
        1. When trying to connect to the S3 server over SSLS3, the client application may give a warning
        2. If you have upgraded to Veritas Access 7.4.2 from an earlier release, access to S3 server fails if the cluster name has uppercase letters
        3. If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
        4. Bucket creation may fail with time-out error
        5. Bucket deletion may fail with "No such bucket" or "No such key" error
        6. Group configuration does not work in ObjectAccess if the group name contains a space
      11. OpenDedup issues
        1. The file system storage is not reclaimed after deletion of an OpenDedup volume
        2. The Storage> fs online command fails with an EBUSY error
        3. Output mismatch in the df -h command for OpenDedup volumes that are backed by a single bucket and mounted on two different media servers
        4. The OpenDedup> volume create command does not revert the changes if the command fails during execution
        5. Some of the OpenDedup volume stats reset to zero after upgrade
        6. OpenDedup volume mount operation fails with an error
        7. Restore of data from AWS Glacier fails
        8. OpenDedup volumes are not online after an OpenDedup upgrade if there is a change in the cluster name
        9. If the Veritas Access master node is restarted when a restore job is in progress and OpenDedup resides on the media server, the restored files may be in an inconsistent state
        10. The OpenDedup> volume list command may not show the node IP for a volume
        11. When Veritas Access is configured in mixed mode, the configure LTR script randomly chooses a virtual IP from the available Veritas Access virtual IPs
      12. OpenStack issues
        1. Cinder and Manila shares cannot be distinguished from the Veritas Access command-line interface
        2. Cinder volume creation fails after a failure occurs on the target side
        3. Cinder volume may fail to attach to the instance
        4. Bootable volume creation for an iSCSI driver fails with an I/O error when a qcow image is used
      13. Replication issues
        1. When running episodic replication and deduplication on the same cluster node, the episodic replication job fails in certain scenarios
        2. The System> config import command does not import episodic replication keys and jobs
        3. The job uses the schedule on the target after episodic replication failover
        4. Episodic replication fails with error "connection reset by peer" if the target node fails over
        5. Episodic replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after an upgrade
        6. Setting the bandwidth through the GUI is not enabled for episodic replication
        7. Episodic replication job with encryption fails after job remove and add link with SSL certificate error
        8. Episodic replication job status shows the entry for a link that was removed
        9. Episodic replication job modification fails
        10. Episodic replication failover does not work
        11. Continuous replication fails when the 'had' daemon is restarted on the target manually
        12. Continuous replication is unable to go to the replicating state if the Storage Replicated Log becomes full
        13. Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
        14. Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
        15. If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
      14. SDS known issues
        1. After the SDS log is rotated, the log messages from either Veritas Access or the SDS plugin go to the rotated file instead of the new file
      15. Storage issues
        1. Snapshot mount can fail if the snapshot quota is set
        2. Sometimes the Storage> pool rmdisk command does not print a message
        3. The Storage> pool rmdisk command sometimes can give an error where the file system name is not printed
        4. Not able to enable quota for file system that is newly added in the list of CIFS home directories
        5. Destroying the file system may not remove the /etc/mtab entry for the mount point
        6. The Storage> fs online command returns an error, but the file system is online after several minutes
        7. Removing disks from the pool fails if a DCO exists
        8. Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system
        9. Rollback refresh fails when running it after running Storage> fs growby or growto commands
        10. If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
        11. Inconsistent cluster state with management service down when disabling I/O fencing
        12. Storage> tier move command failover of node is not working
        13. Rollback service group goes into faulted state when respective cache object is full and there is no way to clear the state
        14. Event messages are not generated when cache objects get full
        15. The Veritas Access command-line interface does not block uncompress and compress operations from running on the same file at the same time
        16. Storage device fails with SIGBUS signal causing the abnormal termination of the scale-out file system daemon
        17. Storage> tier move list command fails if one of the cluster nodes is rebooted
        18. Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
        19. When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
        20. Storage> fs addcolumn operation fails but error notification is not sent
        21. Storage> fs-growto and Storage> fs-growby commands give error with isolated disks
        22. Unable to create space-optimized rollback when tiering is present
        23. Enabling I/O fencing on a setup with Volume Manager objects present fails to import the disk group
        24. File system creation fails when the pool contains only one disk
        25. After starting the backup service, BackupGrp goes into FAULTED state on some nodes
        26. A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command
        27. A file system created with a largefs-striped or largefs-mirrored-stripe layout may show incorrect number of columns in the Storage> fs list command
        28. File system creation fails with SSD pool
        29. A scale-out file system may go into faulted state after the execution of Storage> fencing off/on command
        30. After an Azure tier is added to a scale-out file system, you cannot move files to the Azure tier and the Storage> tier stats command may fail
        31. The CVM service group goes into faulted state after you restart the management console node
        32. The Storage> fs create command does not display the output correctly if one of the nodes of the cluster is in unknown state
        33. Storage> fs growby and growto commands fail if the size of the file system or bucket is full
        34. The operating system names of fencing disks are not consistent across the Veritas Access cluster, which may lead to issues
        35. The disk group import operation fails and all the services go into failed state when fencing is enabled
        36. While creating an erasure-coded file system, a misleading message leads to issues in the execution of the Storage> fs create command
        37. The Veritas Access cluster node can get explicitly ejected or aborted from the cluster during recovery when another node joins the cluster after a restart
        38. Error while creating a file system stating that the CVM master and management console are not on the same node
        39. When you configure disk-based fencing, the cluster does not come online after you restart the node
        40. In an erasure-coded file system, when the nodes are restarted, some of the file systems do not get unmounted
        41. After a node is restarted, the vxdclid process may generate core dump
        42. The Veritas Access command-line interface may be inaccessible after some nodes in the cluster are restarted
        43. The Cluster> shutdown command does not shut down the node
      16. System issues
        1. The System> ntp sync command without any argument does not appear to work correctly
        2. The NTP server does not get configured correctly during installation
      17. Target issues
        1. Storage provisioning commands hang on the Veritas Access initiator when LUNs from the Veritas Access target are being used
        2. After the Veritas Access cluster recovers from a storage disconnect, the iSCSI LUNs exported from Veritas Access as an iSCSI target may show the wrong content on the initiator side
      18. Upgrade issues
        1. During rolling upgrade, Veritas Access shutdown does not complete successfully
        2. CVM is in FAULTED state after you perform a rolling upgrade
        3. If rolling upgrade is performed when NFS v4 is configured using NFS lease, the system may hang
      19. Veritas Data Deduplication issues
        1. The Veritas Data Deduplication storage server does not come online on a newly added node in the cluster if the node was offline when you configured deduplication
        2. The Veritas Data Deduplication server goes offline after destroying the bond interface on which the deduplication IP was online
        3. If you grow the deduplication pool using the fs> grow command, and then try to grow it further using the dedupe> grow command, the dedupe> grow command fails
        4. The Veritas Data Deduplication server goes offline after bond creation using the interface of the deduplication IP
  4. Getting help
    1. Displaying the Online Help
    2. Displaying the man pages
    3. Using the Veritas Access product documentation

The System> config import command does not import episodic replication keys and jobs

The System> config import command imports the configuration that is exported by the System> config export command. During the import, the episodic replication repunits and schedules are imported correctly, but the command fails to import the episodic replication keys and jobs.

(3822515)

Workaround:

First run the Replication> episodic config import command, and then perform the following steps. A condensed command summary is provided after the steps.

  1. Make sure the new target binds the episodic replication IP, because the episodic replication IP is not changed on the new source.
  2. Run the Replication> episodic config import_keys command on the source and the target.
  3. Run the Replication> episodic config auth command on the source and the target.
  4. Delete the job directory from the new source (/shared/replication/jobs # rm -rf jobname/).
  5. Create the job from the new source.
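
The following sequence summarizes the workaround commands in the order described above. It is only a sketch: command arguments are omitted, jobname is a placeholder, it assumes that the new target already binds the episodic replication IP (step 1), and the cluster on which each command runs is noted in parentheses.

    Replication> episodic config import          (run first)
    Replication> episodic config import_keys     (run on both the source and the target)
    Replication> episodic config auth            (run on both the source and the target)
    /shared/replication/jobs # rm -rf jobname/   (run on the new source)

After the commands complete, re-create the episodic replication job from the new source.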