Veritas Access Appliance 8.2 Release Notes
Last Published: 2024-06-03
Product(s): Appliances (8.2)
Platform: Veritas 3340, Veritas 3350, Veritas 3360
- Overview of Access Appliance
- About this release
- Changes in this release
- Changes to the initial configuration workflow
- Configuring multifactor authentication
- Configuring single sign-on (SSO)
- Security meter
- Configuring isolated recovery environment (IRE)
- Support for multiple versions of Veritas Data Deduplication
- Support for new hardware model
- End of life for Wolf Pass server chassis
- Stacking subscription licenses
- Enhancements to logging
- Deprecated functionality in this release
- Supported NetBackup client versions
- Access Appliance simple storage service (S3) APIs
- Software limitations
- Limitations related to CIFS
- Limitations related to installation and upgrade
- Limitation related to cloud tiering
- Limitations related to networking
- Limitations related to Veritas Data Deduplication
- Kernel-based NFS v4 limitations
- File system limitation
- Access Appliance S3 server limitation
- Long-term data retention (LTR) limitations
- Limitation related to replication
- Limitations related to user management
- Known issues
- Access Appliance known issues
- CIFS issues
- CIFS share created on Access Appliance 8.x version may not be accessible if all the virtual IPs are on a VLAN device
- Error message is displayed when the CIFS server is restarted if shares are already configured and the clustering mode is changed to ctdb
- Share does not get updated with modified IP if the IP used by a CIFS share is modified using the network ip addr modify command
- General issues
- Reimaging the appliance from the SSD device fails if a CD with the ISO image is inserted in the CD-ROM
- User is not logged out from the command-line interface after running the Cluster> stop all command
- User account gets locked on a management or non-management console node
- Addition of a user named 'admin' fails from GUI and CLISH
- Unable to turn on the beacon to locate compute node OS disks in an Access 3360 Appliance
- Unable to assign admin role to AD/LDAP user if multifactor authentication is configured
- GUI issues
- When provisioning storage, the Access web interface or the command-line interface displays storage capacity in MB, GB, TB, or PB
- GUI stops working after ECA deployment
- GUI stops working if the ECA certificates are not renewed or get revoked
- When an account gets locked because the number of incorrect login attempts was exceeded, login from the GUI fails even after the unlock time has elapsed
- Infrastructure issues
- The NetBackup client add-on package is not automatically installed on the node when the node is deleted from the cluster and then added to the cluster again
- The Access Appliance management console is not available after a node is deleted and the remaining node is restarted
- Add node fails if the upgrade is performed from version 7.4.2.200 or lower
- Installation and configuration issues
- Internationalization (I18N) issues
- MSDP-C issues
- Networking issues
- CVM service group goes into faulted state unexpectedly
- The IPs hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
- The network ip addr show command does not display all the FQDN entries for an IP address
- All the IRE rules are lost if the firewall disable/enable commands are executed when the cluster is in IRE domain
- Unable to log in to IPMI intermittently
- NFS issues
- ObjectAccess issues
- If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Access Appliance
- Self test failed for storage_s3test
- The Object Access server crashes whenever any operation on the Object Access server requires authentication with the AD server using an AD user
- Replication issues
- When running episodic replication and deduplication on the same cluster node, the episodic replication job fails in certain scenarios
- The System> config import command does not import episodic replication keys and jobs
- The job uses the schedule on the target after episodic replication failover
- Unplanned failover from source to target cluster might fail with episodic replication
- All files are not moved from the cloud tier to the primary tier when you run a file system policy
- Episodic replication fails with error "connection reset by peer" if the target node fails over
- Episodic replication job with encryption fails after job remove and add link with SSL certificate error
- Episodic replication job status shows the entry for a link that was removed
- If a share is created in RW mode on the target file system for episodic replication, it may result in a different number of files and directories on the target file system compared to the source file system
- The promote operation may fail while performing episodic replication job failover/failback
- Continuous replication is unable to go to the replicating state if the Storage Replicated Log becomes full
- Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
- Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
- If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
- Cannot use a file system to create an RVG if it has previously been enabled as the first file system in an RVG and later disabled
- STIG issues
- Storage issues
- Destroying the file system may not remove the /etc/mtab entry for the mount point
- The Storage> fs online command returns an error, but the file system is online after several minutes
- Rollback refresh fails when running it after running Storage> fs growby or growto commands
- Rollback service group goes into a faulted state when the respective cache object is full and there is no way to clear the state
- Event messages are not generated when cache objects get full
- Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
- When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
- Storage> fs addcolumn operation fails but error notification is not sent
- Unable to create space-optimized rollback when tiering is present
- The CVM service group goes into a faulted state after you restart the management console node
- The cluster shutdown command does not shut down the node
- Upgrade issues
- Stale file handle error is displayed during rolling upgrade
- The system config import command does not work as expected
- Upgrade operation may fail with an Object Access access-key error
- Upgrade to 8.2 version fails during precheck if the Object access service is enabled but an S3 bucket is not created
- After a multi-step upgrade from 7.4.2 to 8.2, the eth1 device is not listed in the network ip addr show output
- Checkpoints might remain in the file systems used by Veritas Data Deduplication (VDD) after the system rolls back after an upgrade failure
- Veritas Data Deduplication issues
- Provisioning for Veritas Data Deduplication is displayed as failed in GUI
- During reconfiguration of Veritas Data Deduplication, the specified username is not considered
- If you run the storage fs offline command on a file system on which Veritas Data Deduplication is configured while the dedupe shrink operation is in progress, it may lead to an incorrect configuration of the duplication server
- After upgrade, Veritas Data Deduplication configuration fails as the virtual IP association with deduplication is being used by CIFS
- Starting deduplication services using the Veritas Data Deduplication 19.0.1 restricted shell fails
- Backup jobs fail when Veritas Data Deduplication version 16.0 is configured with encryption
- Configuration of Veritas Data Deduplication version 19.0.1 WORM container fails
- Previously set outbound connection rules are removed if the cluster is restarted after the dedupe stop command
- The MSDP restricted shell command to set minimum and maximum retention period displays an error even after the command is completed successfully
- Reconfiguration of Veritas Data Deduplication 19.0.1 version with securecomm enabled option displays an error
- Access Appliance operational notes
- CIFS issues
- Access Appliance known issues
- Getting help
Access services do not restart properly after storage shelf restart
If the Veritas Access 3340 Appliance loses connectivity to an attached Primary or Expansion storage shelf, the underlying storage connectivity is lost and the VxVM disk group goes into a deported state. This issue occurs whenever a storage shelf intentionally or unintentionally restarts. To correct this issue, you need to restart the Access services.
To restart the Access services after the appliance storage shelves restart
- Log on to the Access shell menu using the console IP address.
- Run the following command to import the VxVM disk group and other Access configurations:
ltrcluster> storage scanbus
- Restart the services that were configured before the storage shelf restart.
For example, if the S3 server is configured, use the following commands:
ltrcluster> objectaccess server status
ObjectAccess Status on ltrcluster_01 : OFFLINE|FAULTED
ObjectAccess Status on ltrcluster_02 : OFFLINE|FAULTED
ltrcluster> objectaccess server stop
ACCESS ObjectAccess ERROR V-493-10-4 ObjectAccess server already stopped.
ltrcluster> objectaccess server start
ACCESS ObjectAccess SUCCESS V-493-10-4 ObjectAccess started successfully.
ltrcluster> objectaccess server status
ObjectAccess Status on ltrcluster_01 : ONLINE
ObjectAccess Status on ltrcluster_02 : ONLINE
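If storage shelf restarts recur, the same recovery sequence can be scripted. The following is a minimal sketch, assuming SSH access to the Access shell over the console IP address; the CONSOLE login string and the script itself are illustrative assumptions, while storage scanbus and the objectaccess server commands are the product commands shown above.
#!/bin/bash
# Illustrative sketch only: re-import storage and restart the ObjectAccess (S3)
# server after a storage shelf restart. CONSOLE is a placeholder for your
# Access shell login over the console IP address.
CONSOLE="admin@ltrcluster"

# Import the VxVM disk group and other Access configurations.
ssh "$CONSOLE" "storage scanbus"

# Restart the S3 server only if it is not fully online on all nodes.
status=$(ssh "$CONSOLE" "objectaccess server status")
if echo "$status" | grep -qE "OFFLINE|FAULTED"; then
    ssh "$CONSOLE" "objectaccess server start"
    ssh "$CONSOLE" "objectaccess server status"
fi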