Veritas NetBackup™ Flex Scale Release Notes
- Getting help
- Features, enhancements, and changes
- Limitations
- Known issues
- Cluster configuration issues
- Disaster recovery issues
- Backup data present on the primary site before Storage Lifecycle Policies (SLP) were applied is not replicated to the secondary site
- When disaster recovery gets configured on the secondary site, the catalog storage usage may be displayed as zero
- Using the secondary storage unit for backup or duplication before disaster recovery configuration is complete can result in data loss
- Catalog backup policy may fail or use the remote media server for backup
- The master server catalog replication status appears as "needs failback synchronization" or "needs DCM resynchronization"
- In-domain disaster recovery configuration may fail if there are more than eight nodes in each site
- During disaster recovery, after takeover operation, replace node operation can fail on the newly formed secondary site
- After a takeover operation, data network operations may fail on the newly formed secondary site
- Takeover to a secondary cluster fails even after the primary cluster is completely powered off
- Infrastructure management issues
- FactoryReset operation exits without any message
- Storage-related logs are not written to the designated log files
- Unable to start a node that is shut down
- Arrival or recovery of the volume does not bring the file system back into the online state, making the file system unusable
- Unable to replace a stopped node
- An NVMe disk is wrongly selected as a target disk while replacing a SAS SSD
- Disk replacement might fail in certain situations
- Adding a node using REST APIs fails when the new node tries to synchronize the EEBs or patch updates installed on the cluster
- Replacing an NVMe disk fails with a data movement from source disk to destination disk error
- When NetBackup Flex Scale is configured, the size of NetBackup logs might exceed the /log partition size
- Nodes may go into an irrecoverable state if shut down and reboot operations are performed using IPMI-based commands
- Replace node may fail if the new node is not reachable
- Miscellaneous issues
- If NetBackup Flex Scale is configured, the storage paths are not displayed under MSDP storage
- Failure may be observed on the STU if the Only use the following media servers option is selected for Media server under Storage > Storage unit
- Red Hat Virtualization (RHV) VM discovery and backup and restore jobs fail if the Media server node that is selected as the discovery host, backup host, or recovery host is replaced
- Primary server services fail if an NFS share is mounted at the /mnt mount path inside the primary server container
- NetBackup fails to discover VMware workloads in an IPv6 environment
- Networking issues
- Upgrade issues
- After an upgrade, if checkpoint is restored, backup and restore jobs may stop working
- Upgrade fails during pre-flight VCS service group checks even if the failover service group is ONLINE on one node but FAULTED on another node
- During EEB installation, a hang is observed during the installation of the fourth EEB and the proxy log reports "Internal Server Error"
- EEB installation may fail if some of the NetBackup services are busy
- UI issues
- NetBackup Web UI is not accessible using the management server and API gateway IP or FQDN
- In-progress user creation tasks disappear from the infrastructure UI if the management console node restarts abruptly
- During the replace node operation, the UI wrongly shows that the replace operation failed because the data rebuild operation failed
- The maintenance user account password cannot be modified from the infrastructure UI
- Changes in the local user operations are not reflected correctly in the NetBackup GUI when the failover of the management console and the NetBackup primary occurs at the same time
- Mozilla Firefox browser may display a security issue while accessing the infrastructure UI
- Fixed issues
The master server catalog replication status appears as "needs failback synchronization" or "needs DCM resynchronization"
This issue occurs when in-domain disaster recovery is configured between two NetBackup Flex Scale clusters. It is observed in the following two scenarios:
- The master server catalog replication status shows "needs dcm resynchronization" when the replication mode is switched to data change map (DCM) due to a temporary network disconnection or slow network bandwidth.
- The master server catalog replication status shows "needs failback synchronization" when the user performs a replication role takeover after a primary cluster fault and the faulted cluster is back online.
In these scenarios, a consistent copy of the master server catalog is made before resynchronization is performed, and the copy is destroyed once the resynchronization is complete. The resynchronization is not started if a cluster reconfiguration is in progress. However, a bug in the procedure that checks whether a cluster reconfiguration is in progress causes the resynchronization to never be performed. (IA-30512)
Workaround:
1. Make sure that no infrastructure changes are in progress.
2. Remove the following file from the secondary cluster (the new secondary, in case of takeover):
/shared/isagui/cluster_config/cluster_reconfig.lock

The resynchronization is initiated after this file is removed.
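
The second step amounts to deleting the stale lock file from a node on the secondary cluster. The following is a minimal sketch of that step in Python, assuming shell access with sufficient privileges on a secondary cluster node; only the lock-file path comes from the workaround above, and the script structure and messages are illustrative.

#!/usr/bin/env python3
# Minimal sketch of the workaround above, assuming shell access on a node of
# the secondary cluster (the new secondary, in case of takeover). Only the
# lock-file path comes from the release note; everything else is illustrative.
import os
import sys

LOCK_FILE = "/shared/isagui/cluster_config/cluster_reconfig.lock"

def remove_stale_reconfig_lock():
    """Remove the stale reconfiguration lock so resynchronization can start."""
    if not os.path.exists(LOCK_FILE):
        print(f"{LOCK_FILE} not found; nothing to do.")
        return
    # Per step 1 of the workaround, confirm manually that no infrastructure
    # changes (for example, add node or replace node) are in progress before
    # removing the lock file.
    os.remove(LOCK_FILE)
    print(f"Removed {LOCK_FILE}; resynchronization should start shortly.")

if __name__ == "__main__":
    try:
        remove_stale_reconfig_lock()
    except PermissionError:
        sys.exit(f"Insufficient permissions to remove {LOCK_FILE}; run as root.")

Deleting the file by hand (for example, with rm) is equivalent; the script only adds an existence check and a clearer message when permissions are insufficient.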