Veritas NetBackup™ Flex Scale Release Notes
- Getting help
- Features, enhancements, and changes
- Limitations
- Known issues
- Cluster configuration issues
- Disaster recovery issues
- Backup data present on the primary site before Storage Lifecycle Policies (SLP) were applied is not replicated to the secondary site
- When disaster recovery is configured on the secondary site, the catalog storage usage may be displayed as zero
- Using the secondary storage unit for backup or duplication before disaster recovery configuration is complete can result in data loss
- Catalog backup policy may fail or use the remote media server for backup
- The master server catalog replication status appears as "needs failback synchronization" or "needs DCM resynchronization"
- In-domain disaster recovery configuration may fail if there are more than eight nodes in each site
- During disaster recovery, after the takeover operation, the replace node operation can fail on the newly formed secondary site
- After a takeover operation, data network operations may fail on the newly formed secondary site
- Takeover to a secondary cluster fails even after the primary cluster is completely powered off
- Infrastructure management issues
- FactoryReset operation exits without any message
- Storage-related logs are not written to the designated log files
- Unable to start a node that is shut down
- Arrival or recovery of the volume does not bring the file system back into the online state, making the file system unusable
- Unable to replace a stopped node
- An NVMe disk is wrongly selected as a target disk while replacing a SAS SSD
- Disk replacement might fail in certain situations
- Adding a node using REST APIs fails when the new node tries to synchronize the EEBs or patch updates installed on the cluster
- Replacing an NVMe disk fails with a data movement from source disk to destination disk error
- When NetBackup Flex Scale is configured, the size of NetBackup logs might exceed the /log partition size
- Nodes may go into an irrecoverable state if shut down and reboot operations are performed using IPMI-based commands
- Replace node may fail if the new node is not reachable
- Miscellaneous issues
- If NetBackup Flex Scale is configured, the storage paths are not displayed under MSDP storage
- Failure may be observed on the STU if the Only use the following media servers option is selected for Media server under Storage > Storage unit
- Red Hat Virtualization (RHV) VM discovery and backup and restore jobs fail if the Media server node that is selected as the discovery host, backup host, or recovery host is replaced
- Primary server services fail if an NFS share is mounted at the /mnt mount path inside the primary server container
- NetBackup fails to discover VMware workloads in an IPv6 environment
- Networking issues
- Upgrade issues
- After an upgrade, if checkpoint is restored, backup and restore jobs may stop working
- Upgrade fails during pre-flight in VCS service group checks even if the failover service group is ONLINE on a node, but FAULTED on another node
- During EEB installation, a hang is observed while the fourth EEB is being installed and the proxy log reports "Internal Server Error"
- EEB installation may fail if some of the NetBackup services are busy
- UI issues
- NetBackup Web UI is not accessible using the management server and API gateway IP or FQDN
- In-progress user creation tasks disappear from the infrastructure UI if the management console node restarts abruptly
- During the replace node operation, the UI wrongly shows that the replace operation failed because the data rebuild operation failed
- The maintenance user account password cannot be modified from the infrastructure UI
- Changes in the local user operations are not reflected correctly in the NetBackup GUI when the failover of the management console and the NetBackup primary occurs at the same time
- Mozilla Firefox browser may display a security issue while accessing the infrastructure UI
- Fixed issues
Takeover to a secondary cluster fails even after the primary cluster is completely powered off
The takeover to a secondary cluster fails even after the primary cluster is completely powered off, with the following error:
<cluster_name> cluster is either running or not completely down. Takeover of replication role is not permitted
In a rare scenario, the status of a remote cluster cannot be determined. The remote cluster status can be obtained from the console using the following command:
# haclus -display <remote_cluster_name> | grep ClusState
The takeover operation is permitted only when the remote cluster status is faulted or exited. However, due to a bug, the remote cluster status is shown as unknown even after the remote cluster is powered off. Also, if the remote cluster is powered off when the local cluster is down, the remote cluster status is shown as init. The takeover is not permitted in these scenarios. (4012004)
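For example, the state value alone can be read on the console as follows (a minimal sketch that assumes the standard grep and awk utilities are available; <remote_cluster_name> is the same placeholder used above):
# haclus -display <remote_cluster_name> | grep ClusState | awk '{print $NF}'
If the printed value is faulted or exited, the takeover is permitted; if it is unknown or init, the takeover is blocked and the workaround below applies.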
Workaround:
If the user can confirm that the remote cluster is down, then the takeover can be forced by running the following commands manually from the console.
Run the following command:
# /opt/VRTSnas/pysnas/bin/nso_replication.py --command update_master_server_etc_hosts --data '{"fqdn": "<master_fqdn>", "new_ip": "<new_master_ip>", "old_ip": "<old_master_ip>"}'
Update the DNS entry of master server FQDNs to the IPs of the new primary.
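The DNS change can be verified from the console before continuing, for example with nslookup (a sketch; <master_fqdn> and <new_master_ip> are the same placeholders used in the command above):
# nslookup <master_fqdn>
The address returned for <master_fqdn> should be <new_master_ip> before you run the next command.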
Run the following command:
# hagrp -online -propagate -force NBUMasterBrain -any
Run the following command:
# /opt/VRTSnas/scripts/rep/nso_replication.sh prepare primary
Run the following command:
# /opt/VRTSnas/pysnas/bin/nso_replication.py --command clear_host_cache
Verify that the master server (nbu_master) is online and healthy, and that all media servers are healthy.
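For example, the service group state and the overall cluster health can be checked from the console (a sketch that assumes nbu_master is the name of the master server service group):
# hagrp -state nbu_master
# hastatus -sum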