Veritas NetBackup™ Flex Scale Release Notes
- Getting help
- Features, enhancements, and changes
- What's new in this release
- Enhancements to the cluster configuration workflow
- Additional settings in NetBackup Flex Scale web UI to manage appliance infrastructure and settings
- Managing vendor packages
- Authenticating users using smart card and certificates
- Support for parallel upgrade
- Monitoring deviation in firmware, driver, and utility versions
- Monitoring cluster configuration
- Locating appliance disks
- Including or replacing an excluded disk
- Monitoring disk sync status
- Including an excluded node
- Restarting a node
- Starting and stopping NetBackup containers
- Switching management console to another cluster node
- Enhancements to log package
- Configuring static routes on a NetBackup Flex Scale cluster
- Audit logs
- Enhancements to the add data network workflow
- Enhancements to DNS workflow
- Enhancements to primary service replication
- Restricting remote management access
- Universal share user role
- Cloud bucket support
- Changes to the VxOS shell
- Support for NetBackup Client
- Limitations
- Known issues
- Cluster configuration issues
- Cluster configuration fails if there is a conflict between the cluster private network and any other network
- Cluster configuration process may hang due to an ssh connection failure
- Node discovery fails during initial configuration if the default password is changed
- When NetBackup Flex Scale is configured, the size of NetBackup logs might exceed the /log partition size
- Error message is not displayed when NTP server is added as FQDN during initial configuration in a non-DNS environment
- All the selected EEBs are not uploaded during the initial configuration
- Disaster recovery issues
- Backup data present on the primary site before the time Storage Lifecycle Policies (SLP) was applied is not replicated to the secondary site
- If the replication link is down on a node, the replication IP does not fail over to another node
- Task output message for migration operation appears as 'Force migrate operation is successful'
- Disaster recovery configuration may take around 2.5 hours to complete when data-collect task runs in the backend
- After disaster recovery takeover operation, the old recovery points or checkpoints for the primary server catalog file system are not visible in the GUI on the new primary site
- SLP templates are not created post disaster recovery configuration if lockdown mode is set to compliance or enterprise
- Policies and SLPs do not revert back when the new secondary site joins after a takeover operation
- Disaster recovery configuration hangs when eth5/bond0 interface is down on the node where management console and CVM services are online on one or both sites
- Miscellaneous issues
- Red Hat Virtualization (RHV) VM discovery and backup and restore jobs fail if the Media server node that is selected as the discovery host, backup host, or recovery host is replaced
- The file systems offline operation gets stuck for more than 2 hours after a reboot all operation
- cvmvoldg agent causes resource faults because the database is not updated
- SQLite, MySQL, MariaDB, PostgreSQL database backups fail in pure IPv6 network configuration
- Exchange GRT browse of Exchange-aware VMware policy backups may fail with a database error
- Call Home test fails if a proxy server is configured without specifying a user
- In a non-DNS NetBackup Flex Scale setup, performing a backup from a snapshot operation fails for NAS-Data-Protection policy
- In a non-DNS environment, the CRL check does not work if CDP URL is not accessible
- Unable to add multiple host entries against the same IP address and vice versa in a non-DNS IPv4 environment
- Multiple mkfifo-related warning messages are displayed when logging in to the Veritas Appliance Shell
- Incorrect information is displayed for the support health check command in an IPv6 environment
- Change in host time zone is not reflected within containers
- NetBackup issues
- The engines and media servers go into unhealthy state if the CRL file has expired
- NetBackup service start operation for only one service may fail
- The NetBackup web GUI does not list media or storage hosts in Security > Hosts page
- Media hosts do not appear in the search icon for Recovery host/target host during Nutanix AHV agentless files and folders restore
- On the NetBackup media server, the ECA health check shows the warning, 'hostname missing'
- If NetBackup Flex Scale is configured, the storage paths are not displayed under MSDP storage
- Failure may be observed on the STU if the Only use the following media servers option is selected for Media server under Storage > Storage unit
- NetBackup primary server services fail if an nfs share is mounted at /mnt mount path inside the primary server container
- NetBackup primary container goes into unhealthy state
- Alerts and AutoSupport case might be created for NetBackup services that are stopped intentionally
- Starting or stopping multiple NetBackup services may fail on a media-server only cluster
- User login fails from the NetBackup GUI with authentication failed error
- Monitoring of the primary, media and storage servers for high availability does not work
- MSDP engine and media server fail to come up
- Networking issues
- Cluster configuration workflow may get stuck
- Node panics when eth4 and eth6 network interfaces are disconnected
- Add node fails during precheck when a secondary data network is configured over the management interface and the Automatic tab is used for providing input IPs for the new node to be added for the secondary data network over management interface
- Static route does not get added if any node of the cluster is powered off or not up
- Add secondary data network operation fails on the management interface of the secondary site of a cluster when the management network on the secondary site is not the same as the management network on primary site and disaster recovery is configured using a single virtual IP
- Node and disk management issues
- Storage-related logs are not written to the designated log files
- Arrival or recovery of the volume does not bring the file system back into the online state, making the file system unusable
- Unable to replace a stopped node
- An NVMe disk is wrongly selected as a target disk while replacing a SAS SSD
- Disk replacement might fail in certain situations
- Replacing an NVMe disk fails with a data movement from source disk to destination disk error
- Unable to detect a faulted disk that is brought online after some time
- Nodes may go into an irrecoverable state if shut down and reboot operations are performed using IPMI-based commands
- Replace node may fail if the new node is not reachable
- Node is displayed as unhealthy if the node on which the management console is running is stopped
- Unable to collect logs from the node if the node where the management console is running is stopped
- Log rotation does not work for files and directories in /log/VRTSnas/log
- Unable to start or stop a cluster node
- Backup jobs of the workload which uses SSL certificate fail during or post Add node operation
- During an add node operation, the error shown on the Infrastructure page is not identical to the error seen when you view the task details
- Disk size is incorrectly shown as ? when an excluded disk is added back to the cluster
- Alerts about faulted disks are not resolved after disk or node replacement
- Number of online disks shown is incorrect after an OS disk goes offline
- After a replace node operation is performed on a deployment in which ECA is enabled, the universal share is not mounted on the new node's engine
- Health of the node does not change to unhealthy when disks are physically replaced
- AutoSupport settings are not synchronized on the newly added node
- Unhealthy node count is not updated when a node is shut down or stopped
- During an add node operation, you might be prompted to enter IPv6 addresses for a cluster with IPv4 addresses
- Incorrect error message shown when a node to be added restarts or panics
- Add node operation shows NetBackup configuration failure when a newly added node restarts during rebalancing of data
- Unhealthy disks are seen on the Infrastructure page after you delete a node from the cluster
- Disk might go into a faulted state after you include the same disk
- Proxy and etcd services do not come online when node shutdown fails
- After the management console node reboots, rollback of any running operations does not happen automatically
- Security and authentication issues
- NetBackup certificates tab and the External certificates tab in the Certificate management page on the NetBackup UI show different hosts list
- Replicated images do not have retention lock after the lockdown mode is changed from normal to any other mode
- Unable to switch the lockdown mode from normal to enterprise or compliance for a cluster that is deployed with only media servers and with lockdown mode set to normal
- User account gets locked on a management or non-management console node
- The changed password is not synchronized across the cluster
- Certificate renewal alert is not generated automatically during deployment
- Upgrade issues
- EEB installation may fail if some of the NetBackup services are busy
- During an upgrade the NetBackup Flex Scale UI shows incorrect status for some of the components
- Unable to upgrade from version 3.0.0.1 to 3.1 when an AD/LDAP server contains a maintenance user account and you attempt to change the maintenance user password
- Server busy error is displayed during an upgrade rollback
- Storage licensing check fails during a pre-upgrade check for a cluster where disaster recovery is configured
- After upgrade, MSDP cloud operations fail on an IPv6 setup if the cluster has MSDP cloud configured with data network in a non-DNS environment
- Add data network operation fails after an upgrade
- Alert about unsupported smartpqi driver is not resolved immediately after installing the kmod-smartpqi package
- UI issues
- During the replace node operation, the UI wrongly shows that the replace operation failed because the data rebuild operation failed
- Changes in the local user operations are not reflected correctly in the NetBackup GUI when the failover of the management console and the NetBackup primary occurs at the same time
- Mozilla Firefox browser may display a security issue while accessing the infrastructure UI
- Recent operations that were completed successfully are not reflected in the UI if the NetBackup Flex Scale management console fails over to another cluster node
- Previously generated log packages are not displayed if the infrastructure management console fails over to another node
- Smart card authentication fails for a cluster that includes both primary and media servers with IPv6 configuration
- Multiple tasks appear to be running in parallel during an add node operation
- Incorrect search results are displayed when you search for EEBs on the Software management > Add-ons tab
- EEB installation fails with Server busy error if you attempt to install another EEB immediately after installing a previous EEB
- Include disk option is not disabled on an offline node
- Dashboard shows an unconfigured icon for the NetBackup primary server when its status is offline
- NetBackup primary server status is shown online on the dashboard after performing a stop containers operation
- Fixed issues
Fixed issues in version 3.1
The following issues are fixed in this release:
| ID | Description |
|---|---|
| 4051797 | Unable to view the login banner after an upgrade. |
| 4064943 | Upgrade from version 2.1 to 3.0 fails if the cluster is configured with an external certificate. |
| APPSOL-154850 | After an upgrade, the proxy server configured for Call Home is disabled but is displayed as enabled in the UI. |
| APPSOL-154989 | After an upgrade, Call Home does not work. |
| IA-27647 | An NVMe disk is wrongly selected as a target disk while replacing a SAS SSD. |
| IA-30046 | When disaster recovery is configured on the secondary site, the catalog storage usage may be displayed as zero. |
| IA-30204 | Replacing an NVMe disk fails with a data movement from source disk to destination disk error. |
| IA-30251 | Catalog backup policy may fail or use the remote media server for backup. |
| 4012004 | Takeover to a secondary cluster fails even after the primary cluster is completely powered off. |
| IA-32032 | Empty log directories are created in the downloaded log file. |
| IA-32201 and IA-32246 | For the private network, if you use the default IPv4 address but specify an IPv6 address other than the default, the specified IPv6 address is ignored. |
| IA-32203 | Catalog replication may fail to resume automatically after recovering from a node fault that exceeds the fault tolerance limit. |
| IA-32612 | Add node fails because of memory fragmentation. |
| IA-36540 | If both the primary and secondary clusters go down and are brought online again, the replication may be left in an error state. |
| IA-36090 | Unable to perform a takeover operation from the new site acting as the secondary. |
| IA-36990 | Rollback fails after a failed upgrade. |
| IA-37443 | Add node operation hangs on the secondary site after an upgrade. |
| IA-37656 | After replacing a node, the AutoSupport settings are not synchronized to the replacement node. |
| IA-37405 and IA-37304 | Log rotation does not work for files and directories in /log/VRTSnas/log. |
| IA-37062 | Node is displayed as unhealthy if the node on which the management console is running is stopped. |
| IA-39351 | Enabling compliance mode for the first time on the secondary cluster may fail if disaster recovery is configured. |
| IA-39899 | If disaster recovery is configured and an upgrade from NetBackup Flex Scale 2.1 to 3.0 is performed, the upgrade operation hangs. |
| IA-40346 | On a NetBackup Flex Scale cluster with disaster recovery configured, the replication state shows Primary-Primary on the faulted primary cluster after takeover. |
| IA-39890 | AD/LDAP domain unreachable alerts do not get cleared after the AD/LDAP server is deleted. |
| IA-39642 | AD server test connection fails due to an incorrect username on an IPv6 media-only cluster. |
| IA-40140 | The Add nodes to the cluster button remains disabled even after providing all the inputs. |
| IA-40062 | After a cluster reboot all or shutdown all operation, AD/LDAP domains become unreachable from one or more nodes on a NetBackup Flex Scale cluster on which only media servers are deployed. |
| IA-40058 | Assigning a role to a correct AD/LDAP user or group with the wrong domain causes the user listing to fail. |
| IA-40218 | Alerts about an inconsistent login banner and password policy appear after an upgrade. |
| IA-40368 | Unable to add more than seven nodes simultaneously to the cluster. |
| IA-40370 | Alerts about a node being down are generated during an upgrade. |
| IA-40381 | GUI takes a long time to update the status of the upgrade task. |
| IA-40377 | Upgrade may fail if operations such as OS reboot, cluster restart, and node stop and shutdown are used during the upgrade. |
| 4019408 | NetBackup fails to discover VMware workloads in an IPv6 environment. |
| IA-24663 and IA-31849 | DNS of a container does not get updated when the DNS on the network is changed. |
| IA-26730 | Bond modify operation fails when you modify some bond mode options such as xmit_hash_policy. |
| 4020899 | AD/LDAP configuration may fail for IPv6 addresses. |
| IA-30306 and IA-40255 | Upgrade fails during the pre-flight VCS service group checks even if the failover service group is ONLINE on one node but FAULTED on another node. |
| IA-40311 | In a disaster recovery environment, upgrade gets stuck during the node evacuation stage because VVRInfra_Grp cannot be brought down. |
| IA-40597 | Upgrade may fail after node evacuation if a VCS parallel service group is OFFLINE on a partial set of nodes at the beginning of the upgrade. |
| IA-25874 | In-progress user creation tasks disappear from the infrastructure UI if the management console node restarts abruptly. |
| IA-40121 | CRL mode does not get updated on the secondary site after the ECA is renewed on a cluster on which disaster recovery is configured. |
| IA-27537 | After an upgrade, if a checkpoint is restored, backup and restore jobs may stop working. |
| IA-36129 | Disaster recovery configuration fails if the lockdown mode on the secondary cluster is enterprise or compliance. |
| 4042920 | After an upgrade, the metadata format of the MSDP cloud volume in cloud storage is changed. |
| IA-30521 | During EEB installation, a hang is observed during the installation of the fourth EEB and the proxy log reports "Internal Server Error". |
| IA-40332 | GUI login fails for an LDAP user if the domain is configured with SSL. |