Veritas NetBackup™ Flex Scale Release Notes

Last Published:
Product(s): Appliances (3.0)
Platform: NetBackup Flex Scale OS
  1. Getting help
    1. About this document
    2. NetBackup Flex Scale resources
  2. Features, enhancements, and changes
    1. What's new in this release
      1. Configuring NetBackup Flex Scale in a non-DNS environment
      2. Managing NetBackup and cluster infrastructure from the NetBackup Flex Scale UI
      3. Configuring AD/LDAP servers on clusters deployed with only media servers
      4. Restricting access to IPMI
      5. Running a preupgrade check
    2. Support for NetBackup Client
  3. Limitations
    1. Software limitations
    2. Unsupported features of NetBackup in NetBackup Flex Scale
  4. Known issues
    1. Cluster configuration issues
      1. Cluster configuration fails if there is a conflict between the cluster private network and any other network
      2. Cluster configuration process may hang due to an SSH connection failure
      3. DNS servers that are added after initial configuration are not present in the /etc/resolv.conf file
      4. Empty log directories are created in the downloaded log file
      5. For the private network, if you use the default IPv4 address but specify an IPv6 address other than the default, the specified IPv6 address is ignored
      6. Node discovery fails during initial configuration if the default password is changed
      7. When NetBackup Flex Scale is configured, the size of NetBackup logs might exceed the /log partition size
      8. Error message is not displayed when an NTP server is added as an FQDN during initial configuration in a non-DNS environment
    2. Disaster recovery issues
      1. Backup data present on the primary site before Storage Lifecycle Policies (SLP) were applied is not replicated to the secondary site
      2. When disaster recovery is configured on the secondary site, the catalog storage usage may be displayed as zero
      3. Catalog backup policy may fail or use the remote media server for backup
      4. Takeover to a secondary cluster fails even after the primary cluster is completely powered off
      5. Catalog replication may fail to resume automatically after recovering from node fault that exceeds fault tolerance limit
      6. If the replication link is down on a node, the replication IP does not fail over to another node
      7. If both primary and secondary clusters are down and are brought online again, the replication may be in an error state
      8. Disaster recovery configuration fails if the lockdown mode on the secondary cluster is enterprise or compliance
      9. Unable to perform a takeover operation from the new site acting as the secondary
      10. Enabling compliance mode for the first time on the secondary cluster may fail if disaster recovery is configured
      11. If disaster recovery is configured and an upgrade from NetBackup Flex Scale 2.1 to 3.0 is performed, the upgrade operation hangs
      12. On a NetBackup Flex Scale cluster with a disaster recovery configuration, the replication state shows Primary-Primary on the faulted primary cluster after takeover
      13. Unable to create universal shares on a cluster on which disaster recovery is configured or only media servers are deployed
    3. Miscellaneous issues
      1. Red Hat Virtualization (RHV) VM discovery and backup and restore jobs fail if the media server node that is selected as the discovery host, backup host, or recovery host is replaced
      2. The file systems offline operation gets stuck for more than two hours after a reboot all operation
      3. cvmvoldg agent causes resource faults because the database is not updated
      4. SQLite, MySQL, MariaDB, and PostgreSQL database backups fail in a pure IPv6 network configuration
      5. Exchange GRT browse of Exchange-aware VMware policy backups may fail with a database error
      6. Call Home test fails if a proxy server is configured without specifying a user
      7. In a non-DNS NetBackup Flex Scale setup, performing a backup from a snapshot operation fails for the NAS-Data-Protection policy
      8. In a non-DNS environment, the CRL check does not work if the CDP URL is not accessible
      9. Unable to add multiple host entries against the same IP address and vice versa in a non-DNS IPv4 environment
    4. NetBackup issues
      1. The NetBackup web GUI does not list media or storage hosts in the Security > Hosts page
      2. Media hosts do not appear in the search icon for Recovery host/target host during Nutanix AHV agentless files and folders restore
      3. On the NetBackup media server, the ECA health check shows the warning, 'hostname missing'
      4. If NetBackup Flex Scale is configured, the storage paths are not displayed under MSDP storage
      5. Failure may be observed on the STU if "Only use the following media servers" is selected for Media server under Storage > Storage unit
      6. NetBackup primary server services fail if an NFS share is mounted at the /mnt mount path inside the primary server container
      7. NetBackup primary container goes into an unhealthy state
      8. NetBackup fails to discover VMware workloads in an IPv6 environment
    5. Networking issues
      1. DNS of a container does not get updated when the DNS on the network is changed
      2. Cluster configuration workflow may get stuck
      3. Bond modify operation fails when you modify some bond mode options such as xmit_hash_policy
      4. Node panics when the eth4 and eth6 network interfaces are disconnected
    6. Node and disk management issues
      1. Storage-related logs are not written to the designated log files
      2. Arrival or recovery of the volume does not bring the file system back into the online state, making the file system unusable
      3. Unable to replace a stopped node
      4. An NVMe disk is wrongly selected as a target disk while replacing a SAS SSD
      5. Disk replacement might fail in certain situations
      6. Replacing an NVMe disk fails with a data movement from source disk to destination disk error
      7. Unable to detect a faulted disk that is brought online after some time
      8. Nodes may go into an irrecoverable state if shutdown and reboot operations are performed using IPMI-based commands
      9. Add node fails because of memory fragmentation
      10. Replace node may fail if the new node is not reachable
      11. Node is displayed as unhealthy if the node on which the management console is running is stopped
      12. Unable to collect logs from the node if the node where the management console is running is stopped
      13. Log rotation does not work for files and directories in /log/VRTSnas/log
      14. After replacing a node, the AutoSupport settings are not synchronized to the replacement node
      15. Unable to start or stop a cluster node
      16. The Add nodes to the cluster button remains disabled even after providing all the inputs
      17. Unable to add more than seven nodes simultaneously to the cluster
      18. Backup jobs of a workload that uses an SSL certificate fail during or after an Add node operation
    7. Security and authentication issues
      1. The NetBackup certificates tab and the External certificates tab in the Certificate management page on the NetBackup UI show different host lists
      2. Replicated images do not have a retention lock after the lockdown mode is changed from normal to any other mode
      3. Unable to switch the lockdown mode from normal to enterprise or compliance for a cluster that is deployed with only media servers and with lockdown mode set to normal
      4. CRL mode does not get updated on the secondary site after the ECA is renewed on a cluster on which disaster recovery is configured
      5. Setting lockdown mode to enterprise or compliance fails on the secondary cluster of a NetBackup Flex Scale cluster on which disaster recovery is configured
      6. User account gets locked on a management or non-management console node
      7. The changed password is not synchronized across the cluster
    8. Upgrade issues
      1. After an upgrade, if a checkpoint is restored, backup and restore jobs may stop working
      2. Upgrade fails during pre-flight VCS service group checks even if the failover service group is ONLINE on a node but FAULTED on another node
      3. Upgrade from version 2.1 to 3.0 fails if the cluster is configured with an external certificate
      4. During EEB installation, a hang is observed during the installation of the fourth EEB and the proxy log reports "Internal Server Error"
      5. EEB installation may fail if some of the NetBackup services are busy
      6. During an upgrade, the NetBackup Flex Scale UI shows incorrect status for some of the components
      7. After an upgrade, Call Home does not work
      8. After an upgrade, the proxy server configured for Call Home is disabled but is displayed as enabled in the UI
      9. Unable to view the login banner after an upgrade
      10. After an upgrade to NetBackup Flex Scale 2.1, the metadata format in cloud storage of the MSDP cloud volume is changed
      11. Rollback fails after a failed upgrade
      12. Add node operation hangs on the secondary site after an upgrade
      13. Alerts about an inconsistent login banner and password policy appear after an upgrade from NetBackup Flex Scale version 2.1 to NetBackup Flex Scale 3.0
      14. Alerts about a node being down are generated during an upgrade
      15. GUI takes a long time to update the status of the upgrade task
      16. In a disaster recovery environment, upgrade gets stuck during the node evacuation stage because VVRInfra_Grp cannot be brought down
      17. Upgrade may fail after node evacuation if a VCS parallel service group is OFFLINE on a partial set of nodes at the beginning of the upgrade
      18. Upgrade may fail if operations such as OS reboot, cluster restart, and node stop and shutdown are used during the upgrade
    9. UI issues
      1. In-progress user creation tasks disappear from the infrastructure UI if the management console node restarts abruptly
      2. During the replace node operation, the UI wrongly shows that the replace operation failed because the data rebuild operation failed
      3. Changes in local user operations are not reflected correctly in the NetBackup GUI when failover of the management console and the NetBackup primary occurs at the same time
      4. Mozilla Firefox browser may display a security issue while accessing the infrastructure UI
      5. Recent operations that were completed successfully are not reflected in the UI if the NetBackup Flex Scale management console fails over to another cluster node
      6. Previously generated log packages are not displayed if the infrastructure management console fails over to another node
    10. User management issues
      1. AD server test connection fails due to an incorrect username on an IPv6 media-only cluster
      2. AD/LDAP domain unreachable alerts do not get cleared after the AD/LDAP server is deleted
      3. GUI login fails with an LDAP user if the domain is configured with SSL
      4. Assigning a role to the correct AD/LDAP user or group with the wrong domain causes the user listing to fail
      5. After a cluster reboot all/shutdown all operation, AD/LDAP domains become unreachable from one or more nodes on a NetBackup Flex Scale cluster on which only media servers are deployed
  5. Fixed issues
    1. AD/LDAP configuration may fail for IPv6 addresses
    2. Fixed issues in version 3.0

Catalog replication may fail to resume automatically after recovering from node fault that exceeds fault tolerance limit

Catalog replication stops when the number of node faults exceeds the fault tolerance limit, and it normally resumes automatically after the faults are resolved. In some cases, however, the NetBackup Flex Scale cluster may fail to resume catalog replication.

You can use the following disaster recovery status API to get the status of catalog replication. When this issue occurs, the replication status shows that replication is paused due to a network disconnection.

API:

GET /api/appliance/v1.0/disaster-recovery

Response:

replicationStatus: paused due to network disconnection

(IA-32203)
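The status check above can be scripted. The following Python sketch parses the body of the disaster recovery status API response and flags the paused state. The `replicationStatus` field name and value are taken from the response shown above; the overall JSON shape, host name, and authentication details of a live call are assumptions, so a sample payload is used here instead of a real request.

```python
import json

def replication_is_paused(response_body: str) -> bool:
    """Return True if the API response reports that catalog
    replication is paused (for example, due to a network
    disconnection)."""
    data = json.loads(response_body)
    status = data.get("replicationStatus", "")
    return status.startswith("paused")

# Sample body modeled on the response shown in this issue
# (hypothetical JSON shape):
sample = '{"replicationStatus": "paused due to network disconnection"}'
print(replication_is_paused(sample))  # prints: True
```

A monitoring job could poll GET /api/appliance/v1.0/disaster-recovery with a check like this and raise an alert when replication does not resume on its own.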

Workaround:

  1. Reboot all the nodes of the NetBackup Flex Scale cluster on the site where the node faults exceeded the fault tolerance limit.

  2. To restart the cluster nodes, run the following command on each node:

    # echo b > /proc/sysrq-trigger