Veritas NetBackup™ Flex Scale Administrator's Guide
- Product overview
- Viewing information about the NetBackup Flex Scale cluster environment
- NetBackup Flex Scale infrastructure management
- User management
- Directory services and certificate management
- Region settings management
- About NetBackup Flex Scale storage
- About Universal Shares
- Creating a Protection Point for a Universal Share
- Node and disk management
- License management
- NetBackup Flex Scale network management
- About network management
- Modifying DNS settings
- About bonding Ethernet interfaces
- Bonding operations
- Data network configurations
- NetBackup Flex Scale infrastructure monitoring
- Resiliency in NetBackup Flex Scale
- EMS server configuration
- Site-based disaster recovery in NetBackup Flex Scale
- About site-based disaster recovery in NetBackup Flex Scale
- Configuring disaster recovery using GUI
- Clearing the host cache
- Managing disaster recovery using GUI
- Performing disaster recovery using RESTful APIs
- Active-Active disaster recovery configuration
- NetBackup optimized duplication using Storage Lifecycle Policies
- NetBackup Flex Scale security
- Troubleshooting
- Services management
- Collecting logs for cluster nodes
- Checking and repairing storage
- Troubleshooting NetBackup Flex Scale issues
- If cluster configuration fails (for example, because an IP address that is already in use was specified) and you try to reconfigure the cluster, the UI displays an error but the configuration process continues to run
- Validation error while adding VMware credentials to NetBackup
- NetBackup Web UI incorrectly displays some NetBackup Flex Scale processes as failed
- Unable to create BMR Shared Resource Tree (SRT) on NetBackup Flex Scale Appliance
- NetBackup configuration files are not persistent across operations that require restarting the system
- Appendix A. Configuring NetBackup optimized duplication
- Appendix B. Disaster recovery terminologies
- Appendix C. Configuring Auto Image Replication
Selecting or changing the lockdown mode
The user can select the lockdown mode during initial configuration. After the cluster is configured, the user can view or change the lockdown mode through both the GUI and the REST APIs (a sketch of the API flow follows the list below). The lockdown mode can be switched only if the engines are healthy. The user can switch between the following modes without any restriction:
- From normal to enterprise mode
- From normal to compliance mode
- From enterprise to compliance mode
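Mode changes of this kind are often scripted against the appliance REST APIs. The sketch below is a minimal illustration under stated assumptions: the endpoint path, payload fields, and authentication scheme are hypothetical placeholders, not the documented NetBackup Flex Scale API; consult the product API reference for the actual routes.

```python
# Hypothetical sketch: the endpoint, payload, and auth scheme are assumptions,
# not the documented NetBackup Flex Scale REST API.
import requests

BASE = "https://flexscale.example.com"             # placeholder management server address
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder Appliance administrator token

# Read the current lockdown mode (hypothetical route).
resp = requests.get(f"{BASE}/api/appliance/lockdown-mode", headers=HEADERS)
resp.raise_for_status()
print("Current mode:", resp.json())

# Request a switch to enterprise mode. Only normal-to-enterprise,
# normal-to-compliance, and enterprise-to-compliance transitions are allowed,
# and only while the engines are healthy.
resp = requests.put(f"{BASE}/api/appliance/lockdown-mode",
                    json={"mode": "enterprise"}, headers=HEADERS)
resp.raise_for_status()
```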
The user can set a minimum and a maximum retention time for backup images in enterprise and compliance modes only. Images cannot be created with a retention time less than the minimum or greater than the maximum. The appliance administrator should set these retention bounds according to the retention requirements of the use case.
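The effect of these bounds can be pictured as a simple validation rule. The following is an illustrative sketch only, with assumed names and example values, not product code:

```python
# Illustrative sketch of the retention-bounds rule; the names and
# example values are assumptions, not NetBackup Flex Scale code.
from datetime import timedelta

MIN_RETENTION = timedelta(days=30)    # example minimum chosen by the administrator
MAX_RETENTION = timedelta(days=365)   # example maximum chosen by the administrator

def validate_retention(requested: timedelta) -> None:
    """Reject a backup image whose retention falls outside the configured bounds."""
    if not (MIN_RETENTION <= requested <= MAX_RETENTION):
        raise ValueError(
            f"Retention {requested} is outside the allowed range "
            f"[{MIN_RETENTION}, {MAX_RETENTION}]"
        )

validate_retention(timedelta(days=90))    # accepted
# validate_retention(timedelta(days=7))   # rejected: below the minimum
```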
After the lockdown mode is set, only Appliance administrators can change it.
The lockdown mode is preserved during upgrades.
Only the Appliance administrator can remove retention locks if the lockdown mode is enterprise.
Only users with the Appliance administrator role can disable retention or remove retention locks using the MSDP Restrict Shell.
The user cannot change the mode while an existing operation is in progress.
If the mode is set to compliance, the administrator cannot change it to enterprise or normal mode.
If the lockdown mode of a node is set to compliance or enterprise, that node is not available for factory reset.
During add-node and replace-node operations, the new node is automatically placed in the existing lockdown mode of the cluster. The lockdown mode of the replaced node is set to normal, and that node becomes available for factory reset.
The cluster maintenance shell is enabled with two-factor authentication (2FA).
To access the root shell when lockdown mode is configured
1. Log in to the Access CLISH on any node in the cluster.
2. Run the generate-otp command to get the OTP (valid for 2 hours) for the entire cluster.
3. Open a ticket with Veritas Support to generate a security key, and set a Support password that is used later to elevate to root.
4. Log in to the NetBackup Flex Scale shell on any node in the cluster.
5. Run the support unlock command. When you are prompted for a security key, enter the key that you got in step 3 and press Enter. Root access is unlocked on the current node (all other nodes remain locked).
6. Run the support elevate command. When you are prompted for a support password, enter the Support password that you set in step 3 and press Enter. Then type the maintenance password to get into the root shell.
7. Repeat steps 4 to 6 to get into the root shell on each of the other nodes.
8. Run the support lock command on a specific node to lock that node, or run the lock command from the Access CLISH to lock all the nodes in the cluster at the same time. If no manual lock is issued, a node is locked automatically after 12 hours. Locking removes all current users from the root shell on that node or on all the nodes of the cluster. (A scripted sketch of this re-lock step follows.)
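For clusters with many nodes, the re-lock in step 8 can be scripted. The following is a minimal sketch under stated assumptions, not a supported Veritas tool: the hostnames and account are placeholders, it assumes SSH access to the NetBackup Flex Scale shell on each node, and it assumes the support lock command from step 8 runs non-interactively, which may not hold on a restricted shell.

```python
# Hypothetical helper that re-locks the root shell on every node (step 8).
# Hostnames and the account are placeholders; assumes `support lock` runs
# non-interactively over SSH, which may not hold on a restricted shell.
import getpass
import paramiko

NODES = ["nbfs-node1.example.com", "nbfs-node2.example.com"]  # placeholder hostnames
USER = "admin"                                                # placeholder account

def lock_node(host: str, password: str) -> str:
    """Run `support lock` on one node and return the combined output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=USER, password=password)
    try:
        _, stdout, stderr = client.exec_command("support lock")
        return stdout.read().decode() + stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    pw = getpass.getpass("Shell password: ")
    for node in NODES:
        print(node, "->", lock_node(node, pw).strip())
```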