Veritas NetBackup™ Flex Scale Administrator's Guide
- Product overview
- Viewing information about the NetBackup Flex Scale cluster environment
- NetBackup Flex Scale infrastructure management
- User management
- Directory services and certificate management
- Region settings management
- About NetBackup Flex Scale storage
- About Universal Shares
- Creating a Protection Point for a Universal Share
- Node and disk management
- License management
- NetBackup Flex Scale network management
- About network management
- Modifying DNS settings
- About bonding Ethernet interfaces
- Bonding operations
- Data network configurations
- NetBackup Flex Scale infrastructure monitoring
- Resiliency in NetBackup Flex Scale
- EMS server configuration
- Site-based disaster recovery in NetBackup Flex Scale
- About site-based disaster recovery in NetBackup Flex Scale
- Configuring disaster recovery using GUI
- Clearing the host cache
- Managing disaster recovery using GUI
- Performing disaster recovery using RESTful APIs
- Active-Active disaster recovery configuration
- NetBackup optimized duplication using Storage Lifecycle Policies
- NetBackup Flex Scale security
- Troubleshooting
- Services management
- Collecting logs for cluster nodes
- Checking and repairing storage
- Troubleshooting NetBackup Flex Scale issues
- If cluster configuration fails (for example, because an IP address that is already in use is specified) and you try to reconfigure the cluster, the UI displays an error but the configuration process continues to run
- Validation error while adding VMware credentials to NetBackup
- NetBackup Web UI incorrectly displays some NetBackup Flex Scale processes as failed
- Unable to create BMR Shared Resource Tree (SRT) on NetBackup Flex Scale Appliance
- NetBackup configuration files are not persistent across operations that require restarting the system
- Appendix A. Configuring NetBackup optimized duplication
- Appendix B. Disaster recovery terminologies
- Appendix C. Configuring Auto Image Replication
Replacing a disk
If a disk of a node is in a faulted state, replace the disk to maintain the resiliency of the cluster. A cluster with up to five nodes can tolerate the loss of one node and one disk. A cluster with six or more nodes provides resiliency of two nodes, or two SSDs, or four HDDs. NetBackup jobs continue to run as long as the fault tolerance is not exceeded. If the failures exceed the fault tolerance, the cluster runs in a degraded state where NetBackup and MSDP services might not be running on all the nodes.
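The fault-tolerance thresholds described above can be sketched as a simple check. This is an illustration of the limits stated in this guide, not a NetBackup Flex Scale API; the function name is hypothetical, and the interpretation that the node, SSD, and HDD limits for larger clusters apply independently is an assumption.

```python
def within_fault_tolerance(cluster_nodes, failed_nodes, failed_ssds, failed_hdds):
    """Illustrative check of the fault-tolerance rules in this section.

    Hypothetical helper, not part of the product. Thresholds are taken
    from the guide's text; how combined failures interact is assumed.
    """
    if cluster_nodes <= 5:
        # Up to five nodes: tolerates the loss of one node and one disk.
        return failed_nodes <= 1 and (failed_ssds + failed_hdds) <= 1
    # Six or more nodes: tolerates two nodes, or two SSDs, or four HDDs
    # (assumed here to be independent limits).
    return failed_nodes <= 2 and failed_ssds <= 2 and failed_hdds <= 4
```

For example, a four-node cluster that has lost one node and one HDD is still within tolerance, but losing a second node is not.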
Alerts are generated for faulty disks. See Viewing information about alerts. If Call Home is configured for your setup, diagnostic information is sent to the AutoSupport server.
Before you begin the disk replacement operation, ensure that the replacement disk is of the correct size and is formatted.
Warning:
Follow these steps exactly; performing them incorrectly can lead to data loss.
To replace a disk:
- Remove the faulty disk from the node and replace the faulty disk with a replacement disk.
Note:
On an HPE ProLiant Server setup, you must power off the node before physically replacing the faulty NVMe SSD disk.
- Use one of the following options to log in with the user account that you created when you configured the cluster:
Using a user account with the Appliance Administrator and NetBackup Administrator roles, log in to the NetBackup web interface at https://ManagementServerIPorFQDN, where ManagementServerIPorFQDN is the public IP address that you specified for the NetBackup Flex Scale management server and API gateway during the cluster configuration, and then in the left pane click Appliance management.
Using a user account with the Appliance Administrator role, log in to the NetBackup Flex Scale web interface at https://ManagementServerIPorFQDN:14161, where ManagementServerIPorFQDN is the public IP address that you specified for the NetBackup Flex Scale management server and API gateway during the cluster configuration.
- In the left navigation pane, click Monitor, and then click Infrastructure.
- Click Disks.
The disks of all the nodes are displayed. In the Status column, failed disks are marked as faulted.
- Click the faulty disk that you want to replace, and then click Replace disk.
The replacement disk is detected and added to the node.
After the disk is replaced, the data rebuild process begins. You can monitor its progress at the bottom of the screen when you click Monitor > Infrastructure > Disks. The time that the rebuild takes depends on the amount of data.