Veritas NetBackup™ Flex Scale Administrator's Guide
- Product overview
- Viewing information about the NetBackup Flex Scale cluster environment
- NetBackup Flex Scale infrastructure management
- User management
- Considerations for managing NetBackup Flex Scale users
- Adding users
- Changing user password
- Removing users
- Considerations for configuring AD/LDAP
- Configuring AD server for Universal shares and Instant Access
- Configuring AD/LDAP servers for NetBackup services
- Configuring additional AD/LDAP servers for managing NetBackup services/Universal Shares/Instant Access
- Configuring AD/LDAP servers on clusters deployed with only media servers
- Directory services and certificate management
- Region settings management
- About NetBackup Flex Scale storage
- About Universal Shares
- Node and disk management
- License management
- NetBackup Flex Scale network management
- About network management
- Modifying DNS settings
- About bonding Ethernet interfaces
- Bonding operations
- Configuring NetBackup Flex Scale in a non-DNS environment
- Data network configurations
- NetBackup Flex Scale infrastructure monitoring
- Resiliency in NetBackup Flex Scale
- EMS server configuration
- Site-based disaster recovery in NetBackup Flex Scale
- About site-based disaster recovery in NetBackup Flex Scale
- Configuring disaster recovery using GUI
- Clearing the host cache
- Managing disaster recovery using GUI
- Performing disaster recovery using RESTful APIs
- Active-Active disaster recovery configuration
- NetBackup optimized duplication using Storage Lifecycle Policies
- NetBackup Flex Scale security
- Troubleshooting
- Services management
- Collecting logs for cluster nodes
- Checking and repairing storage
- Troubleshooting NetBackup Flex Scale issues
- If cluster configuration fails (for example because an IP address that was already in use is specified) and you try to reconfigure the cluster, the UI displays an error but the configuration process continues to run
- Validation error while adding VMware credentials to NetBackup
- NetBackup Web UI incorrectly displays some NetBackup Flex Scale processes as failed
- Unable to create BMR Shared Resource Tree (SRT) on NetBackup Flex Scale Appliance
- NetBackup configuration files are not persistent across operations that require restarting the system
- Appendix A. Configuring NetBackup optimized duplication
- Appendix B. Disaster recovery terminologies
- Appendix C. Configuring Auto Image Replication
Handling split-brain scenario in NetBackup Flex Scale
A split-brain occurs when the view of the cluster membership differs among the cluster nodes, which increases the chance of data corruption. Majority-based I/O fencing eliminates this risk by providing a reliable arbitration mechanism that does not require any extra hardware. In a split-brain scenario, arbitration is based on which sub-cluster contains a majority of the cluster nodes. The node with the lowest node ID in the cluster is called the leader node, and it plays a deciding role in case of a tie.
Deciding cluster majority for the majority-based I/O fencing mechanism:
If N is defined as the total number of nodes in the cluster, then the majority is equal to N/2 + 1, using integer division (that is, ⌊N/2⌋ + 1).
If the cluster has an even number of nodes and both sub-clusters contain exactly N/2 nodes, the partition that contains the leader node is treated as the majority, and that partition survives. Both rules are illustrated in the sketch below.
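The following is a minimal sketch in Python of the majority rule and the even-split tie-break. The function names and structure are illustrative assumptions for this guide, not part of the product:

```python
def majority(total_nodes: int) -> int:
    """Majority for a cluster of N nodes: floor(N/2) + 1."""
    return total_nodes // 2 + 1


def partition_survives(partition_nodes: list[int], total_nodes: int,
                       leader_node_id: int) -> bool:
    """Decide whether one sub-cluster (partition) survives a split-brain.

    partition_nodes -- node IDs visible in this sub-cluster
    total_nodes     -- N, the size of the full cluster membership
    leader_node_id  -- lowest node ID in the full cluster (the leader)
    """
    size = len(partition_nodes)
    if size >= majority(total_nodes):
        return True                                # clear majority: survives
    if 2 * size == total_nodes:                    # exact N/2 vs N/2 tie
        return leader_node_id in partition_nodes   # leader's side survives
    return False                                   # minority partition panics


# Example: a 4-node cluster splits 2/2; node 1 is the leader.
assert partition_survives([1, 2], 4, leader_node_id=1) is True
assert partition_survives([3, 4], 4, leader_node_id=1) is False
```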
How majority-based I/O fencing works
The winning sub-cluster is decided as follows (see the sketch after this list):
- The node with the lowest node ID in the current cluster membership is designated as the leader node in the fencing race.
- When a network partition occurs, the racer node in each sub-cluster computes the number of nodes in its partition and compares it with the majority value.
- If a racer finds that its partition does not have the majority, it sends a LOST_RACE message to all the nodes in its partition, including itself, and all of those nodes panic.
- If a racer finds that its partition does have the majority, it sends a WON_RACE message to all the nodes in its partition. Thus, the partition with the majority of nodes survives.
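The following sketch simulates the racer's broadcast for one sub-cluster. The WON_RACE and LOST_RACE message names follow the description above; the function name and return shape are hypothetical simplifications:

```python
def fencing_race(partition_ids: list[int], total_nodes: int,
                 leader_id: int) -> dict[int, str]:
    """Simulate the racer's decision and broadcast for one sub-cluster.

    Returns the message each node in the partition receives:
    'WON_RACE' (the partition survives) or 'LOST_RACE' (every node panics).
    """
    size = len(partition_ids)
    need = total_nodes // 2 + 1                    # majority value
    wins = size >= need or (2 * size == total_nodes
                            and leader_id in partition_ids)
    message = "WON_RACE" if wins else "LOST_RACE"
    # The racer sends the message to all nodes in its partition,
    # including itself; LOST_RACE causes each receiver to panic.
    return {node_id: message for node_id in partition_ids}


# Example: a 6-node cluster partitions into {1,2,3,4} and {5,6}.
print(fencing_race([1, 2, 3, 4], 6, leader_id=1))  # all nodes get WON_RACE
print(fencing_race([5, 6], 6, leader_id=1))        # all get LOST_RACE, panic
```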