Veritas NetBackup™ Flex Scale Administrator's Guide
- Product overview
- Viewing information about the NetBackup Flex Scale cluster environment
- NetBackup Flex Scale infrastructure management
- User management
- Directory services and certificate management
- Region settings management
- About NetBackup Flex Scale storage
- Node and disk management
- License management
- NetBackup Flex Scale network management
- About network management
- Modifying DNS settings
- About bonding Ethernet interfaces
- Bonding operations
- Data network configurations
- NetBackup Flex Scale infrastructure monitoring
- Resiliency in NetBackup Flex Scale
- EMS server configuration
- Site-based disaster recovery in NetBackup Flex Scale
- NetBackup Flex Scale security
- Troubleshooting
- Services management
- Collecting logs for cluster nodes
- Troubleshooting NetBackup Flex Scale issues
- If cluster configuration fails (for example, because an IP address that is already in use is specified) and you try to reconfigure the cluster, the UI displays an error but the configuration process continues to run
- Validation error while adding VMware credentials to NetBackup
- NetBackup Web UI incorrectly displays some NetBackup Flex Scale processes as failed
- Unable to create BMR Shared Resource Tree (SRT) on NetBackup Flex Scale Appliance
- Appendix A. Configuring NetBackup optimized duplication
- Appendix B. Disaster recovery terminologies
NetBackup master service catalog protection using checkpoints
In NetBackup Flex Scale, you can use checkpoints to protect the master service catalog from software failures or corruption. Checkpoints of the master service catalog file system are created on a schedule, every 2 hours. A maximum of 36 checkpoints can be created. Once the total number of checkpoints exceeds 36, the oldest checkpoint is deleted and a new checkpoint is created.
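The retention behavior amounts to a simple rotation: on each scheduled run, the oldest checkpoint is removed once the limit is reached and a new one is created. The following sketch is illustrative only and is not the NetBackup Flex Scale implementation; the create_checkpoint and delete_checkpoint helpers are hypothetical placeholders.

    # Illustrative sketch of the 2-hour checkpoint rotation described above.
    # create_checkpoint() and delete_checkpoint() are hypothetical placeholders.
    from collections import deque
    from datetime import datetime, timezone

    MAX_CHECKPOINTS = 36            # retention limit described above
    checkpoints = deque()           # oldest checkpoint sits at the left end

    def create_checkpoint(name):
        print("creating checkpoint", name)    # stands in for the real snapshot operation

    def delete_checkpoint(name):
        print("deleting checkpoint", name)    # stands in for the real delete operation

    def scheduled_run():
        """Runs on the 2-hour schedule: enforce retention, then create a checkpoint."""
        if len(checkpoints) >= MAX_CHECKPOINTS:
            delete_checkpoint(checkpoints.popleft())   # drop the oldest checkpoint
        name = "ckpt_" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
        create_checkpoint(name)
        checkpoints.append(name)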
A new file system with the name SCRATCH_<XXXXX> is created, which acts as a scratch space for the recovery process. This file system is used for syncing and validating data from the checkpoint. The scratch file system is created during initial configuration. It is erasure coded with logging and has the same layout (8:4) as the other data file systems. The file system is extended when the NetBackup master service storage grows (during the addition of the sixth node).
All the checkpoints consume storage from the same volume set. Because the checkpoints are copy-on-write, only modified data is pushed to the checkpoints. Depending on the data change rate, however, the checkpoints can consume considerable storage. If the primary file set cannot allocate storage during a write or file creation operation, the oldest checkpoint is automatically deleted to make space instead of returning an ENOSPC error to the user. An email notification is sent when checkpoints are deleted to free up space.
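The space-pressure handling can be pictured as a retry loop that reclaims the oldest checkpoint instead of surfacing ENOSPC. This is an illustrative sketch only; allocate, delete_checkpoint, and send_notification are hypothetical placeholders, not product APIs.

    # Illustrative sketch of the space-reclamation behavior described above.
    import errno

    def write_with_reclaim(allocate, checkpoints, delete_checkpoint, send_notification):
        """Retry an allocation, freeing the oldest checkpoint instead of failing with ENOSPC."""
        while True:
            try:
                return allocate()                      # attempt the write or file creation
            except OSError as exc:
                if exc.errno != errno.ENOSPC or not checkpoints:
                    raise                              # not a space error, or nothing left to reclaim
                oldest = checkpoints.pop(0)            # checkpoints are ordered oldest first
                delete_checkpoint(oldest)
                send_notification("Deleted checkpoint %s to free space" % oldest)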
All other operations, such as adding a node or replacing a node, are blocked while a catalog restore is in progress.
Note: See Performing a recovery of the catalog file system using REST APIs.
Note: When the master service catalog file system is recovered from an old checkpoint, the master service goes back to the point in time of that checkpoint. Images created after that point in time remain in the system but cannot be accessed unless they are reimported.
For more details on importing images, see the Importing backup images, Phase I and Importing backup images, Phase II sections in the NetBackup Administrator's Guide, Volume I.
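For example, a two-phase import can be initiated from the NetBackup command line. The media ID and backup ID below are placeholders; verify the exact options in the NetBackup Commands Reference Guide.

    Phase I (recreate catalog entries for the images on the media):
        bpimport -create_db_info -id <media_id>

    Phase II (import a specific backup image):
        bpimport -backupid <backup_id>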