NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Size guidance for the NetBackup primary server and domain
- Factors that limit job scheduling
- More than one backup job per second
- Stagger the submission of jobs for better load distribution
- NetBackup job delays
- Selection of storage units: performance considerations
- About file system capacity and NetBackup performance
- About the primary server NetBackup catalog
- Guidelines for managing the primary server NetBackup catalog
- Adjusting the batch size for sending metadata to the NetBackup catalog
- Methods for managing the catalog size
- Performance guidelines for NetBackup policies
- Legacy error log fields
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- Data segmentation
- Fingerprint lookup for deduplication
- Predictive and sampling cache scheme
- Data store
- Space reclamation
- System resource usage and tuning considerations
- Memory considerations
- I/O considerations
- Network considerations
- CPU considerations
- OS tuning considerations
- MSDP tuning considerations
- MSDP sizing considerations
- Cloud tier sizing and performance
- Accelerator performance considerations
- Media configuration guidelines
- About dedicated versus shared backup environments
- Suggestions for NetBackup media pools
- Disk versus tape: performance considerations
- NetBackup media not available
- About the threshold for media errors
- Adjusting the media_error_threshold
- About tape I/O error handling
- About NetBackup media manager tape drive selection
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup SAN Client
- Best practices: NetBackup AdvancedDisk
- Best practices: Disk pool configuration - setting concurrent jobs and maximum I/O streams
- Best practices: About disk staging and NetBackup performance
- Best practices: Supported tape drive technologies for NetBackup
- Best practices: NetBackup tape drive cleaning
- Best practices: NetBackup data recovery methods
- Best practices: Suggestions for disaster recovery planning
- Best practices: NetBackup naming conventions
- Best practices: NetBackup duplication
- Best practices: NetBackup deduplication
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Best practices: NetBackup NAS-Data-Protection (D-NAS)
- Best practices: NetBackup for Nutanix AHV
- Best practices: NetBackup Sybase database
- Best practices: Avoiding media server resource bottlenecks with Oracle VLDB backups
- Best practices: Avoiding media server resource bottlenecks with MSDPLB+ prefix policy
- Best practices: Cloud deployment considerations
- Measuring performance
- Measuring NetBackup performance: overview
- How to control system variables for consistent testing conditions
- Running a performance test without interference from other jobs
- About evaluating NetBackup performance
- Evaluating NetBackup performance through the Activity Monitor
- Evaluating NetBackup performance through the All Log Entries report
- Table of NetBackup All Log Entries report
- Evaluating system components
- About measuring performance independent of tape or disk output
- Measuring performance with bpbkar
- Bypassing disk performance with the SKIP_DISK_WRITES touch file
- Measuring performance with the GEN_DATA directive (Linux/UNIX)
- Monitoring Linux/UNIX CPU load
- Monitoring Linux/UNIX memory use
- Monitoring Linux/UNIX disk load
- Monitoring Linux/UNIX network traffic
- Monitoring Linux/UNIX system resource usage with dstat
- About the Windows Performance Monitor
- Monitoring Windows CPU load
- Monitoring Windows memory use
- Monitoring Windows disk load
- Increasing disk performance
- Tuning the NetBackup data transfer path
- About the NetBackup data transfer path
- About tuning the data transfer path
- Tuning suggestions for the NetBackup data transfer path
- NetBackup client performance in the data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- Default number of shared data buffers
- Default size of shared data buffers
- Amount of shared memory required by NetBackup
- How to change the number of shared data buffers
- Notes on number data buffers files
- How to change the size of shared data buffers
- Notes on size data buffer files
- Size values for shared data buffers
- Note on shared memory and NetBackup for NDMP
- Recommended shared memory settings
- Recommended number of data buffers for SAN Client and FT media server
- Testing changes made to shared memory
- About NetBackup wait and delay counters
- Changing parent and child delay values for NetBackup
- About the communication between NetBackup client and media server
- Processes used in NetBackup client-server communication
- Roles of processes during backup and restore
- Finding wait and delay counter values
- Note on log file creation
- About tunable parameters reported in the bptm log
- Example of using wait and delay counter values
- Issues uncovered by wait and delay counter values
- Estimating the effect of multiple copies on backup performance
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- NetBackup storage device performance in the data transfer path
- Tuning other NetBackup components
- When to use multiplexing and multiple data streams
- Effects of multiplexing and multistreaming on backup and restore
- How to improve NetBackup resource allocation
- Encryption and NetBackup performance
- Compression and NetBackup performance
- How to enable NetBackup compression
- Effect of encryption plus compression on NetBackup performance
- Information on NetBackup Java performance improvements
- Information on NetBackup Vault
- Fast recovery with Bare Metal Restore
- How to improve performance when backing up many small files
- How to improve FlashBackup performance
- Veritas NetBackup OpsCenter
- Tuning disk I/O performance
Data types and deduplication
Different data types deduplicate at different rates. MSDP performs both deduplication and compression of data: deduplication is performed first, and the resulting data segments are then compressed before they are written to disk.
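As a rough illustration of that ordering, the following sketch (illustrative only, not NetBackup code) segments a stream, deduplicates segments by fingerprint, and compresses only the segments that are new to the store:

```python
import hashlib
import zlib

SEGMENT_SIZE = 128 * 1024  # illustrative fixed segment size; MSDP's real segmentation differs

def ingest(stream: bytes, store: dict) -> tuple:
    """Deduplicate first, then compress: only segments whose fingerprint
    is not already in the store are compressed and written."""
    unique = duplicates = 0
    for off in range(0, len(stream), SEGMENT_SIZE):
        segment = stream[off:off + SEGMENT_SIZE]
        fp = hashlib.sha256(segment).hexdigest()  # fingerprint the raw, uncompressed segment
        if fp in store:
            duplicates += 1                       # duplicate: store a reference only
        else:
            store[fp] = zlib.compress(segment)    # compression happens after dedup
            unique += 1
    return unique, duplicates

store = {}
data = b"A" * SEGMENT_SIZE * 3 + b"B" * SEGMENT_SIZE
print(ingest(data, store))  # (2, 2): two unique segments stored, two deduplicated
```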
For sizing, it is important to understand the different types of unstructured data in the environment. Some data types will not deduplicate well:
Encrypted files:
Encrypted files will not deduplicate well, and even small changes will often change the entire file, resulting in higher change rates than for non-encrypted files. Compression will generally yield only small storage savings (less than 10% at best). There will be no deduplication between files, which will lower overall deduplication rates.
Compressed, image, audio, and video files:
Files that fall into this category will not deduplicate well, and there will be no savings from compression.
Note that encryption and compression at the file system level, such as with NTFS, are transparent to NetBackup because the operating system decompresses and decrypts the files as they are read. As a result, backups may appear larger in front-end terabytes (FETB) than the space the data consumes on the file system. However, these file systems will see good deduplication and compression rates when the data is written to MSDP.
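The effect of encryption on deduplication can be demonstrated with a short experiment. The sketch below is a toy model, not a real cipher and not the MSDP fingerprinting scheme: it compares how many fixed-size chunks two nearly identical files share before and after a chained transform that mimics the ripple effect of CBC-mode encryption.

```python
import hashlib
import os

CHUNK = 4096

def chunk_hashes(data: bytes) -> set:
    """Fingerprint fixed-size chunks, as a deduplicator would."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

def toy_cbc(data: bytes, key: bytes) -> bytes:
    """Toy chained transform standing in for CBC-mode encryption:
    each output block depends on the previous one, so a one-byte
    change ripples through the rest of the file."""
    out, prev = bytearray(), b"\x00" * 32
    for i in range(0, len(data), 32):
        prev = hashlib.sha256(key + prev + data[i:i + 32]).digest()
        out += prev
    return bytes(out)

original = os.urandom(1024 * 1024)                  # 256 chunks of 4 KB
modified = bytes([original[0] ^ 1]) + original[1:]  # flip one bit in the first byte

plain = chunk_hashes(original) & chunk_hashes(modified)
cipher = chunk_hashes(toy_cbc(original, b"key")) & chunk_hashes(toy_cbc(modified, b"key"))
print(len(plain), len(cipher))  # 255 shared chunks in plaintext, 0 after "encryption"
```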
Database deduplication will generally be lower than that observed for unstructured data. To achieve optimal deduplication, compression and encryption should not be enabled in the backup stream (for example, with RMAN directives for Oracle).
Database transaction logs will not deduplicate well due to the nature of the data, although savings from compression may be observed. It is important to determine deduplication rates for database backups and for transaction log backups separately.
Transparent database encryption options will lower deduplication and compression rates, and initial backups will show minimal space savings. The level of deduplication achieved between backups depends on the nature of the changes to the database. In general, OLTP databases, where changes may be distributed throughout the database, will show lower deduplication rates than OLAP instances, which tend to have more inserts than updates.
The notes above for unstructured data also apply to NDMP backups. In addition, the nature of NDMP itself can affect deduplication rates: NDMP defines the communication protocol between filers and backup targets, not the data format. Veritas has developed stream handlers for several filers (NetApp and Dell EMC PowerScale) that allow NetBackup to understand those data streams. Filers without a stream handler may show very low deduplication rates (for example, 20% or lower). In these cases, MSDP Variable Length Deduplication (VLD) should be enabled on the MSDP policies; a significant increase in deduplication rates will generally be observed.
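The sketch below illustrates the general principle behind variable-length (content-defined) chunking; MSDP's actual VLD algorithm is internal to the product, so this is a generic model. Inserting a few bytes at the front of a stream shifts every fixed-size chunk boundary, while content-defined boundaries realign with the data and most chunks continue to match.

```python
import hashlib
import os

def fixed_chunks(data: bytes, size: int = 4096):
    return [data[i:i + size] for i in range(0, len(data), size)]

def cdc_chunks(data: bytes, mask: int = 0x3FF, min_len: int = 512):
    """Cut a chunk where a rolling value over recent bytes hits a bit
    pattern, so boundaries depend on content, not on byte offset."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF          # crude rolling window of recent bytes
        if (h & mask) == mask and i - start >= min_len:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    chunks.append(data[start:])
    return chunks

def shared(a, b) -> int:
    ha = {hashlib.sha256(c).hexdigest() for c in a}
    hb = {hashlib.sha256(c).hexdigest() for c in b}
    return len(ha & hb)

base = os.urandom(256 * 1024)
shifted = os.urandom(7) + base                    # insert 7 bytes at the front

print("fixed :", shared(fixed_chunks(base), fixed_chunks(shifted)))  # ~0: every boundary moved
print("cdc   :", shared(cdc_chunks(base), cdc_chunks(shifted)))      # most chunks still match
```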
For virtualization workloads, supported file systems and volume managers should be used so that NetBackup can understand the structure of the data. On configurations that meet these requirements, the deduplication engine respects file boundaries when segmenting the data stream, and significant increases in deduplication rates will be observed.
Due to the wide variations in customer environments, even within specific workloads, Veritas does not publish expected deduplication rates.
It is recommended that customers perform tests in their own environments, with a representative subset of the data to be protected, to determine actual deduplication rates for each of the schedule types to be implemented:
- Initial Full
- Daily Differential
- Subsequent Full
- Database Transaction Log
Deduplication rates can be found in the Activity Monitor, in the Deduplication Rate column. The job details also include an entry that reports deduplication rates:
Oct 8, 2021 12:22:20 AM - Info media-server.example.com (pid=29340) StorageServer=PureDisk:mediaserver.example.com; Report=PDDO Stats (multi-threaded stream used) for (mediaserver.example.com): scanned: 1447258 KB, CR sent: 6682 KB, CR sent over FC: 0 KB, dedup: 99.5%, cache hits: 11263 (99.2%), where dedup space saving:99.2%, compression space saving:0.3%
In this example, the deduplication rate that will be used for calculations is the total rate of 99.5%, which includes savings from compression.
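The totals in such a line can also be extracted and cross-checked programmatically. The sketch below is a hypothetical parser based only on the sample line above, not on a documented log schema; it recovers the scanned and sent sizes and confirms that the reported dedup percentage matches the savings computed as 1 - sent/scanned.

```python
import re

# Sample PDDO Stats fields from the job details line above.
line = ("scanned: 1447258 KB, CR sent: 6682 KB, CR sent over FC: 0 KB, "
        "dedup: 99.5%, cache hits: 11263 (99.2%), where dedup space saving:99.2%, "
        "compression space saving:0.3%")

m = re.search(r"scanned:\s*(\d+)\s*KB.*?CR sent:\s*(\d+)\s*KB.*?dedup:\s*([\d.]+)%", line)
scanned_kb, sent_kb, reported = int(m.group(1)), int(m.group(2)), float(m.group(3))

computed = 100.0 * (1 - sent_kb / scanned_kb)   # savings relative to the data scanned
print(reported, round(computed, 1))             # 99.5 99.5
```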
Tests should be run over a period of weeks to capture typical change rates in the environment.