NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Size guidance for the NetBackup primary server and domain
- Factors that limit job scheduling
- More than one backup job per second
- Stagger the submission of jobs for better load distribution
- NetBackup job delays
- Selection of storage units: performance considerations
- About file system capacity and NetBackup performance
- About the primary server NetBackup catalog
- Guidelines for managing the primary server NetBackup catalog
- Adjusting the batch size for sending metadata to the NetBackup catalog
- Methods for managing the catalog size
- Performance guidelines for NetBackup policies
- Legacy error log fields
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- Data segmentation
- Fingerprint lookup for deduplication
- Predictive and sampling cache scheme
- Data store
- Space reclamation
- System resource usage and tuning considerations
- Memory considerations
- I/O considerations
- Network considerations
- CPU considerations
- OS tuning considerations
- MSDP tuning considerations
- MSDP sizing considerations
- Cloud tier sizing and performance
- Accelerator performance considerations
- Media configuration guidelines
- About dedicated versus shared backup environments
- Suggestions for NetBackup media pools
- Disk versus tape: performance considerations
- NetBackup media not available
- About the threshold for media errors
- Adjusting the media_error_threshold
- About tape I/O error handling
- About NetBackup media manager tape drive selection
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup SAN Client
- Best practices: NetBackup AdvancedDisk
- Best practices: Disk pool configuration - setting concurrent jobs and maximum I/O streams
- Best practices: About disk staging and NetBackup performance
- Best practices: Supported tape drive technologies for NetBackup
- Best practices: NetBackup tape drive cleaning
- Best practices: NetBackup data recovery methods
- Best practices: Suggestions for disaster recovery planning
- Best practices: NetBackup naming conventions
- Best practices: NetBackup duplication
- Best practices: NetBackup deduplication
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Best practices: NetBackup NAS-Data-Protection (D-NAS)
- Best practices: NetBackup for Nutanix AHV
- Best practices: NetBackup Sybase database
- Best practices: Avoiding media server resource bottlenecks with Oracle VLDB backups
- Best practices: Avoiding media server resource bottlenecks with MSDPLB+ prefix policy
- Best practices: Cloud deployment considerations
- Measuring performance
- Measuring NetBackup performance: overview
- How to control system variables for consistent testing conditions
- Running a performance test without interference from other jobs
- About evaluating NetBackup performance
- Evaluating NetBackup performance through the Activity Monitor
- Evaluating NetBackup performance through the All Log Entries report
- Table of NetBackup All Log Entries report
- Evaluating system components
- About measuring performance independent of tape or disk output
- Measuring performance with bpbkar
- Bypassing disk performance with the SKIP_DISK_WRITES touch file
- Measuring performance with the GEN_DATA directive (Linux/UNIX)
- Monitoring Linux/UNIX CPU load
- Monitoring Linux/UNIX memory use
- Monitoring Linux/UNIX disk load
- Monitoring Linux/UNIX network traffic
- Monitoring Linux/UNIX system resource usage with dstat
- About the Windows Performance Monitor
- Monitoring Windows CPU load
- Monitoring Windows memory use
- Monitoring Windows disk load
- Increasing disk performance
- Tuning the NetBackup data transfer path
- About the NetBackup data transfer path
- About tuning the data transfer path
- Tuning suggestions for the NetBackup data transfer path
- NetBackup client performance in the data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- Default number of shared data buffers
- Default size of shared data buffers
- Amount of shared memory required by NetBackup
- How to change the number of shared data buffers
- Notes on number data buffers files
- How to change the size of shared data buffers
- Notes on size data buffer files
- Size values for shared data buffers
- Note on shared memory and NetBackup for NDMP
- Recommended shared memory settings
- Recommended number of data buffers for SAN Client and FT media server
- Testing changes made to shared memory
- About NetBackup wait and delay counters
- Changing parent and child delay values for NetBackup
- About the communication between NetBackup client and media server
- Processes used in NetBackup client-server communication
- Roles of processes during backup and restore
- Finding wait and delay counter values
- Note on log file creation
- About tunable parameters reported in the bptm log
- Example of using wait and delay counter values
- Issues uncovered by wait and delay counter values
- Estimating the effect of multiple copies on backup performance
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- NetBackup storage device performance in the data transfer path
- Tuning other NetBackup components
- When to use multiplexing and multiple data streams
- Effects of multiplexing and multistreaming on backup and restore
- How to improve NetBackup resource allocation
- Encryption and NetBackup performance
- Compression and NetBackup performance
- How to enable NetBackup compression
- Effect of encryption plus compression on NetBackup performance
- Information on NetBackup Java performance improvements
- Information on NetBackup Vault
- Fast recovery with Bare Metal Restore
- How to improve performance when backing up many small files
- How to improve FlashBackup performance
- Veritas NetBackup OpsCenter
- Tuning disk I/O performance
How to analyze your backup requirements
Many factors can influence your backup strategy. You should analyze these factors and then make backup decisions according to your site's priorities.
When you plan your installation's NetBackup capacity, ask yourself the following questions:
Table: Questions to ask as you plan NetBackup capacity
| Questions | Actions and related considerations |
|---|---|
| Which systems need to be backed up? | Identify all systems that need to be backed up. List each system separately so that you can identify any that require more resources to back up. Document which computers have disk drives, tape drives, or libraries attached, and write down the model type of each drive or library. Identify any applications on these computers that need to be backed up, such as Oracle, DB2, VMware, MySQL, or MS-Exchange. In addition, record each host name, operating system and version, database type and version, network technology (for example, 10 gigabit), and location. |
| How much data is to be backed up? | Calculate how much data you need to back up and the daily/weekly/monthly/yearly change rate. The change rates affect the deduplication ratio and therefore the amount of data that needs to be written to the disk pool. Include the total disk space on each individual system, including that for databases. Remember to add the space on mirrored disks only once. By calculating the total size of all clients, you can design a system that takes future growth into account. Try to estimate how much data you will need to back up in 6 months to a few years from now; for a rough starting point, see the capacity sizing sketch after this table. |
| Should Accelerator be enabled for VMware virtual machine backups? | When Accelerator is enabled in the NetBackup policy, only changed data is backed up in subsequent full and incremental backups, which greatly shortens the backup time and reduces the backup size. Enabling Accelerator for VMware virtual machine backups is strongly recommended. |
| What types of backups are needed and how often should they take place? | The frequency of your backups has a direct effect on your storage requirements and backup windows. To properly size your backup system, you must decide on the type and frequency of your backups. Will you perform daily incremental and weekly full backups? Monthly or bi-weekly full backups? |
| How much time is available to run each backup? | What is the window of time that is available to complete each backup? The length of a window dictates several aspects of your backup strategy. For example, you may want a larger window of time to back up multiple, high-capacity servers. Or you may consider the use of advanced NetBackup features such as synthetic backups, a local snapshot method, or FlashBackup. |
| Should the scheduling windows for backups overlap with those of duplication or replication jobs, or should they be separated? | If the windows for backup jobs and duplication or replication jobs (including Auto Image Replication (A.I.R.)) overlap, the performance of those jobs can suffer. Carefully design your schedules so that backup windows and duplication or replication windows do not overlap. For more information, see the following documents: "Auto Image Replication (A.I.R.) slow performance, particularly for small images" (https://www.veritas.com/content/support/en_US/article.100045506) and "How to tune NetBackup Auto Image Replication (A.I.R.) operations for maximum performance" (https://www.veritas.com/content/support/en_US/article.100046559). |
| Is archiving to the cloud supported? | NetBackup supports various cloud archive technologies including AWS Glacier options, Snowball, and Snowball Edge, along with Microsoft Azure Archive. |
| How long should backups be retained? | An important factor in designing your backup strategy is your policy for backup expiration. The amount of time a backup is kept is also known as the retention period. A fairly common policy is to expire your incremental backups after one month and your full backups after 6 months. With this policy, you can restore any daily file change from the previous month and restore data from full backups for the previous 6 months. The length of the retention period depends on your own unique requirements, business needs, and perhaps regulatory requirements. However, the length of your retention period is directly proportional to the number of tapes you need and the size of your NetBackup image database. Your NetBackup image database keeps track of all the information on all your disk drives and tapes. The image database size is tightly tied to your retention period and the frequency of your backups; see the image-count sketch after this table for a simple illustration. |
| If backups are sent off site, how long must they remain off site? | If you plan to send tapes off site as a disaster recovery option, identify which tapes to send off site and how long they remain off site. You might decide to duplicate all your full backups, or only a select few. You might also decide to duplicate certain systems and exclude others. As tapes are sent off site, you must buy new tapes to replace them until they are recycled back from off-site storage. If you forget this detail, you may run out of tapes when you most need them. |
| What is your network technology? | If you plan to back up any system over a network, note the network types. The next section explains how to calculate the amount of data you can transfer over those networks in a given time; the throughput sketch after this table shows the same arithmetic in miniature. Based on the amount of data that you want to back up and the frequency of those backups, consider using 10 Gb network interfaces, link aggregation/teaming, or installing a private network for backups. |
| What systems do you expect to add in the next 6 months? | Plan for future growth when you design your backup system. Analyze the potential growth of your system to ensure that your current backup solution can accommodate future requirements. Remember to add any resulting growth factor that you incur to your total backup solution. |
| Will user-directed backups or restores be allowed? | Allowing users to perform their own backups and restores can reduce the time it takes to initiate certain operations. However, user-directed operations can also result in higher support costs and the loss of some flexibility. User-directed operations can monopolize disk pools and tape drives when you most need them. They can also generate more support calls and training issues while the users become familiar with the new backup system. Decide whether user access to some of your backup systems' functions is worth the potential cost. |
| What data types are involved? | What are the types of data: text, graphics, database, virtual machines? How compressible is the data? What is the typical deduplication rate of data to be backed up? How many files are involved? Will NetBackup's Accelerator feature be enabled for VMware virtual machine or NDMP backups? (Note that only changed data is backed up with Accelerator for both full and incremental backups.) Will the data be encrypted? (Note that encrypted backups may run slower.) |
| Where is the data located? | Is the data local or remote? Is it on tape, JBOD (just a bunch of disks), or a disk array? What are the characteristics of the storage subsystem? What is the exact data path? How busy is the storage subsystem? |
| How will you test the backup system? | Because hardware and software infrastructure can change over time, create an independent test backup environment. This approach ensures that your production environment can work with the changed components. |
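
The back-of-the-envelope arithmetic behind the data-volume and change-rate question can be sketched in a few lines of Python. This is a minimal sketch, not a sizing tool: the front-end size, change rate, deduplication ratio, and retention values below are illustrative assumptions that you should replace with measured figures from your own environment.

```python
# Rough MSDP pool sizing sketch. All input values are illustrative
# assumptions -- substitute your own measured figures.

def msdp_pool_estimate(front_end_tb, daily_change_rate, dedup_ratio,
                       full_retention_weeks, incr_retention_days):
    """Estimate the logical data retained and the physical pool space it needs.

    front_end_tb         -- total size of the data to protect, in TB
    daily_change_rate    -- fraction of data that changes per day (e.g. 0.02)
    dedup_ratio          -- expected deduplication ratio (10 means 10:1)
    full_retention_weeks -- how many weekly full backups are kept
    incr_retention_days  -- how many daily incremental backups are kept
    """
    weekly_full = front_end_tb                     # logical size of one full backup
    daily_incr = front_end_tb * daily_change_rate  # logical size of one incremental

    logical_retained = (weekly_full * full_retention_weeks
                        + daily_incr * incr_retention_days)

    # Deduplication reduces what is physically written to the pool.
    physical_tb = logical_retained / dedup_ratio
    return logical_retained, physical_tb


if __name__ == "__main__":
    logical, physical = msdp_pool_estimate(front_end_tb=100,
                                           daily_change_rate=0.02,
                                           dedup_ratio=10,
                                           full_retention_weeks=26,
                                           incr_retention_days=30)
    print(f"Logical data retained:        {logical:.0f} TB")
    print(f"Estimated physical pool size: {physical:.0f} TB (add headroom for growth)")
```

With these example inputs (100 TB front end, 2% daily change, 10:1 deduplication, 26 weekly fulls and 30 daily incrementals retained), the estimate is roughly 2,660 TB of logical data and about 266 TB of physical pool space, before any headroom for growth.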
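
To see how retention feeds the size of the NetBackup image database, count the images a single client accumulates under the example policy in the table (daily incrementals expired after one month, full backups after six months). The schedule values below are assumptions drawn from that example, not recommendations.

```python
# Sketch: how schedule frequency and retention drive the number of retained
# backup images per client. Values mirror the example policy in the table
# above and are assumptions, not recommendations.

incr_per_week = 6          # daily incrementals, skipping the day of the weekly full
fulls_per_week = 1

incr_retention_weeks = 4   # incrementals kept ~1 month
full_retention_weeks = 26  # fulls kept ~6 months

retained_incrementals = incr_per_week * incr_retention_weeks
retained_fulls = fulls_per_week * full_retention_weeks

print(f"Retained incrementals per client: {retained_incrementals}")
print(f"Retained fulls per client:        {retained_fulls}")
print(f"Total images per client:          {retained_incrementals + retained_fulls}")
# Multiply by the number of clients and policies to gauge how quickly the
# image database grows; doubling retention roughly doubles this count.
```

Under this example schedule, a single client holds about 50 images at any time; extending full retention from 6 to 12 months adds another 26 per client, which is why retention decisions have a direct and lasting effect on the image database.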
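
The backup-window and network-technology questions come down to the same arithmetic: how much data a link can move in the available window, and what link speed a given backup size requires. The sketch below assumes roughly 70% usable efficiency on the link to account for protocol and backup-stream overhead; treat that factor as an assumption and replace it with throughput you have measured.

```python
# Sketch: data moved through a network link in a backup window, and the link
# speed needed for a given backup size. The 0.70 efficiency factor is an
# assumption for protocol/stream overhead -- measure your own environment.

def window_capacity_tb(link_gbps, window_hours, efficiency=0.70):
    """Approximate terabytes transferable through the link during the window."""
    usable_gbps = link_gbps * efficiency
    gigabytes_per_hour = usable_gbps / 8 * 3600   # gigabits/s -> gigabytes/hour
    return gigabytes_per_hour * window_hours / 1000

def required_gbps(backup_tb, window_hours, efficiency=0.70):
    """Approximate link speed (Gb/s) needed to move backup_tb within the window."""
    gigabytes_per_hour = backup_tb * 1000 / window_hours
    return gigabytes_per_hour / 3600 * 8 / efficiency

if __name__ == "__main__":
    print(f"10 Gb link, 8-hour window: ~{window_capacity_tb(10, 8):.0f} TB")
    print(f"50 TB in an 8-hour window: ~{required_gbps(50, 8):.0f} Gb/s")
```

Under these assumptions, a single 10 Gb interface moves roughly 25 TB in an 8-hour window, and protecting 50 TB in the same window needs on the order of 20 Gb/s of aggregate bandwidth, which is where link aggregation or a dedicated backup network becomes relevant.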