How to configure buffers for NetBackup in a Windows environment to improve performance

Article: 100030830
Last Published: 2015-10-15
Product(s): NetBackup & Alta Data Protection

Problem

DOCUMENTATION: How to configure buffers for NetBackup in a Windows environment to improve performance

Solution

For most configurations, the default NetBackup buffer settings are correct and there is no need to adjust them to improve performance.  Furthermore, there are factors outside of NetBackup which affect performance and should be reviewed.  Some of these external factors include Host Bus Adapter (HBA) cards, SCSI cards, network interface card (NIC) settings, client disk I/O speed, network latency, and tape drive I/O.  All of these should be reviewed to determine their respective impact on backup and restore speeds prior to any attempt to tune NetBackup.

On a Windows host, four different buffer settings can be modified to enhance backup performance.  Those settings are:

· NUMBER_DATA_BUFFERS: This media server setting affects the number of data buffers used by NetBackup to buffer data prior to sending it to the attached storage unit (tape or disk).  The default value is 16.
· SIZE_DATA_BUFFERS: This media server setting affects the size of each data buffer and thus the size of the I/O block sent to the storage device.  The default value is 65536.
· NET_BUFFER_SZ: This setting applies to media servers [and UNIX clients].  It may affect the TCP send and receive space which is used to store data being transmitted between the two hosts.  The default value varies depending on the process and platform but is often 256KB.
· Buffer_size: The TCP send and receive space setting for Windows clients.  The default value is 32KB.

Overview:
When a backup is initiated, the NetBackup client reads data from the file system into an internal buffer, the size of which is not tunable.  When full, the internal buffer is presented to the operating system (OS) for transmission by TCP.  The client OS will accept as much of the buffer as possible into the TCP send space.  If it cannot accept the entire buffer, the system call will block and the NetBackup client will be idle until the call completes.  The TCP send space holds the outbound data that is either pending transmission or waiting for TCP acknowledgement from the remote host (in case it needs to be retransmitted).

On the media server host, the TCP stack receives the inbound segments from the network and places them into the TCP receive space, where they are sequenced for delivery to the application.  The data will remain in the receive space until any dropped or out-of-sequence packets are received, and then until space is available in a data buffer.  As soon as a data buffer is full, assuming the storage device is ready to write, the information is written to the storage unit.  Because there are multiple data buffers, the inbound data can be accepted from the network layer even when the storage unit pauses momentarily, such as to reposition the write head.
Note: If the media server host is also the client host, then the NetBackup client process reads the file system information directly into the next available data buffer, bypassing the need for the data to pass through the TCP send space on the client host and the TCP receive space on the media server host.  
Note: For a restore, the direction of flow is reversed.  The backup image is read from the storage unit device into an available (empty) data buffer.  When the data buffer is full, an operating system call is made to pass the contents to the TCP send space; the call blocks if there is not enough space available.  The data is then transmitted onto the network and not cleared from the send space until a TCP acknowledgement is received.  Upon reaching the client host, the data is placed temporarily into the TCP receive space until the tar program can write it back onto the file system.
 
Waits versus Delays:
When NetBackup is ready to do something but a buffer is not ready, that is counted as both a 'wait' and a 'delay'.  NetBackup then pauses for a few milliseconds and checks again; if a buffer is still not ready, that is counted as an additional 'delay'.  Hence the delay count will always be greater than or equal to the wait count.
Debug logging:
NetBackup writes debug log entries for the number of times it had to wait for full or empty data buffers.
· For a local backup, in which the media server is also the client, examine the bptm and bpbkar debug logs.
· For a remote client backup, in which the client is not the media server, examine the bptm log; it will contain entries from both the bptm parent process managing the storage unit side of the data buffers and the bptm child processes transferring data between the network layer and the data buffers.
 
The "waited for empty" and "waited for full" messages pertain to the data buffers in shared memory; they do not refer to the network buffers.

To troubleshoot performance issues related to data buffer settings, enable and review the bptm log on the media server and the bpbkar log on the client.  On the media server, go to the <INSTALL_PATH>\Veritas\NetBackup\Logs directory and create a bptm folder.  On the client, go to the <INSTALL_PATH>\Veritas\NetBackup\Logs directory and create a bpbkar folder.  Then, when there are no backups or restores running, stop and restart all NetBackup services on the media server and the client.
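For example, on a media server and client installed to the default path (an assumption; adjust to match the actual <INSTALL_PATH>), the log folders can be created from a command prompt:

rem On the media server, create the bptm log folder.
mkdir "C:\Program Files\Veritas\NetBackup\Logs\bptm"
rem On the client, create the bpbkar log folder.
mkdir "C:\Program Files\Veritas\NetBackup\Logs\bpbkar"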

Examine the bptm and bpbkar logs for references to waits and delays and compare one side with the other.  Individually, each side's number of waits means little; only when it is compared to the opposite side can you deduce where a potential bottleneck is:

Example 1:
bptm:
13:16:06.546 [2776.2620] <2> mpx_read_data: waited for empty buffer 0 times, delayed 0 times
 
bpbkar:
1:16:6.875 PM: [2532.1304] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 0 times for empty buffer, delayed 0 times
 
Conclusion: Neither the bptm nor the bpbkar process is waiting on the other.  Increasing the number of data buffers is unlikely to improve performance because bpbkar is not waiting on bptm.  Changing the network buffer settings will have no effect because bpbkar is writing directly into the data buffers.  Changing the size of the data buffers may improve performance, but see the caution below.

Example 2:
bptm:
12:32:25.937 [2520.2176] <2> write_data: waited for full buffer 17285 times, delayed 18012 times
 
bpbkar:
12:32:44.875 PM: [1372.135] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 612 times for empty buffer, delayed 651 times
 
Conclusion: The bptm process is waiting to receive data from the client many thousands of times more often than the client is waiting on the bptm process.  The bottleneck here is on the client.  Increasing SIZE_DATA_BUFFERS or NUMBER_DATA_BUFFERS, or tuning the network buffers, will not improve performance.  The key is to find out why the client is slow to pass data to the media server.  Investigate disk read performance on the client.

Example 3:
bptm:
13:31:42.343 [1800.2108]  <2> write_data: waited for full buffer 1 times, delayed 5 times
 
bpbkar:
1:30:25.301 PM: [1242.916] <4> tar_backup::OVPC_EOFSharedMemory: INF - bpbkar waited 7420 times for empty buffer, delayed 26525 times
 
Conclusion: The quantity of waits listed in bpbkar relative to bptm indicates the problem is on the storage side.  The client is waiting to send data until there is a place to put it, which indicates the data is not passing to the storage unit devices fast enough.  Increasing SIZE_DATA_BUFFERS or NUMBER_DATA_BUFFERS may help.  The key here is to figure out whether the performance bottleneck is the tape drive write speed or the HBA/SCSI transfer speed.
 
Example 4:
bptm parent:
00:03:06.641 [18645.112] <2> write_data: waited for full buffer 7529 times, delayed 137851 times
 
bptm child:
00:03:06.620 [18654] <2> fill_buffer: [18645] socket is closed, waited for empty buffer 0 times, delayed 0 times, read -689930240 bytes
 
Conclusion: The parent bptm process, which writes to the storage unit, is waiting for full buffers many thousands of times more often than the child bptm process, which receives data from the remote client, is waiting for empty buffers.  Investigate why the client or network is slow to provide the data.
 
Please note that the above examples have been extracted from logs on systems running versions prior to NetBackup 6.5.  bpbkar32.exe changed at 6.5: the code that logged this message in 6.0 is not present in 6.5 and later versions.  To be able to see these messages, set BPBKAR_VERBOSE to 7 or higher using nbsetconfig.  The following line is then reported in the bpbkar log on Windows:
 
12:12:02.572 [23112.22604] <2> BufferManagerLegacySharedMemory::~BufferManagerLegacySharedMemory(): DBG - bpbkar waited 0 times for empty buffer, delayed 0 times. (../BufferManagerLegacySharedMemory.cpp:111)
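A minimal sketch of setting this on the Windows client follows, assuming the default install path (nbsetconfig reads entries from standard input; press Ctrl+Z and then Enter to finish the input and apply the change):

"C:\Program Files\Veritas\NetBackup\bin\nbsetconfig"
BPBKAR_VERBOSE = 7
^Z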
 
Setting the verbose level to 7 is not required for UNIX clients because the bpbkar code still writes the message to the bpbkar log.

NUMBER_DATA_BUFFERS
To change the NUMBER_DATA_BUFFERS, create the <INSTALL_PATH>\NetBackup\db\config\NUMBER_DATA_BUFFERS file.  It should contain only the number of buffers to be created at the start of a backup.  If the file is not present, the default of 16 will be used.  Ensure the file is not created with a suffix such as '.txt'.
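For example, assuming the default install path and using 32 purely as an illustrative value, the touch file can be created from a command prompt:

rem Write the desired number of data buffers into the touch file (no .txt extension).
echo 32 > "C:\Program Files\Veritas\NetBackup\db\config\NUMBER_DATA_BUFFERS"
rem Display the file to confirm its contents.
type "C:\Program Files\Veritas\NetBackup\db\config\NUMBER_DATA_BUFFERS"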

SIZE_DATA_BUFFERS
Remember this is the size of each data buffer set up on the media server, the number of which is defined by NUMBER_DATA_BUFFERS.  Exercise caution when changing the value from the default setting, as some SCSI cards, HBA cards, and tape drives cannot transfer buffers larger than 65536 bytes efficiently and correctly.  After changing this value, it is important to test both backups and restores, as sometimes data can be written at the modified size but cannot be read back at that size.  Please review the specifications of the HBA card, SCSI card, and tape drive to confirm that the vendor-recommended values are not being exceeded.
 
To change the SIZE_DATA_BUFFERS setting, create a file called SIZE_DATA_BUFFERS in the <INSTALL_PATH>\NetBackup\db\config directory.  Add the desired value to this file, expressed in bytes and in multiples of 1024.  Ensure the file has no extension after saving.  If the file is not present, the default SIZE_DATA_BUFFERS value of 65536 is used.
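For example, assuming the default install path and using 262144 (256 KB) purely as an illustrative value that has already been confirmed against the HBA, SCSI card, and tape drive specifications:

rem Write the buffer size in bytes (a multiple of 1024) into the touch file.
echo 262144 > "C:\Program Files\Veritas\NetBackup\db\config\SIZE_DATA_BUFFERS"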

NET_BUFFER_SZ
Note the NET_BUFFER_SZ value is the size of the TCP send and receive space on the media server, which receives data from the client.  NET_BUFFER_SZ is set on the media server by creating the NET_BUFFER_SZ file in the <INSTALL_PATH>\NetBackup directory.  If the file is created for tuning purposes, be sure the file name has no extension such as '.txt'.  The recommended value is 0, which allows Windows to auto-tune.
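For example, assuming the default install path, the recommended value of 0 can be set as follows:

rem A value of 0 lets Windows auto-tune the TCP send and receive space.
echo 0 > "C:\Program Files\Veritas\NetBackup\NET_BUFFER_SZ"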

Buffer_size
Please note the value of Buffer_size does not show up in the bptm log file, but it can be viewed and set from the master server or by modifying a registry value on the client.  The default value for the Buffer_size setting is 32KB.
 
On the client, the buffer size (Buffer_size) can be modified in two ways.
 
1. From the NetBackup Administration Console, open Host Properties | Clients and double-click the client you are configuring.  Expand the options under Windows Client and select Client Settings.  Change the Communications buffer to 0 to allow Windows to auto-tune.
 
Figure 1: Client Settings host properties for the selected client (screenshot not reproduced here).

2. Edit the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\NetBackup\CurrentVersion\Config\Buffer_Size
 
Change the decimal value to 0 to allow Windows to auto-tune.
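As an illustration only, the value can also be viewed and changed from an elevated command prompt using the standard reg tool (this assumes Buffer_Size is stored as a DWORD, which matches the 'decimal value' wording above):

rem View the current Buffer_Size value.
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\NetBackup\CurrentVersion\Config" /v Buffer_Size
rem Set Buffer_Size to 0 so Windows auto-tunes the TCP space.
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\NetBackup\CurrentVersion\Config" /v Buffer_Size /t REG_DWORD /d 0 /f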
 
Note: See the related articles for best practices regarding NET_BUFFER_SZ and Buffer_size.

Viewing values of the buffer settings:
A review of the bptm log file will show the current settings for NET_BUFFER_SZ, SIZE_DATA_BUFFERS, and NUMBER_DATA_BUFFERS.  The following is an excerpt from the bptm log on a media server showing the value of these three settings.
 
16:54:40 [284.2260] <2> io_set_recvbuf: setting receive network buffer to 32768 bytes
16:54:40 [284.2260] <2> io_set_recvbuf: receive network buffer is 32540 bytes
16:54:40 [284.2260] <2> io_init: using 32768 data buffer size
...
16:54:40 [284.2260] <2> io_init: using 16 data buffers
 
Note:  In the above example, the operating system did not honor the request to adjust the TCP receive space.  Prior to Windows 2008, applications are not allowed to adjust the TCP memory after a connection is established.  TCP Window Auto-Tuning, introduced in Windows 2008, does allow for manual TCP memory tuning by the application, but it is generally better to set the NetBackup NET_BUFFER_SZ and/or Buffer_size to 0 and allow Windows to auto-tune.
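If the bptm log file is large, these lines can be located quickly with findstr; for example, assuming the default log location and the legacy log file naming:

findstr /i "io_init io_set_recvbuf" "C:\Program Files\Veritas\NetBackup\Logs\bptm\*.log"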
 
 
 

 
