Best practices for NET_BUFFER_SZ and Buffer_Size, and why NetBackup cannot always change the TCP send/receive space

Article: 100016112
Last Published: 2023-08-15
Product(s): NetBackup & Alta Data Protection

Problem

The NET_BUFFER_SZ file is configured on a media server or UNIX/Linux client, but appears to be ignored.  

Similarly, the HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\NetBackup\CurrentVersion\Config\Buffer_Size registry key (Communication Buffer Size GUI setting) is configured for a Windows client, but appears to be ignored.

Historically, these settings have been used to adjust the TCP SO_SNDBUF and SO_RCVBUF on media server and client hosts.   Those adjustments allowed the sending TCP stack to accept additional outbound application data while waiting for TCP acknowledgements, and allowed the receiving TCP stack to buffer a larger amount of inbound data while either waiting to properly sequence missing frames or waiting for the receiving application to read the already sequenced data.  

Is there a reason these settings sometimes do not work or cause unexpected behaviors?

Error Message

A review of the debug logs, or of the TCP window size in the packets of a network trace, suggests that the operating system (O/S) is ignoring the setsockopt API call to adjust the TCP memory for the socket.

The bptm debug log shows that NetBackup detected the new setting and called the setsockopt API to change the TCP send space to 256 KB, but the subsequent call to the getsockopt API indicates that the value remains unchanged.

<2> io_set_sendbuf: setting send network buffer to 262144 bytes
<2> io_set_sendbuf: send network buffer is 65536 bytes

 

The results are similar during the restore.

<2> io_set_recvbuf: setting receive network buffer to 262144 bytes
<2> io_set_recvbuf: receive network buffer is 65535 bytes

 

Similar log entries will appear in the Job Details at NetBackup 7.1 and later, and also in the client-side debug logs.

To see the debug log entries, turn up TCP logging on Windows clients and VERBOSE logging on media servers and UNIX/Linux clients.
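
The set-then-verify pattern that bptm performs can be reproduced outside NetBackup to check whether the local O/S honors such a request.  A minimal sketch in Python (the 262144-byte request mirrors the log entries above; note that Linux doubles the requested value internally for bookkeeping overhead, so getsockopt may legitimately report twice the requested size):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
for opt, name in ((socket.SO_SNDBUF, "send"), (socket.SO_RCVBUF, "receive")):
    s.setsockopt(socket.SOL_SOCKET, opt, 262144)    # request 256 KB, as bptm does
    granted = s.getsockopt(socket.SOL_SOCKET, opt)  # ask the O/S what was granted
    print(name, "network buffer is", granted, "bytes")
s.close()

If the reported values remain at the platform default, the O/S is capping or ignoring the request, as in the bptm log entries above.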

Cause

A bit of history and evolution

These configurable settings were added to an early version of NetBackup because operating systems (O/S) at that time did not commonly allocate much TCP memory.  The O/S was concerned with conserving an expensive and limited resource (RAM), and most network traffic consisted of small bursts of data compared to the huge amount of bandwidth consumed by a full system backup.  It was useful to have NetBackup request a larger amount of TCP memory to smooth out the data flow and provide better performance if a connection had significant latency or dropped frames frequently [requiring a wait for retransmission].

In addition, early versions of NetBackup used what is now called legacy callback from the client to bptm, which allowed bptm to adjust the TCP memory before the socket was used.

Since those days, networking and NetBackup have evolved significantly.  Network stacks now make use of complex algorithms to shape data transmission based on real-time network latency, to recover smoothly and more accurately when frames are lost, and to dynamically adjust the TCP memory based on overall real-time system load.

Similarly, to accommodate firewalls, NetBackup has newer processes (PBX and vnetd) which listen on behalf of other processes and exchange a small amount of protocol over the connection before it is transferred to the end processes, specifically bptm and the client processes.

The resulting potential conflicts

Some newer operating system versions (Linux especially) do not allow TCP memory to be adjusted by the application once a connection has been established.  This negates the efforts of bptm to adjust the TCP memory.  See the tcp(7) man page for details.
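
For example, per the tcp(7) man page, the socket buffer sizes must be set before the connect or listen call in order to take effect on the connection.  A minimal sketch in Python of the required ordering (the peer address and port are hypothetical):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Size the buffers BEFORE connect(); a setsockopt issued after the
# connection is established may be silently ignored on newer kernels.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 262144)
s.connect(("media-server.example.com", 1556))  # hypothetical peer and port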

Some operating systems (Linux and Windows especially) disable memory auto-tuning if an application manually adjusts the TCP memory, thus preventing the system from automatically making the data transfer even more efficient when additional memory is available.

Network algorithms are increasingly complex, and some driver versions are not well written and do not react well when an application requests a resize of the TCP memory.

The results can be unexpected and/or undesirable.  In some cases the operating system:

  • silently ignores the setsockopt request (as in the examples above).
  • tries to adjust the memory but under-performs thereafter (see Related Articles).
  • tries to adjust the memory but has timing and/or data tracking problems and drops the connection after some time (see Related Articles).

 

Solution

A modern, well configured operating system with properly written TCP drivers is unlikely to need TCP memory tuning by NetBackup.   Accordingly, the best NetBackup configuration is to disable tuning by placing a zero (0) into the NET_BUFFER_SZ file on media servers and UNIX/Linux clients.   Simply deleting the file is not equivalent because some NetBackup processes have default setsockopt API calls configured to overcome past external problems with various platforms and drivers.

$ echo '0' > /usr/openv/netbackup/NET_BUFFER_SZ

On Windows clients, the same effect can sometimes be obtained by placing a 0 into the registry key below.  But in some cases, explicit values may yield better performance.

HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\NetBackup\CurrentVersion\Config\Buffer_Size
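
For scripted deployments, the value can be set with Python's winreg module.  A hedged sketch, assuming Buffer_Size is stored as a REG_DWORD (verify the existing value name and type first, e.g. with regedit):

import winreg

# Open the NetBackup client configuration key for writing.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\VERITAS\NetBackup\CurrentVersion\Config",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "Buffer_Size", 0, winreg.REG_DWORD, 0)  # 0 disables tuning
winreg.CloseKey(key)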

Note: The registry change may be lost or reverted if other Client Settings are later changed via the Client Host Properties GUI.  See the related articles.


To determine if manual tuning can provide better performance than auto-tuning, record the current O/S settings, then temporarily adjust the TCP memory allocations upwards in increments of 65536 or 131072 bytes and observe whether the changes are beneficial.  Also ensure that TCP window scaling is enabled.

For AIX:

$ no -o sb_max[=<newGlobalMax>] (global max)
$ no -o tcp_sendspace[=<newValue>]
$ no -o tcp_recvspace[=<newValue>]
$ no -o rfc1323[=1] (1 = window scaling on)

 

 

For HP-UX:   (window scaling will occur if > 64 KB)

$ ndd -get /dev/tcp tcp_xmit_hiwater_max (global max)
$ ndd -get /dev/tcp tcp_recv_hiwater_max (global max)
$ ndd -get /dev/tcp tcp_xmit_hiwater_def
$ ndd -get /dev/tcp tcp_recv_hiwater_def
$ ndd -set /dev/tcp <keyword> <newValue>

 

For Linux:

$ sysctl net.core.wmem_max (global send max)
$ sysctl net.core.rmem_max (global receive max)
$ sysctl net.ipv4.tcp_wmem (min default max, in bytes)
$ sysctl net.ipv4.tcp_rmem (min default max, in bytes)
$ sysctl net.ipv4.tcp_window_scaling (1 = on)
$ sysctl net.ipv4.tcp_adv_win_scale (receive buffer split factor)
$ sysctl net.ipv4.tcp_moderate_rcvbuf (1 = receive auto-tuning on)
$ sysctl -w <keyword>=<newValue>
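
To record the current Linux values before experimenting, as advised above, the same settings can also be read directly from /proc.  A minimal sketch in Python (Linux only; the list of settings is illustrative, not exhaustive):

# Read the sysctl values from /proc/sys; these are the same values
# that the sysctl commands above report.
settings = (
    "net/core/wmem_max",
    "net/core/rmem_max",
    "net/ipv4/tcp_wmem",
    "net/ipv4/tcp_rmem",
    "net/ipv4/tcp_window_scaling",
    "net/ipv4/tcp_moderate_rcvbuf",
)
for name in settings:
    with open("/proc/sys/" + name) as f:
        print(name.replace("/", "."), "=", f.read().strip())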

 

For Solaris:

$ ndd -get /dev/tcp tcp_max_buf (global max)
$ ndd -get /dev/tcp tcp_xmit_hiwat
$ ndd -get /dev/tcp tcp_recv_hiwat
$ ndd -get /dev/tcp tcp_wscale_always (1 = on)
$ ndd -set /dev/tcp <keyword> <newValue>

 

If the manual tuning provides an increase in performance, the new settings can usually be left in place on standalone media servers because typically only a few hundred connections exist at any given point in time.   However, master servers, SAN media servers, and application client hosts may find this tuning non-optimal if there are many hundreds of concurrent connections that do not need high bandwidth [and thus the additional memory].   If so, change the O/S settings back to the original values and then set the NET_BUFFER_SZ contents (in bytes) or the Communication Buffer Size (in KB) to an equivalent value; e.g. 262144 bytes in NET_BUFFER_SZ corresponds to 256 in the Communication Buffer Size.   If performance degrades or connections begin to drop, then there is a problem with one of the TCP stacks and the appropriate vendor(s) should be engaged for resolution if the manual O/S tuning cannot be put back in place.

If retaining the new O/S settings, be sure to make them persist across reboots.   See the O/S vendor documentation for guidance.  

Note 1: In rare situations

It can sometimes be beneficial to adjust NET_BUFFER_SZ or Buffer_Size to work around problems with the TCP stack.   TCP drivers are complex, and occasionally having NetBackup call setsockopt changes their behavior from less desirable to more desirable; however, this is unpredictable.

Note 2: NetBackup for Windows Java GUI resetting Buffer_Size

Older versions of the NetBackup Java Administration Console on Windows do not recognize 0 as a valid value for Buffer_Size.  If changes are made to other values on the same Host Properties screen, saving those changes will inadvertently change the 0 back to the default value for that version of NetBackup.  This is corrected in NetBackup versions 7.6.0.3 and 7.6.1.

Note 3: For clients running NetBackup 7.7.2 - 8.0 (8.1 on Windows)

An Emergency Engineering Binary (EEB) is needed to enable the desired behavior discussed above.  In the absence of the EEB, manual tuning of NET_BUFFER_SZ or Buffer_size may be necessary.  Please contact NetBackup Technical Services and reference E-Track 3914429 (for UNIX restores) or E-Track 3925723 (for Windows backups and/or restores).

Common Myths  

There are a number of common myths regarding NetBackup buffer tuning.   Some were true when specific TCP drivers were in place on one or both hosts, or helped to overcome specific external problems.   But none are true in the general, universal case.

Myth 1: NET_BUFFER_SZ affects the size of the data block that NetBackup fills and provides to the TCP stack.  

Not true, but see the Note below.

  • The NET_BUFFER_SZ and Buffer_Size values are used by NetBackup only when calling the setsockopt API to adjust the TCP memory for the connection.   They do not determine how much data NetBackup writes or reads at once.
  • bpbkar reads data from the file system and writes to the connection 256 KB or 512 KB at a time (depending on version/platform/etc), dbclient may use 64 KB or 256 KB depending on database type and database configuration.  
  • tar provides a 256 KB or 512 KB buffer (depending on version/platform/etc) for the TCP stack to fill, but only receives as much data as the TCP stack has sequenced, which is immediately written to the file system while the networking layers buffer and sequence additional inbound data.
  • bptm/bpdm requests the O/S to fill or empty the next data buffer which is sized per the default or configured SIZE_DATA_BUFFERS setting.  

Note: If the TCP memory is larger than the amount of data that NetBackup writes, then the TCP stack can ideally accept and buffer the entire amount.   This allows the write or send API calls to return control to NetBackup which can begin to gather additional data while TCP transmits the initial data, emptying the TCP memory.   Otherwise, the write or send call blocks waiting for the network to transmit [and receive acknowledgement for] the initial portion of the data until the remainder will fit in the TCP send space.  
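
The blocking behavior described in this Note can be observed with a small experiment.  A minimal sketch in Python (loopback only; the exact byte count varies by platform because each kernel sizes and partitions its buffers differently):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)  # deliberately small send space
cli.connect(srv.getsockname())
peer, _ = srv.accept()  # the peer never reads, so buffers fill and stay full

cli.setblocking(False)  # so the demonstration returns instead of stalling
sent = cli.send(b"x" * 262144)  # offer 256 KB to the TCP stack at once
# Only the portion that fits in the TCP memory is accepted; a blocking
# write would stall at this point until the receiver drains some data.
print("bytes accepted by the TCP stack:", sent)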

Myth 2: Socket memory tuning is needed for efficient restore of large images.  

Potentially true, but a bit out of context.

  • Large images are not transferred by TCP any less efficiently than small images (at least those over a few MB).   However, if a transfer is X% inefficient, that inefficiency will be much more noticeable for a large image than for a small one.
  • Having an SO_SNDBUF setting on the media server which is larger than the default or configured SIZE_DATA_BUFFERS allows the sending TCP stack to take an entire data buffer from bptm at once so that the buffer can be refilled from the storage device.
  • Having an SO_RCVBUF setting on the client which is larger than the amount of data that the client writes to the file system allows an entire application buffer of data to be received and sequenced by TCP while waiting for NetBackup to write the prior buffer to the file system.   If file system I/O is bursty, this can be of significant benefit.   See Myth 1 for additional details.  

Myth 3: The TCP memory needs to be configured the same on both the media server and client hosts.     

Not true, but similar sizes can be useful.  

  • The data to be transferred is broken into smaller, typically 1460-byte Ethernet frames, for transmission and acknowledgement.   The 'buffer' of application data is not transferred en masse as a unit.
  • The receiving TCP stack will fill the provided bptm data buffer as data is received and properly sequenced, but multiple network paths and dropped frames will make the delivery bursty.
  • The bptm data buffer size should be optimized for the storage device, not the network.
  • Ever notice that all the O/S vendors use different default TCP memory sizes, often with differences between read and write?   If there were a requirement for consistency, not only would the O/S read and write defaults be the same, but the vendors would coordinate to use similar defaults and document the benefits.   TCP is dynamic enough to handle any layer 2 framing method or other variation in the network transport, so this is not a requirement.
  • Each O/S vendor partitions the TCP SO_RCVBUF differently between sequenced data awaiting read by the application and out-of-sequence data awaiting missing frames, so even if NetBackup requested the same amount on all platforms the results would vary. (Linux always allocates twice the setsockopt request.)
  • But there is benefit to some coordination.  Increasing the SO_RCVBUF affects the size of the TCP window advertised by the receiver.  But if the SO_SNDBUF on the sender is smaller, the sender cannot use the full window because it cannot discard data that might need to be retransmitted until that data is acknowledged.

Myth 4: 32 KB is a great size for Windows clients, 64 (or 256) KB is a great size for UNIX hosts.     

A decade ago there was some truth to this on LAN segments when 10/100 Mb speeds were the upper limits.   But with modern 1 Gb and 10 Gb networks, WAN latency, improvements to TCP protocols, and cheap and abundant RAM, those low limits seriously impede many current backup operations.  Most modern operating systems commonly use 1 - 4 MB as the default sizes.

 

Formal Resolution      (with respect to NetBackup changing when the setsockopt call occurs)  

Veritas Corporation has acknowledged that the above-mentioned issue is present in the current version(s) of the product(s) mentioned at the end of this article.    Veritas is committed to product quality and satisfied customers. 

This issue is currently being considered by Veritas to be addressed in the next major revision of the product.   There are no plans to address this issue by way of a patch or hotfix in the current or previous version of the software at the present time.   Please note that Veritas reserves the right to remove any fix from the targeted release if it does not pass quality assurance tests, or introduces new risks to the overall code stability.   Veritas's plans are subject to change and any action taken by you based on the above information, or your reliance upon the above information is made at your own risk.  

Please be sure to refer back to this document periodically, as any changes to the status of the issue will be reflected here.

Applies To

NetBackup 3.x - 10.2
 

References

Etrack: 3427838
Etrack: 539793
Etrack: 3914429
Etrack: 3925723
