Veritas NetBackup™ Cloud Administrator's Guide
- About NetBackup cloud storage
- About the cloud storage
- About the Amazon S3 cloud storage API type
- About protecting data in Amazon for long-term retention
- Protecting data using Amazon's cloud tiering
- About Microsoft Azure cloud storage API type
- About OpenStack Swift cloud storage API type
- Configuring cloud storage in NetBackup
- Scalable Storage properties
- Cloud Storage properties
- About the NetBackup CloudStore Service Container
- About the NetBackup media servers for cloud storage
- Configuring a storage server for cloud storage
- NetBackup cloud storage server properties
- Configuring a storage unit for cloud storage
- Changing cloud storage disk pool properties
- Monitoring and Reporting
- Operational notes
- Troubleshooting
- About unified logging
- About legacy logging
- Troubleshooting cloud storage configuration issues
- Troubleshooting cloud storage operational issues
About object size for cloud storage
The performance of NetBackup in the cloud is driven by the combination of object size, number of parallel connections, and the read or write buffer size.
The parameters are described as follows:
Object size: The backup data stream is divided into fixed-size chunks, which are stored as objects in the cloud object storage. Backup-related metadata is written as variable-sized objects.
Read or write buffer size: You can configure the read or write buffer size to tune the performance of the backup and restore operations.
Note:
If you increase the read or write buffer size, the number of parallel connections increases. Similarly, to use fewer parallel connections, reduce the read or write buffer size. In either case, consider the available network bandwidth and the system memory on the host.
Parallel connections (Derived): To enhance the performance of backup and restore operations, NetBackup uses multiple parallel connections to the cloud storage. The performance of NetBackup depends on the number of parallel connections.
The number of parallel connections is derived from the read or write buffer size and the object size:
Number of Parallel Connections = Read or Write Buffer Size / Object Size
Consider the following factors when deciding the number of parallel connections (a sketch illustrating how they interact follows this list):
- Maximum number of parallel connections permitted by the cloud storage provider.
- Network bandwidth available between NetBackup and the cloud storage environment.
- System memory available on the NetBackup host.
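The following sketch shows how the derived connection count interacts with these limits. It is illustrative only: the function name, the provider connection limit, the memory figure, and the assumption that each connection holds roughly one object-sized buffer in memory are not NetBackup parameters or defaults.

```python
# Illustrative sketch only: not NetBackup code. Shows how the derived number
# of parallel connections (read/write buffer size divided by object size)
# can be capped by the provider limit and by available system memory.

MB = 1024 * 1024

def derived_connections(buffer_size_bytes, object_size_bytes,
                        provider_max_connections, free_memory_bytes):
    """Return the effective number of parallel connections."""
    # Core relationship from this section:
    #   connections = read/write buffer size / object size
    connections = buffer_size_bytes // object_size_bytes

    # Hypothetical assumption: each connection needs roughly one
    # object-sized buffer, so available memory also caps the count.
    memory_cap = free_memory_bytes // object_size_bytes

    return min(connections, provider_max_connections, memory_cap)

# Example: 400 MB write buffer, 16 MB objects, a hypothetical provider limit
# of 64 concurrent connections, and 1 GB of memory available for buffers.
print(derived_connections(400 * MB, 16 * MB, 64, 1024 * MB))   # -> 25
```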
The default settings are as follows:
Table: Current default settings
| Cloud storage provider | CloudCatalyst object size | CloudCatalyst default read or write buffer size | Classic cloud object size | Classic cloud default read or write buffer size |
|---|---|---|---|---|
| Amazon S3/Amazon GovCloud | 64 MB (fixed) | 64 MB (fixed) | 16 MB (fixed) | 400 MB (configurable between 16 MB and 1 GB) |
| Azure | 64 MB (fixed) | 64 MB (fixed) | 4 MB (fixed) | 400 MB (configurable between 4 MB and 1 GB) |
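For example, applying the formula above to the classic cloud storage defaults in this table gives the following derived connection counts (a quick illustrative calculation, not a provider limit):

```python
# Derived parallel connections for the classic cloud storage defaults above:
#   Number of Parallel Connections = Read or Write Buffer Size / Object Size
classic_defaults_mb = {
    "Amazon S3/Amazon GovCloud": (400, 16),  # (buffer size MB, object size MB)
    "Azure": (400, 4),
}
for provider, (buffer_mb, object_mb) in classic_defaults_mb.items():
    print(f"{provider}: {buffer_mb // object_mb} connections")  # 25 and 100
```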
If temporary network failures occur during data transfer, NetBackup retries the transfer of the failed objects. If the failures persist, the complete object is transferred again. High latency and packet loss can also reduce performance; increasing the number of parallel connections can help offset these issues.
NetBackup also enforces client-side timeouts. If an upload operation takes longer than the minimum derived NetBackup data transfer rate allows (for example, because of a large object size), the operation can fail.
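The retry behavior described above can be pictured with the following sketch. It is purely illustrative: upload_object, the retry count, and the backoff interval are hypothetical placeholders, not NetBackup internals.

```python
import time

def upload_object(data: bytes) -> None:
    """Hypothetical placeholder for a single-object upload to cloud storage."""
    raise NotImplementedError

def upload_with_retries(data: bytes, max_retries: int = 3,
                        backoff_seconds: float = 2.0) -> None:
    """Retry a failed object; persistent failures surface to the caller."""
    for attempt in range(1, max_retries + 1):
        try:
            # The complete object is transferred again on every attempt.
            upload_object(data)
            return
        except IOError:
            if attempt == max_retries:
                raise            # persistent failure is reported upward
            time.sleep(backoff_seconds * attempt)
```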
Consider the following for legacy environments without deduplication support: when restoring from back-level images (NetBackup 8.0 and earlier), the object size is 1 MB, so the 16 MB buffer for each connection is not completely utilized even though it consumes memory. With the larger object sizes, the available memory restricts the number of connections. Fewer connections means fewer parallel downloads than with the earlier connection counts.