Veritas NetBackup™ Cloud Administrator's Guide
- About NetBackup cloud storage
- About the cloud storage
- About the cloud storage vendors for NetBackup
- About the Amazon S3 cloud storage API type
- Amazon S3 cloud storage vendors certified for NetBackup
- Amazon S3 storage type requirements
- Permissions required for Amazon S3 cloud provider user
- Amazon S3 cloud storage provider options
- Amazon S3 cloud storage options
- Amazon S3 advanced server configuration options
- Amazon S3 credentials broker details
- About private clouds from Amazon S3-compatible cloud providers
- About Amazon S3 storage classes
- Amazon virtual private cloud support with NetBackup
- About protecting data in Amazon for long-term retention
- Protecting data using Amazon's cloud tiering
- About using Amazon IAM roles with NetBackup
- About NetBackup character restrictions for Amazon S3 cloud connector
- Protecting data with Amazon Snowball and Amazon Snowball Edge
- Configuring NetBackup for Amazon Snowball with Amazon Snowball client
- Configuring NetBackup for Amazon Snowball with Amazon S3 API interface
- Using multiple Amazon S3 adapters
- Configuring NetBackup with Amazon Snowball Edge with file interface
- Configuring NetBackup for Amazon Snowball Edge with S3 API interface
- Configuring NetBackup for Amazon Snowball and Amazon Snowball Edge for NetBackup CloudCatalyst Appliance
- Configuring SSL for Amazon Snowball and Amazon Snowball Edge
- Post backup procedures if you have used S3 API interface
- About Microsoft Azure cloud storage API type
- About OpenStack Swift cloud storage API type
- Configuring cloud storage in NetBackup
- Before you begin to configure cloud storage in NetBackup
- Configuring cloud storage in NetBackup
- Cloud installation requirements
- Scalable Storage properties
- Cloud Storage properties
- About the NetBackup CloudStore Service Container
- Deploying host name-based certificates
- Deploying host ID-based certificates
- About data compression for cloud backups
- About data encryption for cloud storage
- About key management for encryption of NetBackup cloud storage
- About cloud storage servers
- About object size for cloud storage
- About the NetBackup media servers for cloud storage
- Configuring a storage server for cloud storage
- Changing cloud storage server properties
- NetBackup cloud storage server properties
- About cloud storage disk pools
- Configuring a disk pool for cloud storage
- Saving a record of the KMS key names for NetBackup cloud storage encryption
- Adding backup media servers to your cloud environment
- Configuring a storage unit for cloud storage
- About NetBackup Accelerator and NetBackup Optimized Synthetic backups
- Enabling NetBackup Accelerator with cloud storage
- Enabling optimized synthetic backups with cloud storage
- Creating a backup policy
- Changing cloud storage disk pool properties
- Certificate validation against Certificate Revocation List (CRL)
- Managing Certification Authorities (CA) for NetBackup Cloud
- Monitoring and Reporting
- Operational notes
- Troubleshooting
- About unified logging
- About legacy logging
- NetBackup cloud storage log files
- Enable libcurl logging
- NetBackup Administration Console fails to open
- Troubleshooting cloud storage configuration issues
- NetBackup Scalable Storage host properties unavailable
- Connection to the NetBackup CloudStore Service Container fails
- Cannot create a cloud storage disk pool
- Cannot create a cloud storage
- Data transfer to cloud storage server fails in the SSL mode
- Amazon GovCloud cloud storage configuration fails in non-SSL mode
- Data restore from the Google Nearline storage class may fail
- Backups may fail for cloud storage configurations with Frankfurt region
- Backups may fail for cloud storage configurations with the cloud compression option
- Fetching storage regions fails with authentication version V2
- Troubleshooting cloud storage operational issues
- Cloud storage backups fail
- Stopping and starting the NetBackup CloudStore Service Container
- A restart of the nbcssc (on legacy media servers), nbwmc, and nbsl processes reverts all cloudstore.conf settings
- NetBackup CloudStore Service Container startup and shutdown troubleshooting
- bptm process takes time to terminate after cancelling GLACIER restore job
- Handling image cleanup failures for Amazon Glacier vault
- Cleaning up orphaned archives manually
- Restoring from Amazon Glacier vault spans more than 24 hours for single fragment
- Restoring from GLACIER_VAULT takes more than 24 hours for Oracle databases
- Troubleshooting failures due to missing Amazon IAM permissions
- Restore job fails if the restore job start time overlaps with the backup job end time
- Post processing fails for restore from Azure archive
- Troubleshooting Amazon Snowball and Amazon Snowball Edge issues
About object size for cloud storage
During a backup, NetBackup divides the backup image data into chunks called objects. A PUT request is made for each object to move it to the cloud storage.
By setting a custom Object Size, you can control the number of PUT and GET requests that are sent to and from the cloud storage. A reduced number of PUT and GET requests helps lower the total charges that are incurred for the requests.
During the creation of a cloud storage server, you can specify a custom value for the Object Size. Consider the cloud storage provider, hardware, infrastructure, expected performance, and other factors when deciding the value. After you set the Object Size for a cloud storage server, you cannot change the value. If you want a different Object Size, you must re-create the cloud storage server.
The performance of NetBackup in the cloud is driven by the combination of the object size, the number of parallel connections, and the read or write buffer size.
To enhance the performance of backup and restore operations, NetBackup uses multiple parallel connections to the cloud storage. The performance of NetBackup depends on the number of parallel connections, which is derived from the read or write buffer size and the object size.
Number of parallel connections (derived) = Read or write buffer size (user set) ÷ Object size (user set)

The following diagram illustrates how these factors are related:
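The derivation above can be sketched as a short calculation. This is an illustration of the formula only, not a NetBackup API; the default values used below are the non-CloudCatalyst defaults from the table later in this section:

```python
def parallel_connections(buffer_mb: int, object_mb: int) -> int:
    """Derive the number of parallel connections from the user-set
    read or write buffer size and the user-set object size."""
    if object_mb <= 0:
        raise ValueError("object size must be positive")
    return buffer_mb // object_mb

# Default non-CloudCatalyst settings from the table in this section:
print(parallel_connections(400, 16))  # Amazon S3: 25 connections
print(parallel_connections(400, 4))   # Azure: 100 connections
```

Increasing the object size while keeping the buffer size fixed reduces the derived connection count, which is the trade-off the surrounding text describes.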
Consider the following factors when deciding the number of parallel connections:
- Maximum number of parallel connections that are permitted by the cloud storage provider.
- Network bandwidth availability between NetBackup and the cloud storage environment.
- System memory availability on the NetBackup host.
If you increase the object size, the number of parallel connections is reduced. The number of parallel connections affects the upload and download rate.
If you increase the read or write buffer size, the number of parallel connections increases. Similarly, if you want fewer parallel connections, you can reduce the read or write buffer size. However, you must consider the network bandwidth and the system memory availability.
Cloud providers charge for the number of PUT and GET requests that are initiated during a backup or restore process. The smaller the object size, the higher the number of PUT or GET requests, and therefore, the higher the charges that are incurred.
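To make the cost trade-off concrete, you can estimate the PUT request count for an image at different object sizes. The per-request price below is a hypothetical placeholder; actual pricing varies by provider, region, and storage class:

```python
def put_request_count(image_size_mb: int, object_size_mb: int) -> int:
    """Number of PUT requests needed to upload an image,
    one request per object (rounded up)."""
    return -(-image_size_mb // object_size_mb)  # ceiling division

# Hypothetical price per 1,000 PUT requests; check your provider's pricing.
PRICE_PER_1000_PUTS = 0.005

image_mb = 100 * 1024  # a 100 GB backup image
for object_mb in (4, 16, 64):
    requests = put_request_count(image_mb, object_mb)
    cost = requests / 1000 * PRICE_PER_1000_PUTS
    print(f"{object_mb} MB objects -> {requests} PUT requests, ~${cost:.3f}")
```

For a 100 GB image, moving from 4 MB to 64 MB objects cuts the PUT count from 25,600 to 1,600, a 16x reduction in request charges.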
In case of temporary failures with data transfer, NetBackup retries transferring the failed objects multiple times. If the failures persist, the complete object is transferred again. Also, with higher latency and higher packet loss, performance might degrade. To handle latency and packet loss issues, increasing the number of parallel connections can help.
NetBackup has time-outs on the client side. If an upload operation takes more time (because of a large object size) than the minimum derived NetBackup data transfer rate allows, failures can occur in NetBackup.
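One way to reason about this risk is to estimate the per-object upload time at your effective per-connection throughput and compare it to the client-side time-out. The time-out value and throughput figures below are hypothetical placeholders, not documented NetBackup settings:

```python
def upload_time_seconds(object_mb: float, mb_per_sec: float) -> float:
    """Approximate time to upload one object at a given
    per-connection throughput."""
    return object_mb / mb_per_sec

CLIENT_TIMEOUT_SEC = 300  # hypothetical client-side time-out

# A 64 MB object over a slow 0.2 MB/s connection risks timing out.
t = upload_time_seconds(64, 0.2)
print(f"{t:.0f} s:", "OK" if t < CLIENT_TIMEOUT_SEC else "risk of time-out")
```

If the estimated per-object time approaches the time-out, a smaller object size (or more bandwidth per connection) lowers the chance of a failed transfer.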
For legacy environments without deduplication support, a smaller number of connections means fewer parallel downloads compared to the earlier number of connections. For example, when restoring from back-level images (8.0 and earlier), where the object size is 1 MB, the 16 MB buffer for one connection is not completely used while still consuming memory. With the increased object size, the number of connections is restricted by the memory that is available for the read or write buffer.
The default settings are as follows:

Table: Current default settings

| Cloud storage provider | Object size (CloudCatalyst storage) | Default read or write buffer size (CloudCatalyst storage) | Object size (non-CloudCatalyst storage) | Default read or write buffer size (non-CloudCatalyst storage) |
|---|---|---|---|---|
| Amazon S3/Amazon GovCloud | 64 MB (fixed) | 64 MB (fixed) | 16 MB (fixed) | 400 MB (configurable between 16 MB and 1 GB) |
| Azure | 64 MB (fixed) | 64 MB (fixed) | 4 MB (fixed) | 400 MB (configurable between 4 MB and 1 GB) |