NetBackup™ for Cloud Object Store Administrator's Guide
- Introduction
- Managing Cloud object store assets
- Planning NetBackup protection for Cloud object store assets
- Enhanced backup performance in 11.0 or later
- Prerequisites for adding Cloud object store accounts
- Configuring buffer size for backups
- Configure a temporary staging location
- Configuring advanced parameters for Cloud object store
- Permissions required for Amazon S3 cloud provider user
- Permissions required for Azure blob storage
- Permissions required for GCP
- Limitations and considerations
- Adding Cloud object store accounts
- Manage Cloud object store accounts
- Scan for malware
- Protecting Cloud object store assets
- About accelerator support
- About incremental backup
- About dynamic multi-streaming
- About storage lifecycle policies
- About policies for Cloud object store assets
- Planning for policies
- Prerequisites for Cloud object store policies
- Creating a backup policy
- Policy attributes
- Creating schedule attributes for policies
- Configuring the Start window
- Configuring the exclude dates
- Configuring the include dates
- Configuring the Cloud objects tab
- Adding conditions
- Adding tag conditions
- Examples of conditions and tag conditions
- Managing Cloud object store policies
- Recovering Cloud object store assets
- Troubleshooting
- Error 5541: Cannot take backup, the specified staging location does not have enough space
- Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
- Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
- Reduced acceleration during the first full backup, after upgrade to versions 10.5 and 11.
- After backup, some files in the shm folder and shared memory are not cleaned up.
- After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
- Backup fails with default number of streams with the error: Failed to start NetBackup COSP process.
- Backup fails, after you select a scale out server or Snapshot Manager as a backup host
- Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
- Recovery for the original bucket recovery option starts, but the job fails with error 3601
- Recovery Job does not start
- Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
- Access tier property not restored after overwriting the existing object in the original location
- Reduced accelerator optimization in Azure for OR query with multiple tags
- Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
- Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
- The Cloud object store account has encountered an error
- The bucket list is empty during policy selection
- Creating a second account on Cloudian fails by selecting an existing region
- Restore failed with 2825 incomplete restore operation
- Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
- AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
- Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
- Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
- Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
- Empty directories are not backed up in Azure Data Lake
- Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
- Recovery error: "Invalid parameter specified"
- Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
- Cloud store account creation fails with incorrect credentials
- Discovery failures due to improper permissions
- Restore failures due to object lock
Configure a temporary staging location
This configuration is applicable only to Cloud object store protection that uses dynamic multi-streaming and backup hosts of NetBackup version 11.0 or later.
Enhanced dynamic multi-streaming backup in NetBackup 11.0 or later uses a staging location to exchange object data between the object store reader and the NetBackup data mover. This staging location must be a file system path on the Cloud object store backup host. The storage space in the staging location is reclaimed during a backup, as the object data and metadata are moved to the backup target. You can configure the storage space and the watermark value depending on the object store environment that you protect.
For more information about configuring the temporary storage location, see the Knowledge article:
https://www.veritas.com/content/support/en_US/article.100073852.html
Default paths:
- For BYO, the default path is /usr/openv/netbackup/db/cos_tmp_staging_path.
- For Cloud Scale primary and media servers, the default temporary staging location /usr/openv/netbackup/db/cos_tmp_staging_path is symlinked to the mounted location /mnt/nbdata/usr/openv/netbackup/db/cos_tmp_staging_path.
- For Flex media servers, the default temporary staging location /usr/openv/netbackup/db/cos_tmp_staging_path is symlinked to the mounted location /mnt/nbstage/usr/openv/netbackup/db/cos_tmp_staging_path.
- For the Flex primary server, the temporary staging path /usr/openv/netbackup/db/cos_tmp_staging_path is symlinked to the mounted location /mnt/nbdata/usr/openv/netbackup/db/cos_tmp_staging_path.
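For example, on a Cloud Scale or Flex host you can confirm where the default staging path resolves by listing the symlink. This is a quick check using standard Linux commands and the default paths listed above:
ls -ld /usr/openv/netbackup/db/cos_tmp_staging_path
readlink -f /usr/openv/netbackup/db/cos_tmp_staging_path
The second command prints the mounted location that the default path points to, for example /mnt/nbdata/usr/openv/netbackup/db/cos_tmp_staging_path on a Cloud Scale media server.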
For backup hosts other than Flex, it is recommended to use a path different from the default, because the default path may not comply with the storage requirements. It is also recommended to create a separate mount point with enough space. Backup performance is affected if the minimum required space is not available.
Note:
Ensure that the automount configuration entry is updated on the backup host if you use a path other than the default.
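For example, on a BYO Linux backup host you can dedicate a separate device to the staging location and make the mount persistent. This is a minimal sketch; the device /dev/sdb1, the xfs file system type, and the mount point /mnt/cos_staging are hypothetical values, so adjust them for your environment:
sudo mkdir -p /mnt/cos_staging
sudo mount /dev/sdb1 /mnt/cos_staging
Add a matching entry to /etc/fstab (or to your automount configuration) so that the mount persists across reboots:
/dev/sdb1  /mnt/cos_staging  xfs  defaults,noatime  0 2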
Use the following guidelines for sizing the temporary staging location on a backup host:
- The minimum space requirement is 20 GB per backup job, per bucket or container that the backup host backs up.
- For increased backup speed, you can configure 100 GB to 500 GB of space for staging.
- The device that is used for the temporary staging location must have high read and write throughput. It is recommended to use an SSD. If you still experience slow disk I/O throughput, mount RAM as a disk and configure it as the temporary staging location. Here is an example command to mount RAM as a disk:
  sudo mount -t tmpfs -o size=64G tmpfs /mnt/tmp
  For 64-GB and 128-GB RAM setups, it is recommended to mount half of the RAM as a disk. For a 32-GB RAM setup, it is not recommended to mount RAM as a disk, because low backup performance is expected for large objects.
- Ensure that there is enough free space at the location.
- The file path must contain only ASCII characters.
- Ensure that the NetBackup service user is the owner of the staging location and that the permissions are set to 0700. Writing fails if the permissions are set to anything other than 0700. See the example commands after this list.
- The temporary staging location is cleaned up after a backup completes.
- Each backup host must have a different mount point for the temporary staging location.
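For example, the following commands set the required ownership and permissions on a staging location. The user name nbsvcusr and the path /mnt/cos_staging are hypothetical; substitute your actual NetBackup service user and staging path:
sudo chown nbsvcusr:nbsvcusr /mnt/cos_staging
sudo chmod 0700 /mnt/cos_staging
ls -ld /mnt/cos_staging
The listing should show permissions drwx------ with the service user as the owner; any other permission value causes writes to the staging location to fail.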
Optionally, you can set the following parameters in the bp.conf file on each backup host to configure the temporary staging location.