NetBackup™ for Cloud Object Store Administrator's Guide
- Introduction
- Managing Cloud object store assets
 - Planning NetBackup protection for Cloud object store assets
 - Enhanced backup performance in 11.0 or later
 - Prerequisites for adding Cloud object store accounts
 - Configuring buffer size for backups
 - Configure a temporary staging location
 - Configuring advanced parameters for Cloud object store
 - Permissions required for Amazon S3 cloud provider user
 - Permissions required for Azure blob storage
 - Permissions required for GCP
 - Limitations and considerations
 - Adding Cloud object store accounts
 - Manage Cloud object store accounts
 - Scan for malware
 
- Protecting Cloud object store assets
 - About accelerator support
 - About incremental backup
 - About dynamic multi-streaming
 - About storage lifecycle policies
 - About policies for Cloud object store assets
 - Planning for policies
 - Prerequisites for Cloud object store policies
 - Creating a backup policy
 - Policy attributes
 - Creating schedule attributes for policies
 - Configuring the Start window
 - Configuring the exclude dates
 - Configuring the include dates
 - Configuring the Cloud objects tab
 - Adding conditions
 - Adding tag conditions
 - Examples of conditions and tag conditions
 - Managing Cloud object store policies
 
- Recovering Cloud object store assets
- Troubleshooting
 - Error 5541: Cannot take backup, the specified staging location does not have enough space
 - Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
 - Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
 - Reduced acceleration during the first full backup after an upgrade to version 10.5 or 11
 - After backup, some files in the shm folder and shared memory are not cleaned up.
 - After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
 - Backup fails with default number of streams with the error: Failed to start NetBackup COSP process.
 - Backup fails, after you select a scale out server or Snapshot Manager as a backup host
 - Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
 - Recovery for the original bucket recovery option starts, but the job fails with error 3601
 - Recovery Job does not start
 - Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
 - Access tier property not restored after overwriting the existing object in the original location
 - Reduced accelerator optimization in Azure for OR query with multiple tags
 - Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
 - Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
 - The Cloud object store account has encountered an error
 - The bucket list is empty during policy selection
 - Creating a second account on Cloudian fails by selecting an existing region
 - Restore failed with 2825 incomplete restore operation
 - Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
 - AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
 - Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
 - Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
 - Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
 - Empty directories are not backed up in Azure Data Lake
 - Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
 - Recovery error: "Invalid parameter specified"
 - Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
 - Cloud store account creation fails with incorrect credentials
 - Discovery failures due to improper permissions
 - Restore failures due to object lock
 
 
Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
Workaround
Use either of these two workarounds:
- Use a path-style URL to access the bucket: Because a path-style URL includes the bucket name in the URL path rather than in the host name, SSL validation succeeds even for buckets with a dot (.) in the name. However, the NetBackup default configuration uses the virtual-hosted style for all dual-stack URLs, such as s3.dualstack.<region-id>.amazonaws.com. You can add an older S3 URL as path style and connect to a bucket with a dot (.) in the name. To do this, add a region with a plain S3 endpoint (s3.<region-id>.amazonaws.com) and select the path style as the URL Access Style.
- Disable SSL: This workaround is not recommended, because it replaces the secure endpoint with an insecure, unencrypted endpoint. Turning off SSL also disables peer-host validation of the server certificate: it bypasses the host name match between the virtual-hosted-style URL of the bucket (bucket.123.s3.dualstack.us-east-1.amazonaws.com) and the subject name in the certificate (*.s3.dualstack.us-east-1.amazonaws.com).
s3.dualstack.<region-id>.amazonaws.com. We can add an older S3 URL as a path style and can connect with a bucket with a (.) in the name. To do this, you can add a region with a plain S3 endpoint (s3.<region-id>.amazonaws.com) and select the URL Access Style as the path style.Disable SSL: This workaround is not the recommended one, since it replaces the secure endpoint with an unsecure/unencrypted endpoint. After turning off SSL, it disables the peer-host validation of the server certificate. It bypasses the host name match for the virtual host-style URL of bucket (bucket.123.s3.dualstack.us-east-1.amazonaws.com) with the subject name in the certificate (*. s3.dualstack.us-east-1.amazonaws.com).