NetBackup™ Web UI Cloud Object Store Administrator's Guide
- Introduction
- Managing Cloud object store assets
- Protecting Cloud object store assets
- About accelerator support
- About incremental backup
- About policies for Cloud object store assets
- Planning for policies
- Prerequisites for Cloud object store policies
- Creating a backup policy
- Setting up attributes
- Creating schedule attributes for policies
- Configuring the Start window
- Configuring exclude dates
- Configuring include dates
- Configuring the Cloud objects tab
- Adding conditions
- Adding tag conditions
- Example of conditions and tag conditions
- Managing Cloud object store policies
- Recovering Cloud object store assets
- Troubleshooting
- Recovery for Cloud object store using web UI for original bucket recovery option starts but job fails with error 3601
- Recovery Job does not start
- Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
- Access tier property not restored after overwrite existing to original location
- Reduced accelerator optimization in Azure for OR query with multiple tags
- Backup fails and shows a certificate error with Amazon S3 bucket names containing dots (.)
- Azure backup job fails when a space is provided in a tag query for either the tag key name or value
- The Cloud object store account has encountered an error
- Bucket list empty when selecting it in policy selection
- Creating second account on Cloudian fails by selecting existing region
- Restore failed with 2825 incomplete restore operation
- Bucket listing of cloud provider fails when adding bucket in Cloud objects tab
- AIR import image restore fails on the target domain if the Cloud store account is not added in the target domain
- Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
- Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
- Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
- Empty directories are not backed up in Azure Data Lake
- Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
- Recovery error: "Invalid parameter specified"
- Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
- Cloud store account creation fails with incorrect credentials
- Discovery failures due to improper permissions
- Restore failures due to object lock
Backup fails and shows a certificate error with Amazon S3 bucket names containing dots (.)
Workaround
Use either of these workarounds:
- Use a path-style URL to access the bucket: A path-style URL includes the bucket name in the URL path rather than in the host name, so SSL validation succeeds even for buckets with a dot (.) in the name. However, the NetBackup default configuration uses virtual-hosted style for all dual-stack URLs such as s3.dualstack.<region-id>.amazonaws.com. You can instead add the older S3 URL as path style and connect to a bucket with a dot (.) in the name. To do this, add a region with the plain S3 endpoint (s3.<region-id>.amazonaws.com) and select path style as the URL Access Style.
- Disable SSL: This workaround is not recommended, because it replaces the secure endpoint with an unsecured, unencrypted endpoint. Turning off SSL disables peer-host validation of the server certificate. This bypasses the host name match between the virtual-hosted style URL of the bucket (bucket.123.s3.dualstack.us-east-1.amazonaws.com) and the subject name in the certificate (*.s3.dualstack.us-east-1.amazonaws.com).
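The root cause can be illustrated with a small sketch (not NetBackup code): per the TLS host name matching rules, the wildcard in a certificate subject such as *.s3.dualstack.us-east-1.amazonaws.com matches exactly one DNS label. With virtual-hosted style URLs the bucket name becomes part of the host name, so a dot in the bucket name adds an extra label and the match fails. The `wildcard_matches` helper below is a simplified illustration of that single-label rule, not the actual validation code.

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified TLS host name check: '*' matches exactly one DNS label."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    # A wildcard covers a single label, so the label counts must be equal.
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))


cert_subject = "*.s3.dualstack.us-east-1.amazonaws.com"

# Bucket name without a dot: one extra label, covered by the wildcard.
print(wildcard_matches(cert_subject, "mybucket.s3.dualstack.us-east-1.amazonaws.com"))   # True

# Bucket name with a dot ("my.bucket"): two extra labels, wildcard fails.
print(wildcard_matches(cert_subject, "my.bucket.s3.dualstack.us-east-1.amazonaws.com"))  # False
```

With a path-style URL the host name stays a fixed endpoint such as s3.us-east-1.amazonaws.com regardless of the bucket name, which is why that workaround avoids the certificate error.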