NetBackup™ Web UI Cloud Object Store Administrator's Guide
- Introduction
- Managing Cloud object store assets
- Protecting Cloud object store assets
- About accelerator support
- About incremental backup
- About policies for Cloud object store assets
- Planning for policies
- Prerequisites for Cloud object store policies
- Creating a backup policy
- Setting up attributes
- Creating schedule attributes for policies
- Configuring the Start window
- Configuring exclude dates
- Configuring include dates
- Configuring the Cloud objects tab
- Adding conditions
- Adding tag conditions
- Example of conditions and tag conditions
- Managing Cloud object store policies
- Recovering Cloud object store assets
- Troubleshooting
- Recovery for Cloud object store using web UI for original bucket recovery option starts but job fails with error 3601
- Recovery job does not start
- Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
- Access tier property not restored after overwrite existing to original location
- Reduced accelerator optimization in Azure for OR query with multiple tags
- Backup fails and shows a certificate error with Amazon S3 bucket names containing dots (.)
- Azure backup job fails when a space is provided in a tag query for either the tag key name or value
- The Cloud object store account has encountered an error
- Bucket list empty when selecting it in policy selection
- Creating second account on Cloudian fails by selecting existing region
- Restore failed with 2825 incomplete restore operation
- Bucket listing of cloud provider fails when adding bucket in Cloud objects tab
- AIR import image restore fails on the target domain if the Cloud store account is not added in the target domain
- Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
- Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
- Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
- Empty directories are not backed up in Azure Data Lake
- Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
- Recovery error: "Invalid parameter specified"
- Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
- Cloud store account creation fails with incorrect credentials
- Discovery failures due to improper permissions
- Restore failures due to object lock
Limitations and considerations
Consider the following when protecting Cloud object store workloads.
If you upgrade to NetBackup version 10.3 from version 10.1 or 10.2, the following limitations apply:
- You can create Cloud object store accounts only with backup hosts or scale-out servers of version 10.3 or later. You cannot update an existing Cloud object store account that is created on NetBackup 10.3 or later with backup hosts or scale-out servers older than version 10.3.
- You can create policies only with backup hosts or scale-out servers of version 10.3 or later. You cannot update an existing policy that is created on NetBackup 10.3 or later with backup hosts or scale-out servers older than version 10.3.
- The following credential types are supported only with backup hosts or scale-out servers of version 10.3 or later: for Azure, Service principal and Managed identity; for AWS, Assume role (EC2).
- Restores with object lock properties are supported only with backup hosts or scale-out servers of version 10.3 or later.
- Backup and restore of buckets with default retention enabled are supported only with backup hosts or scale-out servers of version 10.3 or later.
- For Azure, if you update a policy created with a NetBackup version prior to 10.3 using a backup host or scale-out server of version 10.3 or later, the backups fail. As a workaround, update all the buckets in the policy to use the new generated ID format, keeping the existing queries. Note that for this workaround to succeed, you must create the associated Cloud object store account in the policy using NetBackup 10.3 or later.
- Discovery is supported on NetBackup version 10.3 or later, deployed on RHEL. If no supported host is available, discovery does not start for any of the configured Cloud object store accounts. In this case, the discovery status is not available, and you cannot see a bucket list during policy creation. Even if you add the buckets manually after discovery fails, your backups may fail. Upgrade at least one backup host or scale-out server to a supported version, and create a new policy.
If an Azure blob has the Content-Encoding property set to "gzip" or "x-gzip", NetBackup does not download or back up that blob, and the backup job is marked as partially successful. You can see the following message in the Activity monitor:
Error bpbrm (pid=XXX) from client XYZ_CLIENT: ERR - File /ABC.zzz shrunk by <size> bytes, padding with zeros.
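To identify affected blobs before you run a backup, you can inspect the Content-Encoding property of each blob with the Azure SDK. The following is a minimal sketch, assuming the azure-storage-blob Python package and connection-string authentication; CONN_STR and CONTAINER are placeholder values, not NetBackup settings:

```python
# Sketch: list Azure blobs whose Content-Encoding is gzip or x-gzip.
# Assumes the azure-storage-blob package; CONN_STR and CONTAINER are
# placeholders for your own connection string and container name.
from azure.storage.blob import BlobServiceClient

CONN_STR = "<your-connection-string>"
CONTAINER = "<your-container>"

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client(CONTAINER)

for blob in container.list_blobs():
    encoding = (blob.content_settings.content_encoding or "").lower()
    if encoding in ("gzip", "x-gzip"):
        # These blobs are skipped during backup and cause the job
        # to be marked as partially successful.
        print(blob.name, encoding)
```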
If you update a policy that is created on a NetBackup version prior to 10.3, consider the following after a backup:
- You may see two versions of the same bucket, one in the old format and one in the new format. To restore older data, select the bucket in the old format; for newer backups, select the bucket in the new format.
- The first backup after the policy update is a full backup, irrespective of what is configured in the policy.
When you upgrade to 10.3, the first accelerator-enabled Azure blob backup backs up all objects in the selection, even if the configured backup is incremental. This full backup is required because of the change in metadata properties for Azure blobs between NetBackup versions 10.2 and 10.3. Subsequent incremental backups back up only the changed objects.
If you use a Cloud object store account created in a version older than 10.3, NetBackup discovers the buckets in the old format, where uniqueName=bucketName.
NetBackup does not allow the following combinations as part of a prefix or object query:
- prefix = /
- prefix = /folder1
- prefix = /object1
- prefix = folder1//
- object = /obj1
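In short, a query value must not begin with a slash and must not contain consecutive slashes. The following is an illustrative pre-flight check you could run on query values before adding them to a policy; the function name and the rules-as-code are a sketch based on the list above, not part of any NetBackup API:

```python
def is_valid_cos_query_value(value: str) -> bool:
    """Illustrative check mirroring the documented restrictions:
    no leading slash, no consecutive slashes."""
    if value.startswith("/"):   # rejects "/", "/folder1", "/obj1"
        return False
    if "//" in value:           # rejects "folder1//"
        return False
    return True

# Every combination in the disallowed list above fails the check:
for q in ["/", "/folder1", "/object1", "folder1//", "/obj1"]:
    assert not is_valid_cos_query_value(q)

# An allowed form (assumption: a plain folder prefix with a single
# trailing slash):
assert is_valid_cos_query_value("folder1/")
```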