NetBackup™ for Cloud Object Store Administrator's Guide
- Introduction
- Managing Cloud object store assets
- Planning NetBackup protection for Cloud object store assets
- Enhanced backup performance in 11.0 or later
- Prerequisites for adding Cloud object store accounts
- Configuring buffer size for backups
- Configure a temporary staging location
- Configuring advanced parameters for Cloud object store
- Permissions required for Amazon S3 cloud provider user
- Permissions required for Azure blob storage
- Permissions required for GCP
- Limitations and considerations
- Adding Cloud object store accounts
- Manage Cloud object store accounts
- Scan for malware
- Protecting Cloud object store assets
- About accelerator support
- About incremental backup
- About dynamic multi-streaming
- About storage lifecycle policies
- About policies for Cloud object store assets
- Planning for policies
- Prerequisites for Cloud object store policies
- Creating a backup policy
- Policy attributes
- Creating schedule attributes for policies
- Configuring the Start window
- Configuring the exclude dates
- Configuring the include dates
- Configuring the Cloud objects tab
- Adding conditions
- Adding tag conditions
- Examples of conditions and tag conditions
- Managing Cloud object store policies
- Recovering Cloud object store assets
- Troubleshooting
- Error 5541: Cannot take backup, the specified staging location does not have enough space
- Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
- Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
- Reduced acceleration during the first full backup, after upgrade to versions 10.5 and 11.
- After backup, some files in the shm folder and shared memory are not cleaned up.
- After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
- Backup fails with default number of streams with the error: Failed to start NetBackup COSP process.
- Backup fails, after you select a scale out server or Snapshot Manager as a backup host
- Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
- Recovery for the original bucket recovery option starts, but the job fails with error 3601
- Recovery Job does not start
- Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
- Access tier property not restored after overwriting the existing object in the original location
- Reduced accelerator optimization in Azure for OR query with multiple tags
- Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
- Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
- The Cloud object store account has encountered an error
- The bucket list is empty during policy selection
- Creating a second account on Cloudian fails by selecting an existing region
- Restore failed with 2825 incomplete restore operation
- Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
- AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
- Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
- Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
- Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
- Empty directories are not backed up in Azure Data Lake
- Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
- Recovery error: "Invalid parameter specified"
- Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
- Cloud store account creation fails with incorrect credentials
- Discovery failures due to improper permissions
- Restore failures due to object lock
Recovering Cloud object store assets
You can recover Cloud object store assets to the original bucket or container, or to a different one. You can also restore individual objects to different buckets or containers.
To recover assets:
- On the left, click Recovery. Under Regular recovery, click Start recovery.
- In the Basic properties page, select Policy type as Cloud-Object-Store.
- Click the Buckets/Containers field to select assets to restore.
In the Add bucket/container dialog, the default option displays all available buckets/containers with completed backups. You can search the table using the search box.
To add a specific bucket or container, select the Add the bucket/container details option. If you have selected an Azure Data Lake workload, select Add files/directories.
Select the cloud provider, then enter the bucket/container name and the Cloud object store account name. For Azure workloads, specify the storage account name, if available in the UI.
Note:
In a rare scenario, you may not find the required bucket listed in the table for selection, even though the same bucket appears in the catalog view as part of a backup ID. In that case, you can select the bucket by manually entering the bucket name, provider ID, and Cloud object store account name as they appear in the backup ID. The backup ID is formed as <providerId>_<cloudAccountname>_<uniquename>_<timestamp>. For Azure, the unique name is storageaccountname.bucketname; for S3 providers, it is the bucket name.
- Click Add, and then click Next.
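The backup ID format described in the note above can be split programmatically when you need to recover the bucket name and account from a backup ID. A minimal sketch, assuming the provider ID and account name contain no underscores (the function name and sample IDs are illustrative, not part of NetBackup):

```python
def parse_backup_id(backup_id, provider="s3"):
    """Split a Cloud object store backup ID into its components.

    Format (from the note above):
        <providerId>_<cloudAccountname>_<uniquename>_<timestamp>
    Assumes the provider ID and account name themselves contain no
    underscores, and that the timestamp is the final component.
    """
    provider_id, account, rest = backup_id.split("_", 2)
    unique_name, timestamp = rest.rsplit("_", 1)
    parts = {
        "provider_id": provider_id,
        "account": account,
        "unique_name": unique_name,
        "timestamp": timestamp,
    }
    if provider == "azure":
        # For Azure, the unique name is storageaccountname.bucketname.
        storage_account, _, bucket = unique_name.partition(".")
        parts["storage_account"] = storage_account
        parts["bucket"] = bucket
    else:
        # For S3 providers, the unique name is the bucket name itself.
        parts["bucket"] = unique_name
    return parts
```

Pass the provider explicitly rather than guessing from a dot in the unique name, since S3 bucket names may themselves contain dots.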
- In the Add objects page, select the Start date and the End date of the period from which you want to restore.
(Optional) Enter a keyword phrase to filter the images, and click Apply.
- Click Backup history, and select the required images for recovery from the Backup history dialog. Click Select.
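The Start date and End date fields above narrow the backup history to images taken within that period. The same filtering logic can be sketched as follows (the image records and field names are made up for illustration):

```python
from datetime import datetime

def filter_images_by_date(images, start, end):
    """Keep only the backup images whose backup time falls in [start, end]."""
    return [img for img in images if start <= img["backup_time"] <= end]

# Hypothetical sample records for illustration.
images = [
    {"backup_id": "amazon_acct_b1_1", "backup_time": datetime(2024, 1, 5)},
    {"backup_id": "amazon_acct_b1_2", "backup_time": datetime(2024, 2, 10)},
    {"backup_id": "amazon_acct_b1_3", "backup_time": datetime(2024, 3, 20)},
]
selected = filter_images_by_date(images, datetime(2024, 2, 1), datetime(2024, 3, 1))
```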
- In the Recovery details page, you can add the objects and folders or prefix and scan the selected images for malware before restoring the images:
(Optional) Click Add objects and folders, and select the required objects to recover from the Add Object/blobs and folders dialog. Select Include all objects/blobs and folders to include all available assets. For an Azure Data Lake workload, this option is available as Include all files/directories. You can use the left navigation tree structure to filter the table. Click Add.
The following warning message is displayed when images that have not been scanned are selected for recovery:
One or more images selected for recovery are not scanned.
Note:
To restore from malware-affected images, you must have the Administrator role or equivalent RBAC permissions.
For more information on recovering from malware-infected images, see the Security and Encryption Guide.
(Optional) Click Add prefix. In the Add prefix dialog, enter a prefix in the search box to display relevant results in the table. Click Add to select all the matching prefixes displayed in the table for recovery. The selected prefixes are displayed in a table below the selected objects/blobs. Click Next.
Note:
Clean file recovery (Skip infected files) as part of recovery is not supported for Cloud-Object-Store.
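Prefix selection above picks up every object whose key begins with the entered string. A minimal illustration of that matching (the keys are made up):

```python
def select_by_prefix(object_keys, prefix):
    """Return the object keys that a prefix match would pick up."""
    return [key for key in object_keys if key.startswith(prefix)]

# Hypothetical object keys for illustration.
keys = [
    "logs/2024/01/app.log",
    "logs/2024/02/app.log",
    "reports/2024/summary.csv",
]
```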
- In the Recovery options page, you can select whether you want to restore to the source bucket or container, or use a different one. These are the Object restore options:
Restore to the original bucket or container: Select to recover to the same bucket or container from where the backup was taken.
Optionally:
Add a prefix for the recovered assets in the Add a prefix field.
If you have selected an Azure Data Lake workload, enter the Directory to restore.
Restore to a different bucket or container: Select to recover to a different bucket or container than the one from where the backup was taken.
You can select a different Cloud object store account as the destination from the list.
Select a destination Bucket/Container name. You can use a different Cloud object store account that has access to the original bucket. This method also lets you create accounts with limited, specific permissions for backup and restore. In this case, you can provide the same bucket as the original to restore to the original bucket/container.
Optionally, add a prefix for the recovered assets in the Add a prefix field.
Restore object/blobs or prefixes to different destinations: Select to recover each of your selected assets to different destinations.
You can select a different Cloud object store account as the destination from the list.
Click Edit object destination, enter the Destination and Destination bucket/container name. Click Save.
Note:
If you have selected Include all objects/blobs and folders in step 7, the Restore objects/blobs or prefixes to different destinations option is disabled.
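The three restore options above differ only in which bucket each object lands in and whether a prefix is prepended to its key. A sketch of how a recovered object's destination could be derived (the function and parameter names are illustrative, not NetBackup internals):

```python
def recovery_target(original_bucket, key, dest_bucket=None, add_prefix=""):
    """Compute the destination for one object under the options above.

    - Original bucket/container: leave dest_bucket as None.
    - Different bucket/container: pass the target name in dest_bucket.
    - Per-object destinations: call once per object with its own dest_bucket.
    An optional prefix is prepended to the recovered object's key.
    """
    bucket = dest_bucket if dest_bucket is not None else original_bucket
    return bucket, add_prefix + key
```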
- Select a Recovery host. The recovery host that is associated with the Cloud object store account is displayed by default. If required, change the recovery host. If the Cloud object store account uses a scale-out server, this field is disabled.
- Optionally, to overwrite any existing objects or blobs with the recovered assets, select Overwrite existing objects/blobs.
- (Optional) To override the default priority of the restore job, select Override default priority, and assign the required value.
- In the Advanced restore options:
To apply the original object lock attributes from the backed-up objects, select Retain original object lock properties.
To change the values of different properties, select Customize object lock properties. From the Object lock mode list:
Select Compliance or Governance for Amazon or other S3 workloads.
Select Locked or Unlocked for Azure workloads.
Select a future date and time until which the object lock is valid. Note that the recovered object remains locked until this specified date and time.
Select Object lock legal hold status to implement it on the restored objects.
See Configuring Cloud object retention properties.
The Advanced restore options are not applicable to the Azure Data Lake workload.
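When you customize object lock properties, the valid lock modes depend on the provider family, and the retain-until date must lie in the future, as described above. A hedged validation sketch of those rules (the constants and function are illustrative, not a NetBackup API):

```python
from datetime import datetime, timezone

# Valid lock modes per provider family, as listed in the procedure above.
LOCK_MODES = {
    "s3": {"Compliance", "Governance"},   # Amazon and other S3 workloads
    "azure": {"Locked", "Unlocked"},      # Azure workloads
}

def validate_lock_settings(provider, mode, retain_until, now=None):
    """Reject customized lock settings the procedure above would not allow."""
    now = now or datetime.now(timezone.utc)
    if mode not in LOCK_MODES.get(provider, set()):
        raise ValueError(f"mode {mode!r} is not valid for provider {provider!r}")
    if retain_until <= now:
        raise ValueError("the retain-until date must be in the future")
    return True
```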
- In the Review page, view the summary of all the selections that you made, and click Start recovery.
You can see the progress of the restore job in the Activity monitor.