NetBackup™ for Cloud Object Store Administrator's Guide
Last Published: 2025-03-18
Product(s): NetBackup (11.0)
- Introduction
- Managing Cloud object store assets
- Planning NetBackup protection for Cloud object store assets
- Enhanced backup performance in 11.0 or later
- Prerequisites for adding Cloud object store accounts
- Configuring buffer size for backups
- Configure a temporary staging location
- Configuring advanced parameters for Cloud object store
- Permissions required for Amazon S3 cloud provider user
- Permissions required for Azure blob storage
- Permissions required for GCP
- Limitations and considerations
- Adding Cloud object store accounts
- Manage Cloud object store accounts
- Scan for malware
- Protecting Cloud object store assets
- About accelerator support
- About incremental backup
- About dynamic multi-streaming
- About storage lifecycle policies
- About policies for Cloud object store assets
- Planning for policies
- Prerequisites for Cloud object store policies
- Creating a backup policy
- Policy attributes
- Creating schedule attributes for policies
- Configuring the Start window
- Configuring the exclude dates
- Configuring the include dates
- Configuring the Cloud objects tab
- Adding conditions
- Adding tag conditions
- Examples of conditions and tag conditions
- Managing Cloud object store policies
- Recovering Cloud object store assets
- Troubleshooting
- Error 5541: Cannot take backup, the specified staging location does not have enough space
- Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
- Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
- Reduced acceleration during the first full backup, after upgrade to versions 10.5 and 11.
- After backup, some files in the shm folder and shared memory are not cleaned up.
- After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
- Backup fails with the default number of streams with the error: Failed to start NetBackup COSP process.
- Backup fails, after you select a scale out server or Snapshot Manager as a backup host
- Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
- Recovery for the original bucket recovery option starts, but the job fails with error 3601
- Recovery Job does not start
- Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
- Access tier property not restored after overwriting the existing object in the original location
- Reduced accelerator optimization in Azure for OR query with multiple tags
- Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
- Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
- The Cloud object store account has encountered an error
- The bucket list is empty during policy selection
- Creating a second account on Cloudian fails by selecting an existing region
- Restore failed with 2825 incomplete restore operation
- Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
- AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
- Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
- Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
- Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
- Empty directories are not backed up in Azure Data Lake
- Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
- Recovery error: "Invalid parameter specified"
- Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
- Cloud store account creation fails with incorrect credentials
- Discovery failures due to improper permissions
- Restore failures due to object lock
Restore failures due to object lock
Explanation:
During restore, if you select the option to apply object lock properties, NetBackup attempts to set the object lock (retention) properties on the restored objects. If the credentials used for the restore do not have the required permissions, the restore fails or is only partially successful.
Viewing the Activity monitor logs:
Warning bpbrm (pid=21103) from client ip-10-176-97-167.us-east-2.compute.internal: WRN - Cannot set Object lock on the object. Access to perform the operation was denied.
Jul 25, 2023 11:26:00 AM - Error bpbrm (pid=21103) from client ip-10-176-97-167.us-east-2.compute.internal: ERR - Cannot complete restore for any of the objects.
Jul 25, 2023 11:26:00 AM - Warning bpbrm (pid=21103) from client ip-10-176-97-167.us-east-2.compute.internal: WRN - The 3 files restored partially as object lock cannot be applied.
Jul 25, 2023 11:26:00 AM - Info tar (pid=1697) done. status 5
Viewing the nbcosp logs:
{"level":"info","SDK log body":"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied
</Code><Message>Access Denied</Message><RequestId>ZNT4GXHP70HX573A</RequestId>
<HostId>
3scBmke9LmOwtuK5lnYv0ozyKgbne+ey04qXtSt6s/OQbpSCyfxiwvdi2CPG3cHU+H/ztz7C3mHeoX5Cnvb2xg==</HostId>
</Error>\n","time":"2023-07-25T05:56:00.708117368Z","caller":
"internal/logging.ExtendedLog.Log:zerolog_wrapper.go:18","message":"SDK log entry"}
{"level":"debug","status code":403,"errmsg":"AccessDenied:
Access Denied\n\tstatus code: 403, request id: ZNT4GXHP70HX573A,
host id: 3scBmke9LmOwtuK5lnYv0ozyKgbne+ey04qXtSt6s/OQbpSCyfxiwvdi2CPG3cHU+H/ztz7C3mHeoX5Cnvb2xg==",
"time":"2023-07-25T05:56:00.708145345Z","caller":"main.s3StatusCode:s3_ops.go:8447",
"message":"s3StatusCode(): get http status code"}
{"level":"error","error":"AccessDenied: Access Denied\n\tstatus code: 403,
request id: ZNT4GXHP70HX573A,
host id: 3scBmke9LmOwtuK5lnYv0ozyKgbne+ey04qXtSt6s/OQbpSCyfxiwvdi2CPG3cHU+H/ztz7C3mHeoX5Cnvb2xg==",
"object key":"cudtomer35jul/squash.txt","time":"2023-07-25T05:56:00.708160142Z",
"caller":"main.(*OCSS3).commitBlockList:s3_ops.go:2655",
"message":"s3Storage.svc.PutObjectRetention Failed to Put ObjectRetention"}Workaround:
You must have the required permissions for object retention. The IAM user or role that performs the restore must have the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ObjectLock",
            "Effect": "Allow",
            "Action": [
                "s3:PutObjectRetention",
                "s3:BypassGovernanceRetention"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
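After you attach the policy, you can verify that the credentials can apply object retention before you rerun the restore. The following is a minimal sketch using Python and boto3 (not part of NetBackup); the bucket name and object key are placeholders, and the sketch assumes that Object Lock is enabled on the test bucket.

# Minimal verification sketch (assumptions: boto3 is installed, Object Lock is
# enabled on the bucket, and the bucket/key values below are placeholders).
from datetime import datetime, timedelta, timezone

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")        # uses the same credentials as the Cloud object store account
bucket = "example-bucket"      # placeholder: a bucket with Object Lock enabled
key = "test/probe.txt"         # placeholder: an existing, disposable test object

try:
    # The restore issues an equivalent PutObjectRetention call for each object
    # (see the "Failed to Put ObjectRetention" entry in the nbcosp logs above).
    s3.put_object_retention(
        Bucket=bucket,
        Key=key,
        Retention={
            "Mode": "GOVERNANCE",
            "RetainUntilDate": datetime.now(timezone.utc) + timedelta(minutes=5),
        },
    )
    print("PutObjectRetention succeeded: the required permissions are in place.")
except ClientError as err:
    # AccessDenied here means s3:PutObjectRetention is still missing for this role.
    print("PutObjectRetention failed:", err.response["Error"]["Code"])

Note that the test object remains retained until the RetainUntilDate that you set, so run this check only against a disposable object.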