NetBackup™ for Cloud Object Store Administrator's Guide
- Introduction
- Managing Cloud object store assets
- Planning NetBackup protection for Cloud object store assets
- Enhanced backup performance in 11.0 or later
- Prerequisites for adding Cloud object store accounts
- Configuring buffer size for backups
- Configure a temporary staging location
- Configuring advanced dynamic multi-streaming parameters
- Permissions required for Amazon S3 cloud provider user
- Permissions required for Azure blob storage
- Permission required for Azure Data Lake Storage
- Permissions required for GCP
- Limitations and considerations
- Adding Cloud object store accounts
- Manage Cloud object store accounts
- Scan for malware
- Protecting Cloud object store assets
- About accelerator support
- About incremental backup
- About dynamic multi-streaming
- About object change tracking
- Configuring object change tracking
- Configuring access permissions for the buckets
- Configuring access policy on the log bucket
- Configuration guidelines for IBM Storage Ceph
- Enable bucket logging for source buckets
- Creating policy for the log bucket
- Additional storage requirements at the staging location
- Configuring bucket logging in IBM Storage Ceph
- Maintaining the log bucket
- Configuring NetBackup for object change tracking
- Configuring NetBackup policy for object change tracking
- Verifying object change tracking in the Activity monitor
- Scenarios when NetBackup overrides object change tracking
- About storage lifecycle policies
- About policies for Cloud object store assets
- Planning for policies
- Prerequisites for Cloud object store policies
- Creating a backup policy
- Policy attributes
- Creating schedule attributes for policies
- Configuring the Start window
- Configuring the exclude dates
- Configuring the include dates
- Configuring the Cloud objects tab
- Adding conditions
- Adding tag conditions
- Examples of conditions and tag conditions
- Managing Cloud object store policies
- Recovering Cloud object store assets
- Troubleshooting
- Error 5549: Cannot validate bucket logging information
- Error 5576: The maximum number of concurrent jobs specified for a storage unit, must be greater than or equal to the number of streams specified in the policy.
- Error 5579: Falling back to object listing for change detection, not considering object change tracking for this bucket, specified in the policy.
- Error 5580: The specified failover strategy for the storage unit group is incompatible with the Cloud object store policy, with dynamic multi-streaming.
- Error 5545: Backup failed as NetBackup cannot parse records from the log object
- Error 5541: Cannot take backup, the specified staging location does not have enough space
- Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
- Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
- Reduced acceleration during the first full backup, after upgrade to versions 10.5 and 11.
- After backup, some files in the shm folder and shared memory are not cleaned up.
- After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
- Backup fails with default number of streams with the error: Failed to start NetBackup COSP process.
- Backup fails, after you select a scale out server or Snapshot Manager as a backup host
- Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
- Recovery for the original bucket recovery option starts, but the job fails with error 3601
- Recovery Job does not start
- Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
- Access tier property not restored after overwriting the existing object in the original location
- Reduced accelerator optimization in Azure for OR query with multiple tags
- Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
- Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
- The Cloud object store account has encountered an error
- The bucket list is empty during policy selection
- Creating a second account on Cloudian fails by selecting an existing region
- Restore failed with 2825 incomplete restore operation
- Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
- AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
- Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
- Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
- Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
- Empty directories are not backed up in Azure Data Lake
- Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
- Recovery error: "Invalid parameter specified"
- Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
- Cloud store account creation fails with incorrect credentials
- Discovery failures due to improper permissions
- Restore failures due to object lock
Enable bucket logging for source buckets
Refer to the IBM Storage Ceph documentation to enable bucket logging on the source bucket using the Ceph console. This section explains how to enable bucket logging using the S3 API.
Method: PUT
API Endpoint:
http://<IP-of-CEPH-server>:<CEPH-RGW-Port>/<source-bucket-name>?logging
Where:
IP-of-CEPH-server - IP address of the Ceph server.
CEPH-RGW-Port - Port of the S3 gateway.
source-bucket-name - The source bucket on which you want to enable bucket logging.
API payload:
<?xml version="1.0" encoding="UTF-8"?>
<BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <LoggingEnabled>
    <TargetBucket><target-bucket-name></TargetBucket>
    <TargetPrefix><target-prefix></TargetPrefix>
    <ObjectRollTime>300</ObjectRollTime>
    <LoggingType>Journal</LoggingType>
    <RecordsBatchSize>0</RecordsBatchSize>
    <TargetObjectKeyFormat>
      <PartitionedPrefix>
        <PartitionDateSource>EventTime</PartitionDateSource>
      </PartitionedPrefix>
    </TargetObjectKeyFormat>
  </LoggingEnabled>
</BucketLoggingStatus>
Where:
<target-bucket-name> is the bucket where the log objects are written.
<target-prefix> defines the prefix (path) under which log objects are stored in the target bucket. For example, a log object may be stored as:
s3://target-bucket-name/targetPrefix/2025-10-28-12-34-56-123456789ABCDEF
ObjectRollTime specifies, in seconds, how often the logging system rotates log objects: it closes the current log object and starts a new one.
LoggingType specifies the format or mode used to write logs to the target bucket. Must be Journal.
RecordsBatchSize specifies the number of log records (events) the system collects before writing them to the target log object.
PartitionDateSource specifies which timestamp the logging system uses to create date-based partitions (subfolders) for log objects in the target bucket. Must be EventTime.
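The payload above can also be generated programmatically before you send the PUT request. The following is a minimal sketch in Python (the bucket name and prefix are illustrative; the helper name is not part of the product):

```python
import xml.etree.ElementTree as ET

S3_NS = "http://s3.amazonaws.com/doc/2006-03-01/"

def build_logging_payload(target_bucket, target_prefix,
                          roll_time=300, batch_size=0):
    """Build the BucketLoggingStatus XML body for the PUT ?logging call."""
    ET.register_namespace("", S3_NS)  # emit xmlns="..." as the default namespace
    root = ET.Element(f"{{{S3_NS}}}BucketLoggingStatus")
    enabled = ET.SubElement(root, f"{{{S3_NS}}}LoggingEnabled")
    ET.SubElement(enabled, f"{{{S3_NS}}}TargetBucket").text = target_bucket
    ET.SubElement(enabled, f"{{{S3_NS}}}TargetPrefix").text = target_prefix
    ET.SubElement(enabled, f"{{{S3_NS}}}ObjectRollTime").text = str(roll_time)
    # LoggingType must be Journal and PartitionDateSource must be EventTime,
    # as required by the payload description above.
    ET.SubElement(enabled, f"{{{S3_NS}}}LoggingType").text = "Journal"
    ET.SubElement(enabled, f"{{{S3_NS}}}RecordsBatchSize").text = str(batch_size)
    key_fmt = ET.SubElement(enabled, f"{{{S3_NS}}}TargetObjectKeyFormat")
    part = ET.SubElement(key_fmt, f"{{{S3_NS}}}PartitionedPrefix")
    ET.SubElement(part, f"{{{S3_NS}}}PartitionDateSource").text = "EventTime"
    return ET.tostring(root, encoding="unicode", xml_declaration=True)

payload = build_logging_payload("log-destination", "abc/")
print(payload)
```

The returned string is the request body for the PUT call; the request itself must still be signed with your S3 credentials, as with any S3 API operation.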
Method: GET
API Endpoint:
http://<IP-of-CEPH-server>:<CEPH-RGW-Port>/<source-bucket-name>?logging
Where:
IP-of-CEPH-server - IP address of the Ceph server.
CEPH-RGW-Port - Port of the S3 gateway.
source-bucket-name - The source bucket whose bucket logging configuration you want to retrieve.
Expected output:
<BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <LoggingEnabled>
    <TargetBucket>log-destination</TargetBucket>
    <TargetPrefix>abc/</TargetPrefix>
    <ObjectRollTime>300</ObjectRollTime>
    <LoggingType>Journal</LoggingType>
    <RecordsBatchSize>0</RecordsBatchSize>
    <TargetObjectKeyFormat>
      <PartitionedPrefix>
        <PartitionDateSource>EventTime</PartitionDateSource>
      </PartitionedPrefix>
    </TargetObjectKeyFormat>
  </LoggingEnabled>
</BucketLoggingStatus>
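To confirm that the GET response matches the required configuration (Journal logging type, EventTime partition date source, and a target bucket), you can parse the returned XML and check the key fields. The following is a minimal sketch in Python, run here against the sample response above; the function name is illustrative:

```python
import xml.etree.ElementTree as ET

NS = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}

def check_logging_config(response_xml):
    """Check a ?logging GET response; return (ok, list of problems)."""
    root = ET.fromstring(response_xml)
    enabled = root.find("s3:LoggingEnabled", NS)
    if enabled is None:
        return False, ["bucket logging is not enabled"]
    problems = []
    logging_type = enabled.findtext("s3:LoggingType", default="", namespaces=NS)
    if logging_type != "Journal":
        problems.append(f"LoggingType is {logging_type!r}, expected 'Journal'")
    date_source = enabled.findtext(
        "s3:TargetObjectKeyFormat/s3:PartitionedPrefix/s3:PartitionDateSource",
        default="", namespaces=NS)
    if date_source != "EventTime":
        problems.append(f"PartitionDateSource is {date_source!r}, "
                        f"expected 'EventTime'")
    if not enabled.findtext("s3:TargetBucket", default="", namespaces=NS):
        problems.append("TargetBucket is missing")
    return not problems, problems

# Sample response from the expected output above.
sample = """<BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<LoggingEnabled>
<TargetBucket>log-destination</TargetBucket>
<TargetPrefix>abc/</TargetPrefix>
<ObjectRollTime>300</ObjectRollTime>
<LoggingType>Journal</LoggingType>
<RecordsBatchSize>0</RecordsBatchSize>
<TargetObjectKeyFormat>
<PartitionedPrefix>
<PartitionDateSource>EventTime</PartitionDateSource>
</PartitionedPrefix>
</TargetObjectKeyFormat>
</LoggingEnabled>
</BucketLoggingStatus>"""

ok, problems = check_logging_config(sample)
print(ok, problems)  # True []
```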