NetBackup™ for Cloud Object Store Administrator's Guide
- Introduction
- Managing Cloud object store assets
 - Planning NetBackup protection for Cloud object store assets
 - Enhanced backup performance in 11.0 or later
 - Prerequisites for adding Cloud object store accounts
 - Configuring buffer size for backups
 - Configure a temporary staging location
 - Configuring advanced parameters for Cloud object store
 - Permissions required for Amazon S3 cloud provider user
 - Permissions required for Azure blob storage
 - Permissions required for GCP
 - Limitations and considerations
 - Adding Cloud object store accounts
 - Manage Cloud object store accounts
 - Scan for malware
 
- Protecting Cloud object store assets
 - About accelerator support
 - About incremental backup
 - About dynamic multi-streaming
 - About storage lifecycle policies
 - About policies for Cloud object store assets
 - Planning for policies
 - Prerequisites for Cloud object store policies
 - Creating a backup policy
 - Policy attributes
 - Creating schedule attributes for policies
 - Configuring the Start window
 - Configuring the exclude dates
 - Configuring the include dates
 - Configuring the Cloud objects tab
 - Adding conditions
 - Adding tag conditions
 - Examples of conditions and tag conditions
 - Managing Cloud object store policies
 
- Recovering Cloud object store assets
- Troubleshooting
 - Error 5541: Cannot take backup, the specified staging location does not have enough space
 - Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
 - Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
 - Reduced acceleration during the first full backup, after upgrade to versions 10.5 and 11.
 - After backup, some files in the shm folder and shared memory are not cleaned up.
 - After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
 - Backup fails with default number of streams with the error: Failed to start NetBackup COSP process.
 - Backup fails, after you select a scale out server or Snapshot Manager as a backup host
 - Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
 - Recovery for the original bucket recovery option starts, but the job fails with error 3601
 - Recovery Job does not start
 - Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
 - Access tier property not restored after overwriting the existing object in the original location
 - Reduced accelerator optimization in Azure for OR query with multiple tags
 - Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
 - Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
 - The Cloud object store account has encountered an error
 - The bucket list is empty during policy selection
 - Creating a second account on Cloudian fails by selecting an existing region
 - Restore failed with error 2825: Incomplete restore operation
 - Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
 - AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
 - Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
 - Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
 - Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
 - Empty directories are not backed up in Azure Data Lake
 - Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
 - Recovery error: "Invalid parameter specified"
 - Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
 - Cloud store account creation fails with incorrect credentials
 - Discovery failures due to improper permissions
 - Restore failures due to object lock
 
 
Configuring advanced parameters for Cloud object store
You can configure these optional parameters on the backup host.

Table: Advanced parameters for the backup host

| Parameter | Default / Configuration | Description |
|---|---|---|
| COSP_LIST_API_MAX_OBJECTS | Default: 5000 for Azure backups and 1000 for AWS/S3 backups. | During object listing, this value is used as the batch size in the ListObject API. Reducing this value may slow down the overall backup process. Set it according to your cloud provider's specifications or recommendations. |
| COSP_DOWNLOAD_OBJECT_PART_SIZE_IN_MB | Default: 16. Configuration: 1 - 512 | The size, in MB, of each chunk or part used when downloading large objects in multiple chunks. If the cloud is fast and can handle larger chunks efficiently, increasing this value may improve the overall backup speed. |
| COSP_TOTAL_OBJECT_DOWNLOAD_WORKERS | Default: 160. Configuration: 10 - 1000 | The number of parallel tasks used to download objects from a specified bucket. If the cloud is slow, increasing this value can cause issues such as request failures or retries. If the cloud is fast, increasing it can speed up the backup process. More workers also consume more CPU resources. Because this value applies to each backup job, configure it carefully, especially when running multiple parallel jobs. |
| COSP_STREAM_CHANNEL_BUFFER_SIZE | Default: 100. Configuration: 100 - 5000 | Restricts the total buffer size, that is, the number of objects kept in the cache for processing after listing. If multiple streams are running, this is the buffer/cache size of each stream. |
| COSP_DB_INSERT_BATCH_SIZE | Default: 500. Configuration: 500 - 5000 | The number of downloaded objects that are sent to bpbkar for final processing. Increasing this value to an optimistic number, such as 1000, can boost the overall backup speed, provided the staging location has enough space to handle the batch size. If the staging location lacks space, increasing this value may slow down the backup process. |
| COSP_STAGING_LOC_WATER_MARK_IN_PERCENTAGE | Default: 80. Configuration: 30 - 80 | Restricts the usage of the staging location. The value is a percentage. For example: COSP_STAGING_LOC_WATER_MARK_IN_PERCENTAGE = 70 |
| COSP_SPACE_MANAGEMENT_TIMEOUT_IN_MIN | Default: 5. Configuration: 5 - 60 | The time, in minutes, to wait before starting a backup if the required space is not available in the temporary staging location. For example: COSP_SPACE_MANAGEMENT_TIMEOUT_IN_MIN = 20 |
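For illustration, a backup host with a fast cloud endpoint and ample staging space might combine several of these settings as follows. The values are hypothetical examples within the documented ranges; tune them for your own environment and verify how configuration entries are applied on your backup host before changing them.

```
# Hypothetical backup host tuning for a fast cloud endpoint and a large staging location.
# All values are illustrative and fall within the documented ranges.
COSP_DOWNLOAD_OBJECT_PART_SIZE_IN_MB = 64
COSP_TOTAL_OBJECT_DOWNLOAD_WORKERS = 320
COSP_DB_INSERT_BATCH_SIZE = 1000
COSP_STAGING_LOC_WATER_MARK_IN_PERCENTAGE = 70
COSP_SPACE_MANAGEMENT_TIMEOUT_IN_MIN = 20
```

Larger part sizes and more download workers favor fast, reliable cloud endpoints, and the higher batch size assumes the staging location can absorb it. A slow endpoint or a small staging area would call for values closer to the defaults.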
You can configure this optional parameter on the primary server.

Table: Advanced parameter for the primary server

| Parameter | Default / Configuration | Description |
|---|---|---|
| DYNAMIC_STREAMING_START_CHILD_BACKUP_JOBS_TIMEOUT | Default: 600 seconds. Configuration: 600 - 3600 seconds | Determines how long each parent job waits to start all its child jobs when multiple jobs are triggered with multiple streams. If the timeout is reached, the job fails with a crawler timeout error. Set this value as high as possible if multiple backups are running. Note that a change to this setting takes effect only for backup jobs started after the change. |
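For example, an environment that regularly triggers several multi-stream backups in parallel might raise this timeout toward the upper end of its range before starting the jobs. The value below is purely illustrative:

```
# Hypothetical value within the documented 600 - 3600 second range.
DYNAMIC_STREAMING_START_CHILD_BACKUP_JOBS_TIMEOUT = 1800
```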