NetBackup™ for Cloud Object Store Administrator's Guide

Last Published:
Product(s): NetBackup & Alta Data Protection (11.0)
  1. Introduction
    1. Overview of NetBackup protection for Cloud object store
    2. Features of NetBackup Cloud object store workload support
  2. Managing Cloud object store assets
    1. Planning NetBackup protection for Cloud object store assets
    2. Enhanced backup performance in 11.0 or later
    3. Prerequisites for adding Cloud object store accounts
    4. Configuring buffer size for backups
    5. Configure a temporary staging location
    6. Configuring advanced parameters for Cloud object store
    7. Permissions required for Amazon S3 cloud provider user
    8. Permissions required for Azure blob storage
    9. Permissions required for GCP
    10. Limitations and considerations
    11. Adding Cloud object store accounts
      1. Creating cross-account access in AWS
      2. Check certificate for revocation
      3. Managing Certification Authorities (CA) for NetBackup Cloud
      4. Adding a new region
    12. Manage Cloud object store accounts
    13. Scan for malware
      1. Backup images
      2. Assets by policy type
  3. Protecting Cloud object store assets
    1. About accelerator support
      1. How NetBackup accelerator works with Cloud object store
      2. Accelerator notes and requirements
      3. Accelerator force rescan for Cloud object store (schedule attribute)
      4. Accelerator backup and NetBackup catalog
      5. Calculate the NetBackup accelerator track log size
    2. About incremental backup
    3. About dynamic multi-streaming
    4. About storage lifecycle policies
      1. Adding an SLP
    5. About policies for Cloud object store assets
    6. Planning for policies
    7. Prerequisites for Cloud object store policies
    8. Creating a backup policy
    9. Policy attributes
    10. Creating schedule attributes for policies
    11. Configuring the Start window
      1. Adding, changing, or deleting a time window in a policy schedule
      2. Example of schedule duration
    12. Configuring the exclude dates
    13. Configuring the include dates
    14. Configuring the Cloud objects tab
    15. Adding conditions
    16. Adding tag conditions
    17. Examples of conditions and tag conditions
    18. Managing Cloud object store policies
      1. Copy a policy
      2. Deactivating or deleting a policy
      3. Manually backup assets
  4. Recovering Cloud object store assets
    1. Prerequisites for recovering Cloud object store objects
    2. Configuring Cloud object retention properties
    3. Recovering Cloud object store assets
  5. Troubleshooting
    1. Error 5541: Cannot take backup, the specified staging location does not have enough space
    2. Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
    3. Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
    4. Reduced acceleration during the first full backup, after upgrade to versions 10.5 and 11.
    5. After backup, some files in the shm folder and shared memory are not cleaned up.
    6. After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
    7. Backup fails with default number of streams with the error: Failed to start NetBackup COSP process.
    8. Backup fails, after you select a scale out server or Snapshot Manager as a backup host
    9. Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
    10. Recovery for the original bucket recovery option starts, but the job fails with error 3601
    11. Recovery Job does not start
    12. Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
    13. Access tier property not restored after overwriting the existing object in the original location
    14. Reduced accelerator optimization in Azure for OR query with multiple tags
    15. Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
    16. Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
    17. The Cloud object store account has encountered an error
    18. The bucket list is empty during policy selection
    19. Creating a second account on Cloudian fails by selecting an existing region
    20. Restore failed with 2825 incomplete restore operation
    21. Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
    22. AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
    23. Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
    24. Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
    25. Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
    26. Empty directories are not backed up in Azure Data Lake
    27. Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
    28. Recovery error: "Invalid parameter specified"
    29. Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
    30. Cloud store account creation fails with incorrect credentials
    31. Discovery failures due to improper permissions
    32. Restore failures due to object lock

Features of NetBackup Cloud object store workload support

Table: Salient features


Integration with NetBackup's role-based access control (RBAC)

The NetBackup Web UI provides the Default cloud object store Administrator RBAC role to control which NetBackup users can manage Cloud object store operations in NetBackup. You do not need to be a NetBackup administrator to manage Cloud object stores.

Management of Cloud object store accounts

You can configure a single NetBackup primary server for multiple Cloud object store accounts, across different cloud vendors as required.

Authentication and credentials

NetBackup places a strong emphasis on security. To protect an Azure Blob Storage account, the supported authentication mechanisms are Access key, Service Principal, and Managed Identity. For all S3 API-compliant cloud vendors, the Access key and Secret key are supported. For Amazon S3, the Access key, IAM role, and Assume role (for cross-AWS-account access) authentication mechanisms are supported.

For a complete list, see the NetBackup Compatibility Lists.

Backup policy

A single backup policy can protect multiple S3 buckets or Azure blob containers from one Cloud object store account.

Intelligent selection of cloud objects

Within a single policy, NetBackup provides flexibility to configure different queries for different buckets or containers. Some buckets or containers can be configured to back up all the objects in them. You can also configure some buckets and containers with intelligent queries to identify objects based on:

  • Object name prefix

  • Entire object name

  • Object tags

  • Files and directories in Azure Data Lake
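
Conceptually, a policy's intelligent selection acts as a filter over a bucket or container listing. The sketch below illustrates the idea only; the `CloudObject` structure and the query fields are hypothetical stand-ins, not NetBackup's internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class CloudObject:
    # Hypothetical stand-in for an object listed from a bucket or container.
    name: str
    tags: dict = field(default_factory=dict)

def select_objects(objects, prefix=None, exact_name=None, required_tags=None):
    """Mimic a policy query: match by name prefix, entire name, or tags."""
    selected = []
    for obj in objects:
        if prefix is not None and not obj.name.startswith(prefix):
            continue
        if exact_name is not None and obj.name != exact_name:
            continue
        if required_tags and any(obj.tags.get(k) != v
                                 for k, v in required_tags.items()):
            continue
        selected.append(obj)
    return selected

listing = [
    CloudObject("logs/2024/app.log", {"env": "prod"}),
    CloudObject("logs/2024/db.log", {"env": "dev"}),
    CloudObject("images/cat.png"),
]
# Back up only "logs/" objects tagged env=prod.
matches = select_objects(listing, prefix="logs/", required_tags={"env": "prod"})
print([o.name for o in matches])  # → ['logs/2024/app.log']
```

Within one policy, different buckets can carry different query parameters, or none at all (back up everything).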

Fast and optimized backups

In addition to full backups, NetBackup supports different types of incremental schedules for faster backups. The Accelerator feature is also supported for Cloud object store policies.

Enable checkpoint restart in the policy to resume a failed or suspended job from the last checkpoint, rather than repeating the entire data transfer from the start of the job. This feature does not apply to Dynamic multi-streaming backups.

Granular restore

NetBackup makes it easy to restore all objects in a bucket or container. It also lets you select which objects to restore by using prefix-, folder-, or object-based views.

You can narrow down a selection of backup images for restoration in NetBackup by providing a date and time range.

Restore options

NetBackup restores object store data along with its metadata, properties, tags, ACLs, and object lock properties.

NetBackup supports adding an arbitrary prefix to all objects when restoring, so that restored objects get distinct names and do not interfere with the original objects. Azure Data Lake files and directories, however, do not require a prefix; instead, they are restored to a specified alternate location.

By default, NetBackup skips overwriting objects that already exist in the Cloud object store, to conserve bandwidth and cloud costs. You can change this behavior with the Overwrite option, so that restored copies overwrite the copies stored in the Cloud object store.
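
The two restore behaviors above — an arbitrary prefix for distinct names, and skip-unless-Overwrite — can be sketched as follows (function names and signatures are illustrative, not a NetBackup API):

```python
def restore_key(original_key: str, restore_prefix: str = "") -> str:
    """Prepend an arbitrary prefix so restored copies do not collide
    with the original objects."""
    return restore_prefix + original_key

def should_restore(key: str, existing_keys: set, overwrite: bool = False) -> bool:
    """By default, skip objects that already exist in the cloud object
    store (saves bandwidth and cloud cost); Overwrite forces the copy."""
    return overwrite or key not in existing_keys

existing = {"data/report.csv"}
target = restore_key("data/report.csv", restore_prefix="restored-")
print(target)                                       # → restored-data/report.csv
print(should_restore(target, existing))             # → True (distinct name)
print(should_restore("data/report.csv", existing))  # → False (skipped by default)
print(should_restore("data/report.csv", existing, overwrite=True))  # → True
```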

Alternate location restores

You can restore objects:

  • To the same bucket or container

  • To a different bucket or container in the same account or subscription

  • To a different bucket or container in a different account or subscription.

Support for malware scan before recovery

You can run a malware scan on the files or folders selected for recovery as part of the recovery flow in the Web UI, and decide recovery actions based on the scan results.

Dynamic multi-streaming

This feature backs up a single object store bucket or container with multiple simultaneous backup streams, optimizing backup performance to meet backup windows. Advantages of Dynamic multi-streaming:

  • Implicitly distributes the objects for backup across multiple streams.

  • Automates stream creation and data distribution across streams.

  • Prevents stream starvation.

Scalability support for the backup host

NetBackup Cloud object store protection supports configuring the NetBackup Snapshot Manager as a scalable backup host for cloud deployments, along with the media server. If you have an existing NetBackup Snapshot Manager deployment in your environment, you can use that as a backup host for Cloud object store policies.

With NetBackup Snapshot Manager as the backup host, you do not need to configure multiple backup hosts for large jobs or create multiple policies to distribute the load across these backup hosts. Snapshot Manager can increase the number of data mover containers during a backup operation, and then reduce them when the protection tasks are completed.

Object lock

This feature lets you retain the original object lock properties and also provides an option to customize them. If object lock properties are applied to the restored objects, you cannot delete those objects until the retention period is over or the legal holds are removed. You can use the object lock and retention properties without any configuration during policy creation and backup.
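
The deletion rule that object lock enforces can be stated as a small predicate — a sketch of the general object-lock semantics, not NetBackup code (the function and its parameters are hypothetical):

```python
from datetime import datetime, timezone
from typing import Optional

def can_delete(retain_until: Optional[datetime], legal_hold: bool,
               now: datetime) -> bool:
    """A locked object cannot be deleted until its retention period
    expires AND every legal hold is removed."""
    if legal_hold:
        return False
    return retain_until is None or now >= retain_until

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(can_delete(datetime(2026, 1, 1, tzinfo=timezone.utc), False, now))  # → False
print(can_delete(datetime(2024, 1, 1, tzinfo=timezone.utc), False, now))  # → True
print(can_delete(None, True, now))  # → False (legal hold still in place)
```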

Quick object change scan

This feature significantly speeds up NetBackup Accelerator, resulting in faster backups.

Apart from an object's modification time, NetBackup also checks for changes in the object's tags and user attributes to determine whether the object qualifies for an incremental backup. In application environments where metadata does not change after objects are created, you can use this option to skip these metadata checks, which significantly increases backup speed.

Enabling this option speeds up change identification for Accelerator because it skips the tag and attribute comparisons since the previous backup. However, objects whose metadata changed without a change in modification time are skipped and are not backed up.
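
The trade-off can be sketched as a change-detection predicate. This is an illustration of the behavior described above, not NetBackup's implementation; the dictionary fields are hypothetical:

```python
def needs_backup(obj, prev, quick_scan=False):
    """Decide whether an object qualifies for an incremental backup.

    Normally, modification time, tags, and user attributes are all
    compared against the previous backup. With quick object change
    scan, only the modification time is checked — faster, but
    metadata-only changes (same mtime) are skipped.
    """
    if prev is None or obj["mtime"] > prev["mtime"]:
        return True
    if quick_scan:
        return False  # skip the tag/attribute comparison entirely
    return obj["tags"] != prev["tags"] or obj["attrs"] != prev["attrs"]

prev = {"mtime": 100, "tags": {"env": "prod"}, "attrs": {}}
cur  = {"mtime": 100, "tags": {"env": "dev"},  "attrs": {}}  # tag changed, same mtime
print(needs_backup(cur, prev))                   # → True  (full metadata check)
print(needs_backup(cur, prev, quick_scan=True))  # → False (metadata change missed)
```

This is why the option suits environments where object metadata never changes after creation.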

For more information, see the Knowledge article:

https://www.veritas.com/support/en_US/article.100073853.html