NetBackup™ for Cloud Object Store Administrator's Guide

Last Published:
Product(s): NetBackup & Alta Data Protection (11.0)
  1. Introduction
    1. Overview of NetBackup protection for Cloud object store
    2. Features of NetBackup Cloud object store workload support
  2. Managing Cloud object store assets
    1. Planning NetBackup protection for Cloud object store assets
    2. Enhanced backup performance in 11.0 or later
    3. Prerequisites for adding Cloud object store accounts
    4. Configuring buffer size for backups
    5. Configure a temporary staging location
    6. Configuring advanced parameters for Cloud object store
    7. Permissions required for Amazon S3 cloud provider user
    8. Permissions required for Azure blob storage
    9. Permissions required for GCP
    10. Limitations and considerations
    11. Adding Cloud object store accounts
      1. Creating cross-account access in AWS
      2. Check certificate for revocation
      3. Managing Certification Authorities (CA) for NetBackup Cloud
      4. Adding a new region
    12. Manage Cloud object store accounts
    13. Scan for malware
      1. Backup images
      2. Assets by policy type
  3. Protecting Cloud object store assets
    1. About accelerator support
      1. How NetBackup accelerator works with Cloud object store
      2. Accelerator notes and requirements
      3. Accelerator force rescan for Cloud object store (schedule attribute)
      4. Accelerator backup and NetBackup catalog
      5. Calculate the NetBackup accelerator track log size
    2. About incremental backup
    3. About dynamic multi-streaming
    4. About storage lifecycle policies
      1. Adding an SLP
    5. About policies for Cloud object store assets
    6. Planning for policies
    7. Prerequisites for Cloud object store policies
    8. Creating a backup policy
    9. Policy attributes
    10. Creating schedule attributes for policies
    11. Configuring the Start window
      1. Adding, changing, or deleting a time window in a policy schedule
      2. Example of schedule duration
    12. Configuring the exclude dates
    13. Configuring the include dates
    14. Configuring the Cloud objects tab
    15. Adding conditions
    16. Adding tag conditions
    17. Examples of conditions and tag conditions
    18. Managing Cloud object store policies
      1. Copy a policy
      2. Deactivating or deleting a policy
      3. Manually backup assets
  4. Recovering Cloud object store assets
    1. Prerequisites for recovering Cloud object store objects
    2. Configuring Cloud object retention properties
    3. Recovering Cloud object store assets
  5. Troubleshooting
    1. Error 5541: Cannot take backup, the specified staging location does not have enough space
    2. Error 5537: Backup failed: Incorrect read/write permissions are specified for the download staging path.
    3. Error 5538: Cannot perform backup. Incorrect ownership is specified for the download staging path.
    4. Reduced acceleration during the first full backup, after upgrade to versions 10.5 and 11.
    5. After backup, some files in the shm folder and shared memory are not cleaned up.
    6. After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older policies
    7. Backup fails with default number of streams with the error: Failed to start NetBackup COSP process.
    8. Backup fails, after you select a scale out server or Snapshot Manager as a backup host
    9. Backup fails or becomes partially successful on GCP storage for objects with content encoded as GZIP.
    10. Recovery for the original bucket recovery option starts, but the job fails with error 3601
    11. Recovery Job does not start
    12. Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network connection broken"
    13. Access tier property not restored after overwriting the existing object in the original location
    14. Reduced accelerator optimization in Azure for OR query with multiple tags
    15. Backup failed and shows a certificate error with Amazon S3 bucket names containing dots (.)
    16. Azure backup jobs fail when space is provided in a tag query for either tag key name or value.
    17. The Cloud object store account has encountered an error
    18. The bucket list is empty during policy selection
    19. Creating a second account on Cloudian fails by selecting an existing region
    20. Restore failed with 2825 incomplete restore operation
    21. Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
    22. AIR import image restore fails on the target domain if the Cloud store account is not added to the target domain
    23. Backup for Azure Data Lake fails when a back-level media server is used with backup host or storage server version 10.3
    24. Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of client
    25. Recovery for Azure Data Lake fails: "This operation is not permitted as the path is too deep"
    26. Empty directories are not backed up in Azure Data Lake
    27. Recovery error: "Invalid alternate directory location. You must specify a string with length less than 1025 valid characters"
    28. Recovery error: "Invalid parameter specified"
    29. Restore fails: "Cannot perform the COSP operation, skipping the object: [/testdata/FxtZMidEdTK]"
    30. Cloud store account creation fails with incorrect credentials
    31. Discovery failures due to improper permissions
    32. Restore failures due to object lock

Accelerator notes and requirements

Note the following about the NetBackup accelerator:

  • NetBackup accelerator must be properly licensed. For the latest information on licensing, contact your NetBackup sales or partner representative.

  • Supports disk storage units only. Supported storage includes Media Server Deduplication Pool, NetBackup appliance, cloud storage, and qualified third-party OST storage. For supported storage types, see the NetBackup Enterprise Server and Server - Hardware and Cloud Storage Compatibility List at the following URL: http://www.netbackup.com/compatibility

  • Storage unit groups are supported only if the storage unit selection in the group is Failover.

  • Supports full backups and incremental backups.

  • For every policy that enables the Use Accelerator option, the following backup schedules are recommended at a minimum:

    • A full backup schedule with the Accelerator forced rescan option enabled.

    • Another full backup schedule without the Accelerator forced rescan option enabled.

    See Accelerator force rescan for Cloud object store (schedule attribute).

  • If a previous backup of the same policy, bucket, and query does not exist on the backup host or scale-out server, NetBackup performs a full backup and creates a track log on that backup host or scale-out server. This initial backup occurs at the speed of a normal (not accelerated) full backup. Subsequent accelerator backups that use the same backup host or scale-out server use the track log for accelerated backup speed. (A track-log sketch follows these notes.)

    Note:

    When you first enable a policy to use accelerator, the next backup (whether full or incremental) is in effect a full backup: it backs up all objects that match the Cloud objects queries. If that backup is scheduled as an incremental, it may not complete within the backup window.

  • NetBackup retains track logs for future accelerator backups. Whenever you add queries to the list, NetBackup performs a full, non-accelerated backup for the newly added queries. The unchanged queries are processed as normal accelerator backups.

  • If the storage unit that is associated with the policy cannot be validated when you create the policy, it is validated later, when the backup job begins. If accelerator does not support the storage unit, the backup fails. In the bpbrm log, a message similar to one of the following appears:

    Storage server %s, type %s, does not support image include.

    Storage server type %s, does not support accelerator backup.

  • Accelerator requires that the storage have the OptimizedImage attribute enabled.

  • The Expire after copy retention setting can cause images to expire while the backup runs. An SLP-based accelerator backup needs the previous backup image to synthesize a new full backup.

  • To detect changes in metadata, NetBackup makes one or more cloud API calls per object or blob, so change-detection time increases with the number of objects or blobs to be processed. Backups may run longer than expected when there is little or no data change but a large number of objects. (A change-detection sketch follows these notes.)

  • If, in your environment, an object's metadata or tags always change (are added, removed, or updated) along with its data, evaluate incremental backups without accelerator against incremental backups with accelerator from a performance and cost viewpoint.

  • When you create a Cloud object store policy with multiple tag-based queries, a few simple rules help you get the best results with accelerator: use the query builder on the policy creation page and create separate queries, one query per tag. Accelerator-based policies perform best in this configuration.

    NetBackup uses two checks to determine whether an object is included in an incremental backup:

    • The object's modification time.

    • Any changes in the tags and user attributes.

      These metadata checks are not required in environments where object metadata does not change after the objects are created. In such environments, you can use the Quick Object change scan option in the policy to skip these metadata checks, which leads to faster change detection and faster accelerated backups. (A sketch of this decision follows these notes.)
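
The following Python sketch is illustrative only and is not NetBackup code. It models the track-log behavior described in these notes under stated assumptions: the plan_backup function, the track_logs store, and the (policy, bucket, query, backup host) key are hypothetical stand-ins. A combination without a track log on the backup host or scale-out server gets a full, non-accelerated backup that seeds the track log; combinations with an existing track log run as accelerated backups, and newly added queries start with a full backup.

# Illustrative sketch, not NetBackup code: models how the presence of a track
# log on a backup host decides between a full and an accelerated backup.
def plan_backup(policy, bucket, queries, backup_host, track_logs):
    """Return the backup mode chosen for each query of a policy run."""
    plan = {}
    for query in queries:
        key = (policy, bucket, query, backup_host)
        if key not in track_logs:
            # First run for this combination (or a newly added query):
            # full, non-accelerated backup that creates the track log.
            plan[query] = "full (non-accelerated), create track log"
            track_logs[key] = {"seeded": True}
        else:
            # A track log already exists on this host: accelerated backup.
            plan[query] = "accelerated"
    return plan

if __name__ == "__main__":
    track_logs = {}  # persisted per backup host or scale-out server
    print(plan_backup("cos-policy", "my-bucket", ["tag=env"], "host-a", track_logs))
    # A later run with an added query: the new query gets a full backup,
    # the unchanged query is accelerated.
    print(plan_backup("cos-policy", "my-bucket", ["tag=env", "tag=team"], "host-a", track_logs))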
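
The change-detection sketch below shows why detection time scales with the number of objects. It does not show how NetBackup is implemented; it only illustrates that reading metadata and tags costs roughly two cloud API calls per object (here with the AWS boto3 SDK, a stated assumption; the bucket and key parameters are placeholders supplied by the caller), which is why a backup can run long even when little or no data has changed.

# Illustrative only: fetch metadata and tags for each object,
# at a cost of roughly two API calls per object.
import boto3

def collect_object_state(bucket, keys):
    """Return metadata and tags per object; API calls grow with len(keys)."""
    s3 = boto3.client("s3")
    state = {}
    for key in keys:
        head = s3.head_object(Bucket=bucket, Key=key)         # one API call
        tags = s3.get_object_tagging(Bucket=bucket, Key=key)  # another API call
        state[key] = {
            "last_modified": head["LastModified"],
            "metadata": head.get("Metadata", {}),
            "tags": {t["Key"]: t["Value"] for t in tags.get("TagSet", [])},
        }
    return state  # about 2 * len(keys) calls, regardless of how much data changed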
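
The last sketch models the two incremental checks and the effect of the Quick Object change scan option described above. The needs_incremental_backup function and the quick_scan flag are hypothetical illustrations of the decision, not NetBackup internals: with the quick scan, only the modification time is compared, so the per-object metadata and tag comparison is skipped.

# Illustrative decision logic, not NetBackup code: an object is included in an
# incremental backup if its data changed, or (unless quick_scan is set) if its
# tags or user metadata changed.
def needs_incremental_backup(current, previous, quick_scan=False):
    """Decide whether one object is included in an incremental backup."""
    if previous is None:
        return True                              # never backed up before
    if current["last_modified"] > previous["last_modified"]:
        return True                              # object data was rewritten
    if quick_scan:
        return False                             # skip metadata and tag checks
    # Full check: any change in tags or user metadata also triggers inclusion.
    return (current["tags"] != previous["tags"]
            or current["metadata"] != previous["metadata"])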