NetBackup™ Snapshot Manager Install and Upgrade Guide
- Introduction
- Section I. NetBackup Snapshot Manager installation and configuration
- Preparing for NetBackup Snapshot Manager installation
- Meeting system requirements
- NetBackup Snapshot Manager host sizing recommendations
- NetBackup Snapshot Manager extension sizing recommendations
- Creating an instance or preparing the host to install NetBackup Snapshot Manager
- Installing container platform (Docker, Podman)
- Creating and mounting a volume to store NetBackup Snapshot Manager data
- Verifying that specific ports are open on the instance or physical host
- Preparing NetBackup Snapshot Manager for backup from snapshot jobs
- Deploying NetBackup Snapshot Manager using container images
- Deploying NetBackup Snapshot Manager extensions
- Before you begin installing NetBackup Snapshot Manager extensions
- Downloading the NetBackup Snapshot Manager extension
- Installing the NetBackup Snapshot Manager extension on a VM
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (AKS) in Azure
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (EKS) in AWS
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (GKE) in GCP
- Install extension using the Kustomize and CR YAMLs
- Managing the extensions
- NetBackup Snapshot Manager cloud providers
- Configuration for protecting assets on cloud hosts/VM
- Deciding which feature (on-host agent or agentless) of NetBackup Snapshot Manager is to be used for protecting the assets
- Protecting assets with NetBackup Snapshot Manager's on-host agent feature
- Installing and configuring NetBackup Snapshot Manager agent
- Configuring the NetBackup Snapshot Manager application plug-in
- Configuring an application plug-in
- Microsoft SQL plug-in
- Oracle plug-in
- Protecting assets with NetBackup Snapshot Manager's agentless feature
- NetBackup Snapshot Manager assets protection
- Volume Encryption in NetBackup Snapshot Manager
- NetBackup Snapshot Manager security
- Section II. NetBackup Snapshot Manager maintenance
- NetBackup Snapshot Manager logging
- Upgrading NetBackup Snapshot Manager
- About NetBackup Snapshot Manager upgrades
- Supported upgrade path
- Upgrade scenarios
- Preparing to upgrade NetBackup Snapshot Manager
- Upgrading NetBackup Snapshot Manager
- Upgrading NetBackup Snapshot Manager using patch or hotfix
- Migrating and upgrading NetBackup Snapshot Manager
- GCP configuration for migration from zone to region
- Post-upgrade tasks
- Post-migration tasks
- Uninstalling NetBackup Snapshot Manager
- Preparing to uninstall NetBackup Snapshot Manager
- Backing up NetBackup Snapshot Manager
- Unconfiguring NetBackup Snapshot Manager plug-ins
- Unconfiguring NetBackup Snapshot Manager agents
- Removing the NetBackup Snapshot Manager agents
- Removing NetBackup Snapshot Manager from a standalone Docker host environment
- Removing NetBackup Snapshot Manager extensions - VM-based or managed Kubernetes cluster-based
- Restoring NetBackup Snapshot Manager
- Troubleshooting NetBackup Snapshot Manager
- Troubleshooting NetBackup Snapshot Manager
- SQL snapshot or restore and granular restore operations fail if the Windows instance loses connectivity with the NetBackup Snapshot Manager host
- Disk-level snapshot restore fails if the original disk is detached from the instance
- Discovery is not working even after assigning system managed identity to the control node pool
- Performance issue with GCP backup from snapshot
- Post migration on host agents fail with an error message
- File restore job fails with an error message
- Acknowledgment not received for datamover
- Upgrade of extension on AWS (EKS) fails when upgrading through script
- Backup from snapshot job fails with timeout error
Configuring staging location for Azure Stack Hub VMs to restore from backup
Azure Stack Hub requires you to create a container inside your storage account and use it as a staging location when you restore from backup images. During a restore, the unmanaged disks are staged in this container; after the data is written to the disks, they are converted to managed disks. This is a requirement of the Azure Stack Hub platform, and the configuration is mandatory before you can use Azure Stack Hub with NetBackup.
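As a rough sketch, the staging container can be created with the Azure CLI before you configure NetBackup. The container name, storage account name, and key below are placeholders, and the exact login and endpoint options depend on how your Azure Stack Hub environment is registered with the CLI:

# Point the CLI at your Azure Stack Hub environment first (for example, az cloud register and az cloud set),
# then create the staging container in the storage account. All names and the key are placeholder values.
az storage container create --name staging-container --account-name harshastorageacc --account-key <storage account key>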
The azurestack.conf file must contain the staging location details for the subscription ID where the VMs are restored. If you plan to restore to a target subscription ID other than the source subscription ID, the details of the target subscription ID must also be present in the azurestack.conf file.
If you are using snapshot images for restore, you do not need to create this staging location.
Note:
The staging location is specific to the subscription ID; you must create one staging location for each subscription that you use to restore VMs.
To configure a staging location for a subscription ID:
1. On the NetBackup Snapshot Manager host, navigate to /cloudpoint/azurestack.conf and open the file in a text editor. This file is created only after you have added Azure Stack Hub as a cloud service provider in NetBackup.
2. Add the following details to the file:

   [subscription/<subscription ID>]
   storage_container = <name of the storage container>
   storage_account = /resourceGroup/<name of the resource group where the storage account exists>/storageaccount/<name of storage account>

   For example: /resourceGroup/Harsha_RG/storageaccount/harshastorageacc

3. Repeat step 2 for each subscription ID that you are using. Save and close the file.
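For illustration only, an azurestack.conf that defines staging locations for two subscriptions might look like the following. The subscription IDs, container names, resource groups, and storage account names are placeholder values that follow the format described in step 2:

[subscription/11111111-aaaa-bbbb-cccc-222222222222]
storage_container = staging-container-source
storage_account = /resourceGroup/Harsha_RG/storageaccount/harshastorageacc

[subscription/33333333-dddd-eeee-ffff-444444444444]
storage_container = staging-container-target
storage_account = /resourceGroup/Target_RG/storageaccount/targetstorageacc

As the note above states, each subscription that you restore to needs its own [subscription/<subscription ID>] section with its own staging location.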