NetBackup™ Snapshot Manager Install and Upgrade Guide
- Introduction
- Section I. NetBackup Snapshot Manager installation and configuration
- Preparing for NetBackup Snapshot Manager installation
- Meeting system requirements
- NetBackup Snapshot Manager host sizing recommendations
- NetBackup Snapshot Manager extension sizing recommendations
- Creating an instance or preparing the host to install NetBackup Snapshot Manager
- Installing container platform (Docker, Podman)
- Creating and mounting a volume to store NetBackup Snapshot Manager data
- Verifying that specific ports are open on the instance or physical host
- Preparing NetBackup Snapshot Manager for backup from snapshot jobs
- Deploying NetBackup Snapshot Manager using container images
- Deploying NetBackup Snapshot Manager extensions
- Before you begin installing NetBackup Snapshot Manager extensions
- Downloading the NetBackup Snapshot Manager extension
- Installing the NetBackup Snapshot Manager extension on a VM
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (AKS) in Azure
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (EKS) in AWS
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (GKE) in GCP
- Install extension using the Kustomize and CR YAMLs
- Managing the extensions
- NetBackup Snapshot Manager cloud providers
- Configuration for protecting assets on cloud hosts/VM
- Deciding which feature (on-host agent or agentless) of NetBackup Snapshot Manager is to be used for protecting the assets
- Protecting assets with NetBackup Snapshot Manager's on-host agent feature
- Installing and configuring NetBackup Snapshot Manager agent
- Configuring the NetBackup Snapshot Manager application plug-in
- Configuring an application plug-in
- Microsoft SQL plug-in
- Oracle plug-in
- Protecting assets with NetBackup Snapshot Manager's agentless feature
- NetBackup Snapshot Manager assets protection
- Volume Encryption in NetBackup Snapshot Manager
- NetBackup Snapshot Manager security
- Section II. NetBackup Snapshot Manager maintenance
- NetBackup Snapshot Manager logging
- Upgrading NetBackup Snapshot Manager
- About NetBackup Snapshot Manager upgrades
- Supported upgrade path
- Upgrade scenarios
- Preparing to upgrade NetBackup Snapshot Manager
- Upgrading NetBackup Snapshot Manager
- Upgrading NetBackup Snapshot Manager using patch or hotfix
- Migrating and upgrading NetBackup Snapshot Manager
- GCP configuration for migration from zone to region
- Post-upgrade tasks
- Post-migration tasks
- Uninstalling NetBackup Snapshot Manager
- Preparing to uninstall NetBackup Snapshot Manager
- Backing up NetBackup Snapshot Manager
- Unconfiguring NetBackup Snapshot Manager plug-ins
- Unconfiguring NetBackup Snapshot Manager agents
- Removing the NetBackup Snapshot Manager agents
- Removing NetBackup Snapshot Manager from a standalone Docker host environment
- Removing NetBackup Snapshot Manager extensions - VM-based or managed Kubernetes cluster-based
- Restoring NetBackup Snapshot Manager
- Troubleshooting NetBackup Snapshot Manager
- SQL snapshot or restore and granular restore operations fail if the Windows instance loses connectivity with the NetBackup Snapshot Manager host
- Disk-level snapshot restore fails if the original disk is detached from the instance
- Discovery is not working even after assigning system managed identity to the control node pool
- Performance issue with GCP backup from snapshot
- Post migration on host agents fail with an error message
- File restore job fails with an error message
- Acknowledgment not received for datamover
- Upgrade of extension on AWS (EKS) fails when upgrading through script
- Backup from snapshot job fails with timeout error
Disk-level snapshot restore fails if the original disk is detached from the instance
This issue occurs if you are performing a disk-level snapshot restore to the same location.
When you trigger a disk-level snapshot restore to the same location, NetBackup first detaches the existing original disk from the instance, creates a new volume from the disk snapshot, and then attaches the new volume to the instance. The original disk is automatically deleted after the restore operation is successful.
However, if the original disk whose snapshot is being restored is manually detached from the instance before the restore is triggered, the restore operation fails.
You may see the following message on the NetBackup UI:
Request failed unexpectedly: [Errno 17] File exists: '/<app.diskmount>'
The NetBackup coordinator logs contain messages similar to the following:
flexsnap.coordinator: INFO - configid : <app.snapshotID> status changed to
{u'status': u'failed', u'discovered_time': <time>, u'errmsg': u'
Could not connect to <application> server localhost:27017:
[Errno 111]Connection refused'}
Workaround:
If the restore has already failed in the environment, you may have to manually perform a disk cleanup first and then trigger the restore job again.
Perform the following steps:
- Log on to the instance for which the restore operation has failed.
Ensure that the user account that you use to connect has administrative privileges on the instance.
- Run the following command to unmount the application disk cleanly:
# sudo umount /<application_diskmount>
Here, <application_diskmount> is the original application disk mount path on the instance.
If you see a "device is busy" message, wait for some time and then try the umount command again.
- From the NetBackup UI, trigger the disk-level restore operation again.
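The unmount step above can be sketched as a small shell helper that retries when the device reports busy. This is an illustrative sketch, not part of NetBackup: the function name, retry count, and wait interval are assumptions, and the mount path is a placeholder for your application disk mount.

```shell
#!/usr/bin/env bash
# Hypothetical cleanup helper (not part of NetBackup): retry the unmount
# when the device reports busy. Run as root, or change umount to sudo umount.
umount_with_retry() {
    mountpoint="$1"
    attempts="${2:-5}"    # illustrative default; adjust as needed
    i=1
    while [ "$i" -le "$attempts" ]; do
        if umount "$mountpoint" 2>/dev/null; then
            echo "unmounted $mountpoint"
            return 0
        fi
        echo "device is busy, retrying ($i/$attempts)..." >&2
        sleep 5
        i=$((i + 1))
    done
    echo "could not unmount $mountpoint" >&2
    return 1
}

# Example (placeholder path):
# umount_with_retry /<application_diskmount>
```

Once the unmount succeeds, trigger the disk-level restore operation again from the NetBackup UI.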
In general, if you want to detach the original application disks from the instance, use the following process for restore:
- First take a disk-level snapshot of the instance.
- After the snapshot is created successfully, manually detach the disk from the instance.
For example, if the instance is in the AWS cloud, use the AWS Management Console and edit the instance to detach the data disk. Ensure that you save the changes to the instance.
- Log on to the instance using an administrative user account and then run the following command:
# sudo umount /<application_diskmount>
If you see a "device is busy" message, wait for some time and then try the umount command again.
- Now trigger a disk-level restore operation from the NetBackup UI.
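For an AWS instance, the same order (unmount inside the guest first, then detach the volume) can also be sketched with the AWS CLI instead of the Management Console. The `aws ec2 detach-volume` and `aws ec2 wait volume-available` calls are standard AWS CLI commands, but the `safe_detach` wrapper, the mount check, and the volume and instance IDs are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper (not part of NetBackup): detach an EBS data volume
# only after its filesystem has been unmounted inside the guest.
safe_detach() {
    mnt="$1"; vol="$2"; inst="$3"
    # Refuse to detach while the filesystem is still mounted in the guest.
    if grep -qs " $mnt " /proc/mounts; then
        echo "still mounted: $mnt -- run umount first" >&2
        return 1
    fi
    aws ec2 detach-volume --volume-id "$vol" --instance-id "$inst"
    aws ec2 wait volume-available --volume-id "$vol"
}

# Example (placeholder IDs):
# safe_detach /<application_diskmount> vol-0123456789abcdef0 i-0123456789abcdef0
```

Detaching a volume that is still mounted can corrupt the filesystem, which is why the sketch checks /proc/mounts before issuing the detach.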