NetBackup™ Snapshot Manager Install and Upgrade Guide
- Introduction
- Section I. NetBackup Snapshot Manager installation and configuration
- Preparing for NetBackup Snapshot Manager installation
- Meeting system requirements
- Snapshot Manager host sizing recommendations
- Snapshot Manager extension sizing recommendations
- Creating an instance or preparing the host to install Snapshot Manager
- Installing container platform (Docker, Podman)
- Creating and mounting a volume to store Snapshot Manager data
- Verifying that specific ports are open on the instance or physical host
- Preparing Snapshot Manager for backup from snapshot jobs
- Deploying NetBackup Snapshot Manager using container images
- Deploying NetBackup Snapshot Manager extensions
- Before you begin installing Snapshot Manager extensions
- Downloading the Snapshot Manager extension
- Installing the Snapshot Manager extension on a VM
- Installing the Snapshot Manager extension on a managed Kubernetes cluster (AKS) in Azure
- Installing the Snapshot Manager extension on a managed Kubernetes cluster (EKS) in AWS
- Installing the Snapshot Manager extension on a managed Kubernetes cluster (GKE) in GCP
- Install extension using the Kustomize and CR YAMLs
- Managing the extensions
- NetBackup Snapshot Manager cloud plug-ins
- NetBackup Snapshot Manager application agents and plug-ins
- About the installation and configuration process
- Installing and configuring Snapshot Manager agent
- Configuring the Snapshot Manager application plug-in
- Configuring an application plug-in
- Microsoft SQL plug-in
- Oracle plug-in
- NetBackup protection plan
- Configuring VSS to store shadow copies on the originating drive
- Additional steps required after restoring an AWS RDS database instance
- Protecting assets with NetBackup Snapshot Manager's agentless feature
- Volume Encryption in NetBackup Snapshot Manager
- NetBackup Snapshot Manager security
- Section II. NetBackup Snapshot Manager maintenance
- NetBackup Snapshot Manager logging
- Upgrading NetBackup Snapshot Manager
- Uninstalling NetBackup Snapshot Manager
- Preparing to uninstall Snapshot Manager
- Backing up Snapshot Manager
- Unconfiguring Snapshot Manager plug-ins
- Unconfiguring Snapshot Manager agents
- Removing the Snapshot Manager agents
- Removing Snapshot Manager from a standalone Docker host environment
- Removing Snapshot Manager extensions - VM-based or managed Kubernetes cluster-based
- Restoring Snapshot Manager
- Troubleshooting NetBackup Snapshot Manager
- Troubleshooting Snapshot Manager
- SQL snapshot or restore and granular restore operations fail if the Windows instance loses connectivity with the Snapshot Manager host
- Disk-level snapshot restore fails if the original disk is detached from the instance
- Discovery is not working even after assigning system managed identity to the control node pool
- Performance issue with GCP backup from snapshot
- Post migration on host agents fail with an error message
- File restore job fails with an error message
Disk-level snapshot restore fails if the original disk is detached from the instance
This issue occurs if you are performing a disk-level snapshot restore to the same location.
When you trigger a disk-level snapshot restore to the same location, NetBackup first detaches the existing original disk from the instance, creates a new volume from the disk snapshot, and then attaches the new volume to the instance. The original disk is automatically deleted after the restore operation is successful.
However, if the original disk whose snapshot is being restored is manually detached from the instance before the restore is triggered, the restore operation fails.
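For reference, when the instance runs in AWS, the same-location restore sequence that NetBackup performs is roughly equivalent to the following AWS CLI sketch. This is only an illustration of the flow described above, not the product's actual implementation; NetBackup drives these operations through its cloud plug-in, and the volume IDs, snapshot ID, availability zone, instance ID, and device name shown here are placeholders:
# aws ec2 detach-volume --volume-id <original_volume_id>
# aws ec2 create-volume --availability-zone <zone> --snapshot-id <disk_snapshot_id>
# aws ec2 wait volume-available --volume-ids <new_volume_id>
# aws ec2 attach-volume --volume-id <new_volume_id> --instance-id <instance_id> --device <device_name>
# aws ec2 delete-volume --volume-id <original_volume_id>
Here, <new_volume_id> is the VolumeId that the create-volume command returns.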
You may see the following message on the NetBackup UI:
Request failed unexpectedly: [Errno 17] File exists: '/<app.diskmount>'
The NetBackup coordinator logs contain messages similar to the following:
flexsnap.coordinator: INFO - configid : <app.snapshotID> status changed to
{u'status': u'failed', u'discovered_time': <time>, u'errmsg': u'
Could not connect to <application> server localhost:27017:
[Errno 111]Connection refused'}
Workaround:
If the restore has already failed in the environment, you may have to manually perform a disk cleanup first and then trigger the restore job again.
Perform the following steps:
- Log on to the instance for which the restore operation has failed.
Ensure that the user account that you use to connect has administrative privileges on the instance.
- Run the following command to unmount the application disk cleanly:
# sudo umount /<application_diskmount>
Here, <application_diskmount> is the original application disk mount path on the instance.
If you see a "device is busy" message, wait for some time and then try the umount command again. Commands that help identify which processes keep the mount point busy are sketched after these steps.
- From the NetBackup UI, trigger the disk-level restore operation again.
In general, if you want to detach the original application disks from the instance, use the following process for restore:
- First take a disk-level snapshot of the instance.
- After the snapshot is created successfully, manually detach the disk from the instance.
For example, if the instance is in the AWS cloud, use the AWS Management Console and edit the instance to detach the data disk. Ensure that you save the changes to the instance. An AWS CLI alternative is sketched after these steps.
- Log on to the instance using an administrative user account and then run the following command:
# sudo umount /<application_diskmount>
If you see a "device is busy" message, wait for some time and then try the umount command again.
- Now trigger a disk-level restore operation from the NetBackup UI.