NetBackup™ Snapshot Manager Install and Upgrade Guide
Migrate and upgrade Snapshot Manager on RHEL 8.6 or 8.4
Perform the following procedures to migrate Snapshot Manager 10.0 or 10.0.0.1 from your RHEL 7.x host to the new RHEL 8.6 or 8.4 host, and to upgrade it to the new version.
To upgrade Snapshot Manager in Podman environment
- Download the Snapshot Manager upgrade installer.
Example:
NetBackup_SnapshotManager_<version>.tar.gz
- Un-tar the image file and list the contents:
# tar -xvf NetBackup_SnapshotManager_10.1.x.x.xxxx.tar.gz
# ls
NetBackup_SnapshotManager_10.1.x.x.xxxx.tar.gz netbackup-flexsnap-10.1.x.x.xxxx.tar.gz flexsnap_preinstall.sh
- Run the following command to prepare the Snapshot Manager host for installation:
# sudo ./flexsnap_preinstall.sh
- Upgrade Snapshot Manager by running the following command:
# podman run -it --rm -u 0 -v /cloudpoint:/cloudpoint -v /run/podman/podman.sock:/run/podman/podman.sock veritas/flexsnap-deploy:<new_version> install
For an unattended installation, use the following command:
# podman run -it --rm -u 0 -v /cloudpoint:/cloudpoint -v /run/podman/podman.sock:/run/podman/podman.sock veritas/flexsnap-deploy:<new_version> install -y
Here, new_version represents the Snapshot Manager version you are upgrading to.
The -y option passes an approval for all the subsequent installation prompts and allows the installer to proceed in a non-interactive mode.
Note:
Ensure that you enter the command without any line breaks.
The installer first loads the individual service images and then launches them in their respective containers.
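For example, replacing <new_version> with the 10.1.x.x.xxxx tag of the images that you loaded, an unattended upgrade command would look similar to the following (use the tag that matches your downloaded installer):
# podman run -it --rm -u 0 -v /cloudpoint:/cloudpoint -v /run/podman/podman.sock:/run/podman/podman.sock veritas/flexsnap-deploy:10.1.x.x.xxxx install -y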
- (Optional) Run the following command to remove the previous version images.
# podman rmi -f <imagename>:<oldimage_tagid>
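For example, to remove the deploy image from the earlier 10.0 release (substitute the image names and tags that the podman images command reports on your host):
# podman rmi -f veritas/flexsnap-deploy:10.0.0.9818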
- To verify that the new Snapshot Manager version is installed successfully, see Verifying that Snapshot Manager is installed successfully.
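In addition, a quick way to confirm that the containers are running from the new images is to list them and check the image tags (a convenience check, not a replacement for the referenced verification procedure):
# podman ps --format "{{.Names}} {{.Image}}"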
To migrate Snapshot Manager in Docker environment
- On the RHEL 7.x host, verify that there are no protection policy snapshots or other operations in progress and then stop Snapshot Manager by running the following command:
# sudo docker run -it --rm -v /cloudpoint:/cloudpoint -v /var/run/docker.sock:/var/run/docker.sock veritas/flexsnap-deploy:<current_version> stop
Here, current_version represents the currently installed Snapshot Manager version.
Example:
# sudo docker run -it --rm -v /cloudpoint:/cloudpoint -v /var/run/docker.sock:/var/run/docker.sock veritas/flexsnap-deploy:9.1.0.0.9349 stop
Note:
This is a single command. Ensure that you enter the command without any line breaks.
The Snapshot Manager containers are stopped one by one. Messages similar to the following appear on the command line:
Stopping the services
Stopping container: flexsnap-core.8a51aac1848c404ab61e4625d7b88703 ...done
Stopping container: flexsnap-core-long-15 ...done
Stopping container: flexsnap-core-long-14 ...done
Stopping container: flexsnap-core-long-13 ...done
Stopping container: flexsnap-core-long-12 ...done
Stopping container: flexsnap-core-long-11 ...done
Stopping container: flexsnap-core-long-10 ...done
Stopping container: flexsnap-core-long-9 ...done
Stopping container: flexsnap-core-long-8 ...done
Stopping container: flexsnap-core-long-7 ...done
Stopping container: flexsnap-core-long-6 ...done
Stopping container: flexsnap-core-long-5 ...done
Stopping container: flexsnap-core-long-4 ...done
Stopping container: flexsnap-core-long-3 ...done
Stopping container: flexsnap-core-long-2 ...done
Stopping container: flexsnap-core-long-1 ...done
Stopping container: flexsnap-core-long-0 ...done
Stopping container: flexsnap-core-15 ...done
Stopping container: flexsnap-core-14 ...done
Stopping container: flexsnap-core-13 ...done
Stopping container: flexsnap-core-12 ...done
Stopping container: flexsnap-core-11 ...done
Stopping container: flexsnap-core-10 ...done
Stopping container: flexsnap-core-9 ...done
Stopping container: flexsnap-core-8 ...done
Stopping container: flexsnap-core-7 ...done
Stopping container: flexsnap-core-6 ...done
Stopping container: flexsnap-core-5 ...done
Stopping container: flexsnap-core-4 ...done
Stopping container: flexsnap-core-3 ...done
Stopping container: flexsnap-core-2 ...done
Stopping container: flexsnap-core-1 ...done
Stopping container: flexsnap-core-0 ...done
Stopping container: flexsnap-nginx ...done
Stopping container: flexsnap-core ...done
Stopping container: flexsnap-core ...done
Stopping container: flexsnap-scheduler ...done
Stopping container: flexsnap-idm ...done
Stopping container: flexsnap-core ...done
Stopping container: flexsnap-core ...done
Stopping container: flexsnap-core ...done
Stopping container: flexsnap-api-gateway ...done
Stopping container: flexsnap-certauth ...done
Stopping container: flexsnap-rabbitmq ...done
Stopping container: flexsnap-mongodb ...done
Stopping container: flexsnap-fluentd ...done
Wait for all the Snapshot Manager containers to be stopped and then proceed to the next step.
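To confirm that nothing is still running before you continue, you can check that no flexsnap containers remain (the command returns no output when all the containers are stopped):
# sudo docker ps | grep flexsnap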
- Migrate the Snapshot Manager configuration data to the RHEL 8.6 or 8.4 host:
If you have created a new system with RHEL 8.6 or 8.4:
Run the following command to unmount /cloudpoint from the current host.
# umount /cloudpoint
Detach the data disk that was mounted on the /cloudpoint mount point.
Note:
For detailed instructions to detach or attach the data disks, follow the documentation provided by your cloud or storage vendor.
On the RHEL 8.6 or 8.4 host, run the following commands to create the mount point and mount the disk:
# mkdir /cloudpoint
# mount /dev/<diskname> /cloudpoint
For vendor-specific details, see Creating and mounting a volume to store Snapshot Manager data.
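As a quick check on the new host, you can confirm that the data disk is attached and that /cloudpoint is mounted:
# lsblk
# df -h /cloudpoint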
If you have upgraded from RHEL 7.x to RHEL 8.6 or 8.4, copy the /cloudpoint mount point data from the RHEL 7.x system and move it to the /cloudpoint folder on the RHEL 8.6 or 8.4 system.
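One way to copy the data is with rsync over SSH, which preserves ownership and permissions. This is only a sketch; it assumes root SSH access from the RHEL 7.x host to the new host, and <rhel8-host> is a placeholder for the new host's address:
# rsync -avP /cloudpoint/ root@<rhel8-host>:/cloudpoint/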
This concludes the Snapshot Manager migration process.
After migration, install the new_version on the new host by following the steps in To upgrade Snapshot Manager in Podman environment.
- If Snapshot Manager was migrated to another system or its IP address changed during migration, regenerate the certificates as follows:
Stop the Snapshot Manager services using the following command:
[root@ip-172-31-24-178 ec2-user]# podman run -it --rm --privileged -v /cloudpoint:/cloudpoint -v /run/podman/podman.sock:/run/podman/podman.sock veritas/flexsnap-deploy:10.0.0.9818 stop
Regenerate the certificates using the following command:
/cloudpoint/scripts/cp_regenerate_certs.sh -i <CP_IP_ADDRESS> -h <CP_HOSTNAME>
Setting up certificate authority ...done
Generating certificates for servers ...done
Generating certificates for clients ...done
Adding MongoDB and RabbitMQ certificate to the trust store ...[Storing /cloudpoint/keys/idm_store]
[Storing /cloudpoint/keys/flexsnap-idm_store]
done
Creating symlinks for nginx certificates ...done
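For example, with placeholder values 10.20.30.40 for the new IP address and snapmgr.example.com for the hostname, the invocation would be:
# /cloudpoint/scripts/cp_regenerate_certs.sh -i 10.20.30.40 -h snapmgr.example.com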
Start the Snapshot Manager services using the following command:
[root@ip-172-31-24-178 ec2-user]# podman run -it --rm --privileged -v /cloudpoint:/cloudpoint -v /run/podman/podman.sock:/run/podman/podman.sock veritas/flexsnap-deploy:10.0.0.9818 start
- Depending on which of the following scenarios applies, update the /cloudpoint/openv/netbackup/bp.conf file so that the value of CLIENT_NAME is the new Snapshot Manager IP address or hostname (see the sample entry after these scenarios).
If the IP address does not change, edit the Snapshot Manager server entry and provide a reissue token generated for the Snapshot Manager host.
If the IP address changes, disable the previous Snapshot Manager host and add a Snapshot Manager host with the new IP address. Then perform the following steps:
Revoke the certificate of the previous Snapshot Manager host.
Add the mapping of the new Snapshot Manager host IP address or hostname to the previous Snapshot Manager host by using host mappings.
Generate a reissue token by selecting the previous Snapshot Manager host, and then use that token to edit the new Snapshot Manager host. The certificate entry and host mapping of the old Snapshot Manager host are replaced.
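For reference, the relevant entry in the /cloudpoint/openv/netbackup/bp.conf file has the following form, where the value is a placeholder for your new Snapshot Manager hostname or IP address:
CLIENT_NAME = snapmgr.example.com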
- After migrating Snapshot Manager to a RHEL 8.6 or 8.4 host, follow the steps in To upgrade Snapshot Manager in Podman environment to upgrade Snapshot Manager to 10.1.
- This concludes the migration and upgrade process for Snapshot Manager. Verify that your Snapshot Manager configuration settings and data are preserved as is.