NetBackup™ Snapshot Manager Install and Upgrade Guide
- Introduction
- Section I. NetBackup Snapshot Manager installation and configuration
- Preparing for NetBackup Snapshot Manager installation
- Meeting system requirements
- Snapshot Manager host sizing recommendations
- Snapshot Manager extension sizing recommendations
- Creating an instance or preparing the host to install Snapshot Manager
- Installing container platform (Docker, Podman)
- Creating and mounting a volume to store Snapshot Manager data
- Verifying that specific ports are open on the instance or physical host
- Preparing Snapshot Manager for backup from snapshot jobs
- Deploying NetBackup Snapshot Manager using container images
- Deploying NetBackup Snapshot Manager extensions
- Before you begin installing Snapshot Manager extensions
- Downloading the Snapshot Manager extension
- Installing the Snapshot Manager extension on a VM
- Installing the Snapshot Manager extension on a managed Kubernetes cluster (AKS) in Azure
- Installing the Snapshot Manager extension on a managed Kubernetes cluster (EKS) in AWS
- Installing the Snapshot Manager extension on a managed Kubernetes cluster (GKE) in GCP
- Install extension using the Kustomize and CR YAMLs
- Managing the extensions
- NetBackup Snapshot Manager cloud plug-ins
- NetBackup Snapshot Manager application agents and plug-ins
- About the installation and configuration process
- Installing and configuring Snapshot Manager agent
- Configuring the Snapshot Manager application plug-in
- Configuring an application plug-in
- Microsoft SQL plug-in
- Oracle plug-in
- NetBackup protection plan
- Configuring VSS to store shadow copies on the originating drive
- Additional steps required after restoring an AWS RDS database instance
- Protecting assets with NetBackup Snapshot Manager's agentless feature
- Volume Encryption in NetBackup Snapshot Manager
- NetBackup Snapshot Manager security
- Section II. NetBackup Snapshot Manager maintenance
- NetBackup Snapshot Manager logging
- Upgrading NetBackup Snapshot Manager
- Uninstalling NetBackup Snapshot Manager
- Preparing to uninstall Snapshot Manager
- Backing up Snapshot Manager
- Unconfiguring Snapshot Manager plug-ins
- Unconfiguring Snapshot Manager agents
- Removing the Snapshot Manager agents
- Removing Snapshot Manager from a standalone Docker host environment
- Removing Snapshot Manager extensions - VM-based or managed Kubernetes cluster-based
- Restoring Snapshot Manager
- Troubleshooting NetBackup Snapshot Manager
- Troubleshooting Snapshot Manager
- SQL snapshot or restore and granular restore operations fail if the Windows instance loses connectivity with the Snapshot Manager host
- Disk-level snapshot restore fails if the original disk is detached from the instance
- Discovery is not working even after assigning system managed identity to the control node pool
- Performance issue with GCP backup from snapshot
- Post migration on host agents fail with an error message
- File restore job fails with an error message
Prerequisites to install the extension on a managed Kubernetes cluster in GCP
The Snapshot Manager cloud-based extension can be deployed on a managed Kubernetes cluster in GCP to scale the capacity of the Snapshot Manager host to service a large number of requests concurrently.
The GCP managed Kubernetes cluster must already be deployed with the appropriate network and configuration settings. The cluster must be able to communicate with Snapshot Manager and the filestore.
Note:
The Snapshot Manager and all the cluster nodepools must be in the same zone.
For more information, see Google Kubernetes Engine overview.
Use an existing container registry or create a new one, and ensure that the managed Kubernetes cluster has access to pull images from the container registry.
A dedicated nodepool for Snapshot Manager workloads must be created, with or without autoscaling enabled, in the GKE cluster. The autoscaling feature allows your nodepool to scale dynamically by automatically provisioning and de-provisioning nodes as required.
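For example, a dedicated nodepool with autoscaling enabled can be created with a command similar to the following. The nodepool name, node counts, and machine type are illustrative placeholders:
gcloud container node-pools create <nodepool-name> \
    --cluster <cluster-name> \
    --zone <zone-name> \
    --machine-type <machine-type> \
    --enable-autoscaling \
    --min-nodes 1 --max-nodes 3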
Snapshot Manager extension images (flexsnap-core, flexsnap-datamover, flexsnap-deploy, flexsnap-fluentd) must be uploaded to the container registry.
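As an illustration, an extension image can be tagged and pushed to a GCR registry with standard Docker commands similar to the following, where the project ID and version tag are placeholders. Repeat the tag and push steps for each extension image listed above:
gcloud auth configure-docker
docker tag flexsnap-core:<version> gcr.io/<project-id>/flexsnap-core:<version>
docker push gcr.io/<project-id>/flexsnap-core:<version>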
Prepare the host and the managed Kubernetes cluster in GCP
Select a Snapshot Manager image supported on an Ubuntu or RHEL system that meets the Snapshot Manager installation requirements, and create a host.
See Creating an instance or preparing the host to install Snapshot Manager.
Verify that port 5671 is open on the main Snapshot Manager host.
See Verifying that specific ports are open on the instance or physical host.
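As a quick illustrative check (not a replacement for the steps in the referenced topic), you can confirm whether a service is listening on port 5671 with a standard utility such as ss:
sudo ss -tlnp | grep 5671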
Install a Docker or Podman container platform on the host and start the container service.
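For example, on a Docker host the service can be started and enabled at boot as follows (Podman is daemonless, so this step may differ on Podman hosts):
sudo systemctl enable --now docker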
Prepare the Snapshot Manager host to access the Kubernetes cluster within your GCP environment.
Install the gcloud CLI. For more information, see Install the gcloud CLI.
Install the Kubernetes CLI (kubectl).
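If your gcloud installation supports components, one illustrative way to install kubectl (not the only supported method) is through the gcloud CLI:
gcloud components install kubectl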
Create a GCR container registry, or use an existing one if available, to which the Snapshot Manager images will be uploaded (pushed).
Run gcloud init to set the account. Ensure that this account has the required permissions to configure the Kubernetes cluster.
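After running gcloud init, you can confirm the active account and set the project with commands similar to the following, where the project name is a placeholder:
gcloud auth list
gcloud config set project <project-name>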
For more information on the required permissions, see Installing the Snapshot Manager extension on a managed Kubernetes cluster (GKE) in GCP. For more information on the gcloud command, refer to the gcloud CLI documentation.
Connect to the cluster using the following command:
gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-name>
For more information, refer to Install kubectl and configure cluster access.
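Once the credentials are fetched, a quick way to verify cluster access is to list the nodes; the output should include the nodes of the GKE cluster, including the dedicated Snapshot Manager nodepool:
kubectl get nodes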
Create a namespace for Snapshot Manager from the command line on the host system:
# kubectl create namespace <namespace-name>
# kubectl config set-context --current --namespace=<namespace-name>
Note:
You can provide any namespace name, for example, cloudpoint-system.
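To confirm that the namespace exists and is set as the current context, you can run, for example:
kubectl get namespace <namespace-name>
kubectl config view --minify | grep namespace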
Create a persistent volume
Reuse an existing filestore.
Mount the filestore and create a directory (for example, dir_for_this_cp) to be used only by Snapshot Manager.
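For example, the filestore can be mounted over NFS and the dedicated directory created as follows. The mount point and fileshare name are illustrative placeholders, and an NFS client package (nfs-common on Ubuntu, nfs-utils on RHEL) must be installed:
sudo mkdir -p /mnt/filestore
sudo mount -t nfs <ip of the filestore>:/<fileshare-name> /mnt/filestore
sudo mkdir /mnt/filestore/dir_for_this_cp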
Create a file (for example, PV_file.yaml) with the content as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name of the pv>
spec:
  capacity:
    storage: <size in GB>
  accessModes:
    - ReadWriteMany
  nfs:
    path: <path to the dir created above>
    server: <ip of the filestore>
Run the following command to set up the Persistent Volume:
kubectl apply -f <PV_file.yaml>
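You can verify that the Persistent Volume was created and is in the Available state with:
kubectl get pv <name of the pv>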
For more information about using a filestore with a Kubernetes cluster, refer to Accessing file shares from Google Kubernetes Engine clusters.