NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Deployment
- Prerequisites for Kubernetes cluster configuration
- Deployment with environment operators
- Deploying NetBackup
- Preparing the environment for NetBackup installation on Kubernetes cluster
- Recommendations of NetBackup deployment on Kubernetes cluster
- Limitations of NetBackup deployment on Kubernetes cluster
- Primary and media server CR
- Configuring NetBackup IT Analytics for NetBackup deployment
- Managing NetBackup deployment using VxUpdate
- Migrating the cloud node for primary or media servers
- Deploying NetBackup using Helm charts
- Deploying MSDP Scaleout
- Deploying MSDP Scaleout
- Prerequisites for AKS
- Prerequisites for EKS
- Installing the docker images and binaries
- Initializing the MSDP operator
- Configuring MSDP Scaleout
- Using MSDP Scaleout as a single storage pool in NetBackup
- Configuring the MSDP cloud in MSDP Scaleout
- Using S3 service in MSDP Scaleout for AKS
- Enabling MSDP S3 service after MSDP Scaleout is deployed for AKS
- Deploying Snapshot Manager
- Section II. Monitoring and Management
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager
- Managing the Load Balancer service
- Managing MSDP Scaleout
- Performing catalog backup and recovery
- Section III. Maintenance
- MSDP Scaleout Maintenance
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an invalid license key issue
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolving an issue related to KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting taking a long time
- Local connection is treated as an insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Prerequisites
Ensure that the following prerequisites are met before proceeding with the deployment.
Taints and tolerations allow you to mark (taint) a node so that no pods can be scheduled onto it unless a pod explicitly tolerates the taint. Tainting nodes instead of marking pods (as in node affinity/anti-affinity) is particularly useful when most pods in the cluster must avoid scheduling onto the node.
To use this functionality, you must create the node pool (AKS) or node group (EKS) with the following details:
Add a label with a specific key and value. For example, key = nbpool, value = nbnodes.
Add a taint with the same key and value used for the label in the step above, with the effect NoSchedule.
For example, key = nbpool, value = nbnodes, effect = NoSchedule
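The label and taint above are applied when the node group or node pool is created. A minimal sketch as an eksctl cluster-config fragment (the node group name, instance type, and capacity are example values, not requirements); on AKS, the equivalent is the --labels and --node-taints options of az aks nodepool add:

```yaml
# eksctl config fragment (example values) -- label and taint applied at creation
managedNodeGroups:
  - name: nbnodes            # example node group name
    instanceType: m5.2xlarge # example instance type
    desiredCapacity: 3
    labels:
      nbpool: nbnodes        # label key/value from the steps above
    taints:
      - key: nbpool
        value: nbnodes
        effect: NoSchedule   # keeps non-tolerating pods off these nodes
```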
Install Cert-Manager. You can use the following command to install the Cert-Manager:
$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.0/cert-manager.yaml
For details, see https://cert-manager.io/docs/installation/
A workstation or VM running Linux with the following:
Configure kubectl to access the cluster.
Install the Azure or AWS CLI to access Azure or AWS resources.
Configure docker so that it can push images to the container registry.
Free space of approximately 8.5 GB at the location where you copy and extract the product installation TAR package file. If you use docker locally, approximately 8 GB should be available at the /var/lib/docker location so that the images can be loaded into the docker cache before being pushed to the container registry.
AKS-specific
A Kubernetes cluster in Azure Kubernetes Service (AKS) with multiple nodes. Using a separate node pool is recommended for the NetBackup servers, for MSDP Scaleout deployments, and for different media server objects. A separate node pool is required for the Snapshot Manager data plane.
Taints are set on the node pool when it is created in the cluster. Tolerations are set on the pods.
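A pod that should land on the tainted node pool then carries a matching toleration and node selector. A minimal pod spec fragment, using the example label key and value from the steps above:

```yaml
# Pod spec fragment (example) -- schedules onto nodes labeled/tainted nbpool=nbnodes
spec:
  nodeSelector:
    nbpool: nbnodes          # matches the node pool label
  tolerations:
    - key: nbpool
      operator: Equal
      value: nbnodes
      effect: NoSchedule     # tolerates the taint set on the pool
```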
Define storage classes for the primary server, the media servers, and MSDP Scaleout.
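For example, a storage class backed by Azure managed disks might look like the following sketch (the name and parameters are illustrative; choose the provisioner and SKU to match your performance requirements):

```yaml
# Example StorageClass for AKS (illustrative name and parameters)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nb-disk-premium
provisioner: disk.csi.azure.com      # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```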
Enable AKS Uptime SLA. AKS Uptime SLA is recommended for better resiliency. For information about AKS Uptime SLA and to enable it, see Azure Kubernetes Service (AKS) Uptime SLA.
Access to a container registry that the Kubernetes cluster can access, such as Azure Container Registry (ACR).
EKS-specific
A Kubernetes cluster in Amazon Elastic Kubernetes Service (EKS) with multiple nodes. Using a separate node group is recommended for the NetBackup servers, for MSDP Scaleout deployments, and for different media server objects. A separate node group is required for the Snapshot Manager data plane.
Taints are set on the node group when it is created in the cluster. Tolerations are set on the pods.
Access to a container registry that the Kubernetes cluster can access, such as Amazon Elastic Container Registry (ECR).
The AWS Load Balancer Controller add-on must be installed to use network load balancer capabilities.
The AWS EFS CSI driver must be installed to statically provision the PV and PVC in EFS for the primary server.
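Static provisioning for the primary server can be sketched as a PersistentVolume bound to an existing EFS file system; a PVC then selects it via spec.volumeName. The file system ID and capacity below are placeholders:

```yaml
# Example statically provisioned PV on EFS (ID and size are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nb-primary-catalog
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.amazonaws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS file system ID
```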
For more information on installing the Load Balancer Controller add-on and the EFS CSI driver, see About the Load Balancer service.