NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster
- Introduction to NetBackup on EKS
- Deployment with environment operators
- Assessing cluster configuration before deployment
- Deploying NetBackup
- Preparing the environment for NetBackup installation on EKS
- Recommendations of NetBackup deployment on EKS
- Limitations of NetBackup deployment on EKS
- About primary server CR and media server CR
- Monitoring the status of the CRs
- Updating the CRs
- Deleting the CRs
- Configuring NetBackup IT Analytics for NetBackup deployment
- Managing NetBackup deployment using VxUpdate
- Migrating the node group for primary or media servers
- Upgrading NetBackup
- Deploying Snapshot Manager
- Migration and upgrade of Snapshot Manager
- Deploying MSDP Scaleout
- Upgrading MSDP Scaleout
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager deployment
- Managing the Load Balancer service
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- About MSDP Scaleout maintenance
- Uninstalling MSDP Scaleout from EKS
- Uninstalling Snapshot Manager
- Troubleshooting
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Pod restart failure due to liveness probe time-out
- Socket connection failure
- Resolving an invalid license key issue
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolve an issue related to KMS database
- Resolve an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Resolving the primary server connection issue
- Primary pod is in pending state for a long duration
- Host mapping conflict in NetBackup
- NetBackup messaging queue broker takes more time to start
- Local connection is getting treated as insecure connection
- Issue with capacity licensing reporting which takes longer time
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Wrong EFS ID is provided in environment.yaml file
- Primary pod is in ContainerCreating state
- Webhook displays an error for PV not found
- Appendix A. CR template
Migration and updating of Snapshot Manager
Users can manually migrate a Snapshot Manager registered with NetBackup to the Amazon Elastic Kubernetes Service (EKS) cluster environment by performing the following steps:
Disable Snapshot Manager from NetBackup.
Stop services on the Snapshot Manager VM.
Create and attach a volume to the Snapshot Manager VM:
Create a new volume from the AWS cloud console in the same region as the NetBackup node group.
Note:
Copy and save the volume ID; it is required later.
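If the AWS CLI is used instead of the cloud console, the volume can be created as shown in the following sketch (the Availability Zone, size, and volume type are placeholders; adjust them to your environment):
aws ec2 create-volume \
  --availability-zone <zone in the NetBackup node group region> \
  --size <size in GiB, for example 30> \
  --volume-type gp2 \
  --query 'VolumeId' --output text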
Attach the volume to the Snapshot Manager VM. Create a file system and mount the volume:
For example:
mkdir /mnt/test/
mkfs.ext4 /dev/xvdg
mount /dev/xvdg /mnt/test/
mount | grep xvdg
To check the mount point, use the following command:
df -h
Copy the contents from /cloudpoint/mongodb to the new volume.
For example:
cp -r /cloudpoint/mongodb/* /mnt/test/
Copy the flexsnap.conf and bp.conf configuration files to the VM from where the cluster is accessible.
Unmount and detach the volume from the VM as follows:
To unmount: umount /mnt/test/
To detach the volume, navigate to the AWS console and detach it.
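Alternatively, if the AWS CLI is available, the volume can be detached without the console (a sketch; use the volume ID saved earlier):
aws ec2 detach-volume --volume-id <volume ID>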
From the Cluster Access VM, perform the following steps:
Create configuration maps using the following commands:
kubectl create cm <flexsnap configmap name> --from-file=<path to flexsnap.conf> -n <environment namespace>
kubectl create cm nbuconf --from-file=<path to bp.conf> -n <environment namespace>
Create Snapshot Manager credential Secret using the following command:
kubectl create secret generic cp-creds --from-literal=username='<username>' --from-literal=password='<password>' -n <environment-namespace>
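Optionally, verify that the configuration maps and the credential secret exist before proceeding (a quick check, not part of the documented procedure):
kubectl get configmaps,secrets -n <environment-namespace>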
Create static PV and PVCs using the volume.
Create a StorageClass for the EBS volume:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-reclaim
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Create a StorageClass for EFS:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: <EFS ID>
  directoryPerms: "700"
reclaimPolicy: Retain
volumeBindingMode: Immediate
Note:
It is recommended to use a separate EFS for Snapshot Manager deployment and primary server catalog.
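If a separate EFS is needed for Snapshot Manager, one way to create it from the AWS CLI is sketched below (the tag value is a placeholder; encryption and other settings should follow your own standards):
aws efs create-file-system --encrypted --tags Key=Name,Value=<Snapshot Manager EFS name>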
Create mongodb Persistent Volume and Persistent Volume Claim as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv                  # <PV name>
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: aws://us-east-2a/vol-00de395edff9fa8fb    # specify in the format aws://<region>/<volume ID>
    fsType: ext4
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-reclaim     # <storage class name>
  volumeMode: Filesystem
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: <PVC name>
    namespace: <netbackup-environment>
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongodb-pvc
  namespace: <netbackup-environment>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-reclaim     # <storage class name>
  resources:
    requests:
      storage: 30Gi
  volumeName: mongodb-pv            # <PV name>
For more information, refer to Leveraging AWS EBS for Kubernetes Persistent Volumes.
Create the mongodb Persistent Volume and Persistent Volume Claim by applying the above YAML files as follows:
kubectl apply -f <path_to_mongodb_pv.yaml>
kubectl apply -f <path_to_mongodb_pvc.yaml> -n <environment-namespace>
Ensure that the newly created PVC is bound to the PV.
For example:
kubectl get pvc -n <netbackup-environment> | grep mongodb
Edit the environment.yaml file to upgrade NetBackup (primary/media/MSDP) and add a section for Snapshot Manager as follows:
cpServer:
  - name: cp-cluster-deployment
    tag: <version>
    containerRegistry: <container registry>
    credential:
      secretName: cp-creds
    networkLoadBalancer:
      annotations:
      ipAddr: <IP address>
      fqdn: <FQDN>
    storage:
      log:
        capacity: <log size>
        storageClassName: <EFS based storage class>
      data:
        capacity: <mongodb PV size>
        storageClassName: <mongodb PV storage class>
    nodeSelector:
      #controlPlane:
        #nodepool: <Control node pool>
        #labelKey: <Control node label key>
        #labelValue: <Control node label value>
      dataPlane:
        nodepool: <Data node pool>
        labelKey: <Data node label key>
        labelValue: <Data node label value>
Apply the environment.yaml file using the following command:
kubectl apply -f <path to environment.yaml> -n <environment-namespace>
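To monitor the rollout after applying the file, a minimal check is shown below (pod names vary by deployment):
kubectl get pods -n <environment-namespace> --watch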
Re-register the Snapshot Manager from the Web UI using the edit option of the existing Snapshot Manager entry, and provide a reissue token if the Snapshot Manager name (FQDN/IP) is the same as that of the VM deployment. For more information on reissuing the token, refer to the following section:
Note:
Automatic cloud provider addition is skipped if an AWS plug-in addition was present before migration. Users can manually add the cloud provider with the specific region if needed. Ensure that the cluster region is configured with the plug-in, as it is required to obtain the cluster details and correctly calculate the capability of Snapshot Manager.
After migration, duplicate plug-in entries might be displayed in the NetBackup Web UI. Manually delete the duplicate plug-in entries from the /usr/openv/var/global/CloudPoint_plugin.conf file.
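One way to reach that file is to open a shell in the primary server pod, as sketched below (the pod name is a placeholder for your deployment):
kubectl get pods -n <environment-namespace>
kubectl exec -it <primary server pod> -n <environment-namespace> -- /bin/bash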
Users can update a few parameters of the existing deployed Snapshot Manager by making changes in the cpServer section of the environment.yaml file and applying it.
Only the data.capacity field can be changed in the cpServer section of the CR. For the update operation to work, set the value of the allowVolumeExpansion parameter to true in the storage classes used.
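For example, to increase data.capacity, first confirm that the storage class used for the data volume permits expansion, then re-apply the edited environment.yaml (a sketch using the gp2-reclaim class created earlier):
kubectl get storageclass gp2-reclaim -o jsonpath='{.allowVolumeExpansion}'
kubectl apply -f <path to environment.yaml> -n <environment-namespace>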
The following table lists the parameters that can be modified during update of Snapshot Manager:
| Parameters | Edit during update | Edit during upgrade |
|---|---|---|
| name | No | No |
| tag | No | Yes |
| containerRegistry | No | Yes |
| credential: secretName | No | No |
| networkLoadBalancer: annotations | No | Yes |
| networkLoadBalancer: fqdn | No | No |
| networkLoadBalancer: ipAddr | No | Yes |
| data.capacity | Yes | Yes |
| data.storageClassName | No | No |
| cpServer.NodeSelector.ControlPlane | No | Yes |
| cpServer.NodeSelector.DataPlane | No | Yes |
| proxySettings: vx_http_proxy | No | Yes |
| proxySettings: vx_https_proxy | No | Yes |
| proxySettings: vx_no_proxy | No | Yes |