NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster
- Introduction to NetBackup on EKS
- Deployment with environment operators
- Assessing cluster configuration before deployment
- Deploying NetBackup
- Preparing the environment for NetBackup installation on EKS
- Recommendations of NetBackup deployment on EKS
- Limitations of NetBackup deployment on EKS
- About primary server CR and media server CR
- Monitoring the status of the CRs
- Updating the CRs
- Deleting the CRs
- Configuring NetBackup IT Analytics for NetBackup deployment
- Managing NetBackup deployment using VxUpdate
- Migrating the node group for primary or media servers
- Upgrading NetBackup
- Deploying Snapshot Manager
- Migration and upgrade of Snapshot Manager
- Deploying MSDP Scaleout
- Upgrading MSDP Scaleout
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager deployment
- Managing the Load Balancer service
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- About MSDP Scaleout maintenance
- Uninstalling MSDP Scaleout from EKS
- Uninstalling Snapshot Manager
- Troubleshooting
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Pod restart failure due to liveness probe time-out
- Socket connection failure
- Resolving an invalid license key issue
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolving an issue related to KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Resolving the primary server connection issue
- Primary pod is in pending state for a long duration
- Host mapping conflict in NetBackup
- NetBackup messaging queue broker takes more time to start
- Local connection is getting treated as insecure connection
- Issue with capacity licensing reporting which takes a long time
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Wrong EFS ID is provided in environment.yaml file
- Primary pod is in ContainerCreating state
- Webhook displays an error for PV not found
- Appendix A. CR template
Allocating static PV for Primary and Media pods
When you want to use a disk with specific performance parameters, you can create the PV and PVC statically. You must allocate the static PVs and PVCs before deploying the NetBackup server for the first time.
To allocate static PV for Media and Primary pods
- Create a storage class in the cluster.
See How does the Config-Checker utility work.
This newly created storage class name is used while creating the PVs and PVCs, and it must be specified for the catalog, data, and log volumes in the primary and mediaServers sections of the environment CR at the time of deployment.
For more information on creating the storage class, see Storage class.
For example,
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
  fsType: ext4
  type: gp2
For more information about the static provisioning for EFS, see Static Provisioning.
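The gp2 example above applies to the EBS-backed data and log volumes. If you also want a dedicated storage class for the EFS-backed catalog volume of the primary server, static EFS provisioning requires no parameters. The following is a minimal sketch only; the name efs-sc is an example, and it assumes the Amazon EFS CSI driver is installed in the cluster:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com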
- Calculate the number of disks required.
The following persistent volumes are required by the media pods: one data volume and one log volume disk per replica of the media server. The primary server also requires a data volume and a log volume.
Use the following format to form the PVC names.
For primary server:
data-<resourceNamePrefix_of_primary>-primary-0
logs-<resourceNamePrefix_of_primary>-primary-0
For media server:
data-<resourceNamePrefix_of_media>-media-<media server replica number; count starts from 0>
logs-<resourceNamePrefix_of_media>-media-<media server replica number; count starts from 0>
Example 1
Suppose you want to deploy a media server with a replica count of 3.
The following are the PVC names, assuming resourceNamePrefix_of_primary is testprimary and resourceNamePrefix_of_media is testmedia.
For this scenario, you must create a total of 8 disks, 8 PVs, and 8 PVCs:
2 disks, 2 PVs, and 2 PVCs for the primary server.
6 disks, 6 PVs, and 6 PVCs for the media server.
The following are the names for the primary server volumes.
For data:
data-testprimary-primary-0
For logs:
logs-testprimary-primary-0
The following are the names for the media server volumes.
For data:
data-testmedia-media-0
data-testmedia-media-1
data-testmedia-media-2
For logs:
logs-testmedia-media-0
logs-testmedia-media-1
logs-testmedia-media-2
Example 2
Suppose you want to deploy a media server with a replica count of 5.
The following are the PVC names, assuming resourceNamePrefix_of_primary is testprimary and resourceNamePrefix_of_media is testmedia.
For this scenario, you must create a total of 12 disks, 12 PVs, and 12 PVCs:
2 disks, 2 PVs, and 2 PVCs for the primary server.
10 disks, 10 PVs, and 10 PVCs for the media server.
The following are the names for the primary server volumes.
For data:
data-testprimary-primary-0
For logs:
logs-testprimary-primary-0
The following are the names for the media server volumes.
For data:
data-testmedia-media-0
data-testmedia-media-1
data-testmedia-media-2
data-testmedia-media-3
data-testmedia-media-4
For logs:
logs-testmedia-media-0
logs-testmedia-media-1
logs-testmedia-media-2
logs-testmedia-media-3
logs-testmedia-media-4
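If you script the disk, PV, and PVC creation, these PVC names can be generated instead of typed by hand. The following is a minimal bash sketch, assuming the testmedia prefix and the replica count of 5 from Example 2:
# Print the data and log PVC names for media server replicas 0 through 4.
for i in $(seq 0 4); do
  echo "data-testmedia-media-${i}"
  echo "logs-testmedia-media-${i}"
done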
- Create the required number of AWS EBS volumes and save the VolumeId of each newly created volume.
For more information on creating EBS volumes, see EBS volumes.
(For primary server volumes) Create the required number of EFS file systems. You can use a single EFS file system to mount the catalog of the primary server. For example, the volumeHandle in the PersistentVolume spec is as follows:
<file_system_id>:/catalog
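For illustration only, a statically provisioned PV that mounts the /catalog directory of an EFS file system through the EFS CSI driver might look like the following sketch. The PV name is a placeholder, efs-sc refers to the example storage class sketched earlier, and the claimRef uses the catalog PVC name from the examples above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: catalog-efs
spec:
  capacity:
    storage: 128Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # File system ID and path, in the format described above.
    volumeHandle: <file_system_id>:/catalog
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: catalog-testprimary-primary-0
    namespace: test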
- Create a PV for each disk.
To create the PVs, specify the storage class created earlier and the VolumeId (the ID of the EBS volume obtained in step 3). The PV must be created with the claimRef field, which provides the PVC name and its corresponding namespace.
For example, suppose you are creating a PV for the catalog volume, the required storage is 128 Gi, and the namespace is test. The PVC named catalog-testprimary-primary-0 is linked to this PV when the PVC is created in the namespace test.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: catalog
spec:
  accessModes:
    - ReadWriteMany
  awsElasticBlockStore:
    fsType: xfs
    volumeID: aws://us-east-2c/vol-xxxxxxxxxxxxxxxxx
  capacity:
    storage: 128Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-retain
  volumeMode: Filesystem
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: catalog-testprimary-primary-0
    namespace: test
- Create the PVC with the correct PVC name (step 2), storage class, and storage size.
For example,
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: catalog-testprimary-primary-0
  namespace: test
spec:
  storageClassName: gp2-retain
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 128Gi
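Before deploying the operator, you can verify that each PVC is bound to its intended PV. For example, with the namespace test used above:
kubectl get pv
kubectl get pvc -n test
# Every PVC should report STATUS as Bound and show the matching PV in the VOLUME column.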
- Deploy the Operator.
- Use the previously created storage class names for the volumes in the primary and mediaServers sections of the environment CR spec, and then deploy the environment CR.
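Deploying the environment CR is then a standard apply. For example, assuming the CR is saved as environment.yaml and the deployment namespace is test:
kubectl apply -f environment.yaml -n test
# Watch the primary and media server pods start and attach the pre-created PVCs.
kubectl get pods -n test -w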