NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster
- Introduction to NetBackup on EKS
- Deployment with environment operators
- Assessing cluster configuration before deployment
- Deploying NetBackup
- Preparing the environment for NetBackup installation on EKS
- Recommendations of NetBackup deployment on EKS
- Limitations of NetBackup deployment on EKS
- About primary server CR and media server CR
- Monitoring the status of the CRs
- Updating the CRs
- Deleting the CRs
- Configuring NetBackup IT Analytics for NetBackup deployment
- Managing NetBackup deployment using VxUpdate
- Migrating the node group for primary or media servers
- Upgrading NetBackup
- Deploying Snapshot Manager
- Migration and upgrade of Snapshot Manager
- Deploying MSDP Scaleout
- Upgrading MSDP Scaleout
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager deployment
- Managing the Load Balancer service
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- About MSDP Scaleout maintenance
- Uninstalling MSDP Scaleout from EKS
- Uninstalling Snapshot Manager
- Troubleshooting
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Pod restart failure due to liveness probe time-out
- Socket connection failure
- Resolving an invalid license key issue
- Resolving an issue where an external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolving an issue related to the KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Resolving the primary server connection issue
- Primary pod is in pending state for a long duration
- Host mapping conflict in NetBackup
- NetBackup messaging queue broker takes more time to start
- Local connection is getting treated as insecure connection
- Issue with capacity licensing reporting which takes a longer time
- Backing up data from the Primary server's /mnt/nbdata/ directory fails with the primary server as a client
- Wrong EFS ID is provided in environment.yaml file
- Primary pod is in ContainerCreating state
- Webhook displays an error for PV not found
- Appendix A. CR template
Wrong EFS ID is provided in environment.yaml file
The following error message is displayed when a wrong EFS ID is provided in the environment.yaml file:
"samples/environment.yaml": admission webhook "environment2-validating-webhook.netbackup.veritas.com" denied the request: Environment change rejected by validating webhook: EFS ID provided for Catalog storage is not matching with EFS ID of already created persistent volume for Primary servers Catalog volume. Old EFS ID fs-0bf084568203f1c27 : New EFS ID fs-0bf084568203f1c29
The above error can appear due to the following reasons:
- During an upgrade, if the EFS ID provided is different from the EFS ID used in the previous version's deployment.
- During a fresh deployment, if the user manually creates the PV and PVC with one EFS ID and provides a different EFS ID in the environment.yaml file.
To resolve this issue, perform the following:
- Identify the correct EFS ID used for the PV and PVC.
The previously used EFS ID can be retrieved from the PV and PVC by using the following steps (a consolidated command sketch follows this procedure):
List the PVCs in the NetBackup namespace using the following command:
kubectl get pvc -n <namespace>
From the output, copy the name of the catalog PVC, which is of the following format:
catalog-<resource name prefix>-primary-0
Describe the catalog PVC using the following command:
kubectl describe pvc <pvc name> -n <namespace>
Note down the value of the Volume field from the output.
Describe the PV using the following command:
kubectl describe pv <value of the Volume field obtained in the previous step>
Note down the value of the VolumeHandle field from the output. This is the previously used EFS ID.
- Provide the correct EFS ID in the environment.yaml file and apply the environment.yaml file using the following command:
kubectl apply -f environment.yaml
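The following is a minimal shell sketch that consolidates the above procedure. The namespace netbackup and the catalog PVC name pattern used below are assumptions; substitute the values from your own deployment.
# Assumed namespace; replace with the namespace of your NetBackup deployment.
NS=netbackup
# Locate the catalog PVC (catalog-<resource name prefix>-primary-0).
PVC=$(kubectl get pvc -n "$NS" -o name | grep 'catalog-.*-primary-0')
# The PVC's .spec.volumeName is the bound PV; the PV's .spec.csi.volumeHandle
# carries the previously used EFS ID. For access point based volumes the handle
# may appear as fs-xxxxxxxx::fsap-xxxxxxxx; the EFS ID is the fs- portion.
PV=$(kubectl get -n "$NS" "$PVC" -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'
# After correcting the EFS ID in environment.yaml, re-apply it. If the ID now
# matches the existing PV, the validating webhook accepts the request.
kubectl apply -f environment.yaml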