NetBackup™ Deployment Guide for Azure Kubernetes Services (AKS) Cluster
- Introduction to NetBackup on AKS
- Deployment with environment operators
- Assessing cluster configuration before deployment
- Deploying NetBackup
- Preparing the environment for NetBackup installation on AKS
- Recommendations of NetBackup deployment on AKS
- Limitations of NetBackup deployment on AKS
- About primary server CR and media server CR
- Monitoring the status of the CRs
- Updating the CRs
- Deleting the CRs
- Configuring NetBackup IT Analytics for NetBackup deployment
- Managing NetBackup deployment using VxUpdate
- Migrating the node pool for primary or media servers
- Upgrading NetBackup
- Deploying Snapshot Manager
- Migration and upgrade of Snapshot Manager
- Deploying MSDP Scaleout
- Upgrading MSDP Scaleout
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager deployment
- Managing the Load Balancer service
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- About MSDP Scaleout maintenance
- Uninstalling MSDP Scaleout from AKS
- Uninstalling Snapshot Manager
- Troubleshooting
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Pod restart failure due to liveness probe time-out
- Socket connection failure
- Resolving an invalid license key issue
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolve an issue related to KMS database
- Resolve an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Data migration unsuccessful even after changing the storage class through the storage yaml file
- Host validation failed on the target host
- Primary pod is in pending state for a long duration
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Host mapping conflict in NetBackup
- NetBackup messaging queue broker takes more time to start
- Local connection is getting treated as insecure connection
- Issue with capacity licensing reporting taking a long time
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Primary pod goes into non-ready state
- Appendix A. CR template
Resolving an issue where the primary server or media server deployment does not proceed
The primary server or media server deployment does not proceed even after the primary server or media server CR spec is applied, and no other child resources are created. It is possible that the Config-Checker job has failed some of the configuration checks.
To resolve an issue where the primary server or media server deployment does not proceed
- Check the Config-Checker status (Configcheckerstatus) reported in the primary server or media server CR status using the kubectl describe <PrimaryServer/MediaServer> <CR name> -n <namespace> command (see the example commands after this procedure).
If the state is failed, check the Config-Checker pod logs.
- Retrieve the Config-Checker pod logs using the kubectl logs <config-checker-pod-name> -n <operator-namespace> command.
The Config-Checker pod name has the following format:
<serverType>-configchecker-<configcheckermode>-<randomID>. For example, the Config-Checker pod for a primary server with configcheckermode = default is named primary-configchecker-default-dhg34.
- Depending on the error in the pod logs, perform the required steps and edit the environment CR to resolve the issue.
- Data migration jobs create pods that run before the primary server is deployed. The data migration pod persists for one hour after the migration only if the data migration job failed. Check the data migration logs using the following command:
kubectl logs <migration-pod-name> -n <netbackup-environment-namespace>
You can copy the logs to retain them even after the job pod is deleted, using the following command (see the example after this procedure):
kubectl logs <migration-pod-name> -n <netbackup-environment-namespace> > jobpod.log
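For example, the following command sequence checks the Config-Checker status and retrieves the Config-Checker pod logs. The CR name (environment-primary), the namespaces (nb-namespace for the environment and nb-operator for the operator), and the pod name are hypothetical; substitute the names from your deployment:
kubectl describe PrimaryServer environment-primary -n nb-namespace
kubectl get pods -n nb-operator | grep configchecker
kubectl logs primary-configchecker-default-dhg34 -n nb-operator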
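Similarly, to locate a data migration pod and save its logs before the pod is removed, assuming a hypothetical pod name that contains migration and an environment namespace of nb-namespace:
kubectl get pods -n nb-namespace | grep migration
kubectl logs migration-job-xyz12 -n nb-namespace > jobpod.log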