NetBackup™ Deployment Guide for Azure Kubernetes Services (AKS) Cluster
Taint, Toleration, and Node affinity related issues in cpServer
If any of the following cpServer control pool pods is in the Pending state, perform the steps that follow:
flexsnap-agent, flexsnap-api-gateway, flexsnap-certauth, flexsnap-coordinator, flexsnap-idm, flexsnap-nginx, flexsnap-notification, flexsnap-scheduler, flexsnap-, flexsnap-, flexsnap-fluentd-, flexsnap-fluentd
Obtain the pending pod's toleration and affinity status using the following command:
kubectl get pods <pod name>
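The kubectl get pods output shows only that the pod is Pending. To inspect the pod's tolerations and node affinity in detail, the following commands can also be used (a sketch; the -n <namespace> option is an assumption, substitute the namespace used for your Snapshot Manager deployment):
kubectl describe pod <pod name> -n <namespace>
kubectl get pod <pod name> -n <namespace> -o jsonpath='{.spec.tolerations}'
kubectl get pod <pod name> -n <namespace> -o jsonpath='{.spec.affinity.nodeAffinity}'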
Check if the node-affinity and tolerations of the pod match:
- the fields listed in the corresponding section, or in the environment.yaml file
- the taint and label of the node pool, mentioned in the corresponding section, or in the environment.yaml file
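To compare against the actual node pool configuration, the taints and labels currently set on the node pool can be listed as shown below (a sketch using standard Azure CLI and kubectl queries; adjust the resource names to your environment):
az aks nodepool show \
  --resource-group <resource_group> \
  --cluster-name <cluster_name> \
  --name <nodepool_name> \
  --query "{taints:nodeTaints, labels:nodeLabels}"
kubectl describe node <node name> | grep -i taints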
If all the above fields are correct and matching and the control pool pod is still in the Pending state, the issue may be that all the nodes in the node pool are running at maximum capacity and cannot accommodate new pods. In that case, the node pool must be scaled appropriately.
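For example, the node pool can be scaled out manually, or the cluster autoscaler can be enabled for it (a sketch; the node counts shown are placeholders to be replaced with values suitable for your workload):
az aks nodepool scale \
  --resource-group <resource_group> \
  --cluster-name <cluster_name> \
  --name <nodepool_name> \
  --node-count <node_count>
az aks nodepool update \
  --resource-group <resource_group> \
  --cluster-name <cluster_name> \
  --name <nodepool_name> \
  --enable-cluster-autoscaler --min-count <min_count> --max-count <max_count>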
If any of the following cpServer data pool pods is in the Pending state, perform the steps that follow:
flexsnap-listener, flexsnap-workflow, flexsnap-datamover
Obtain the pending pod's toleration and affinity status using the following command:
kubectl get pods <pod name>
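To see why the scheduler could not place the pod, the pod's events can also be checked (a sketch; -n <namespace> is an assumption, replace it with your deployment namespace). A FailedScheduling event typically states whether an unmatched taint or node affinity rule is the cause:
kubectl get pods -n <namespace> --field-selector=status.phase=Pending
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod name>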
Check if the node-affinity and tolerations of the pod match:
- the fields listed in the corresponding section in the environment.yaml file
- the taint and label of the node pool, mentioned in the corresponding section in the environment.yaml file
If all the above fields are correct and matching and the data pool pod is still in the Pending state, the issue may be that all the nodes in the node pool are running at maximum capacity and cannot accommodate new pods. In that case, the node pool must be scaled appropriately.
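Whether the node pool is running at maximum capacity can be verified by comparing the allocated resources with the node's allocatable capacity, for example (a sketch; kubectl top requires the metrics server, which AKS clusters normally include):
kubectl describe node <node name> | grep -A 8 "Allocated resources"
kubectl top nodes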
Obtain the pending pod's toleration and affinity status using the following command:
kubectl get pods <pod name>
Check if the node-affinity and tolerations of the pod match:
- the fields listed in the environment.yaml file
- the taint and label of the node pool, as mentioned in the above values
If all the above fields are correct and matching and the pod is still in the Pending state, the issue may be that all the nodes in the node pool are running at maximum capacity and cannot accommodate new pods. In that case, the node pool must be scaled appropriately.
If the nodes are configured with incorrect taint and label values, the user can edit them using the following commands:
az aks nodepool update \
  --resource-group <resource_group> \
  --cluster-name <cluster_name> \
  --name <nodepool_name> \
  --node-taints <key>=<value>:<effect> \
  --no-wait
az aks nodepool update \
  --resource-group <resource_group> \
  --cluster-name <cluster_name> \
  --name <nodepool_name> \
  --labels <key>=<value>
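After the update completes, the new taint and label values can be verified on the nodes, for example (a sketch; the agentpool node label is applied by AKS to the nodes of each node pool):
kubectl get nodes -l agentpool=<nodepool_name> -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
kubectl get nodes -l agentpool=<nodepool_name> --show-labels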