NetBackup™ Deployment Guide for Kubernetes Clusters
Last Published:
2025-03-18
Product(s):
NetBackup & Alta Data Protection (11.0)
- Introduction
- Section I. Configurations
- Prerequisites
- Prerequisites for Kubernetes cluster configuration
- Prerequisites for Cloud Scale configuration
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Prerequisites
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing logging
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- MSDP Scaleout configuration
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Pod status field shows as pending
If the pod's Status field shows Pending, Kubernetes is unable to schedule the pod. To check the pod status, use the following command:
$ kubectl get all -n netbackup-operator-system
The output is something like:
NAME                                                         READY   STATUS    RESTARTS   AGE
pod/msdp-operator-controller-manager-65d8fd7c4d-bsgms        2/2     Running   0          12m
pod/netbackup-operator-controller-manager-6c9dc8d87f-pq8mr   0/2     Pending   0          15s
For more details, use the following pod describe command:
$ kubectl describe pod/netbackup-operator-controller-manager-6c9dc8d87f-pq8mr -n netbackup-operator-system
The output is something like:
Type     Reason            Age                  Message
----     ------            ---                  -------
Warning  FailedScheduling  56s (x3 over 2m24s)  0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.
To resolve this issue, edit the operator deployment using the following command and verify that the nodeSelector matches a label present on your worker nodes:
$ kubectl edit deployment netbackup-operator-controller-manager -n netbackup-operator-system
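As a sketch only, the relevant part of the deployment's pod spec looks like the following. The label key and value (agentpool: nbupool) and the toleration are examples, not values from this guide; use the labels and taints actually present on your nodes, which you can list with kubectl get nodes --show-labels and kubectl describe node.

```yaml
# Hypothetical excerpt of the operator deployment's pod spec.
# The label key/value below is an example only; it must match
# a label that exists on the nodes you want to schedule onto.
spec:
  template:
    spec:
      nodeSelector:
        agentpool: nbupool
      # If the target nodes are tainted (as in the FailedScheduling
      # event above), the pod also needs a matching toleration.
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
```

If the nodeSelector references a label that no node carries, the scheduler reports exactly the "didn't match Pod's node affinity/selector" message shown in the describe output above.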