NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolve an issue related to KMS database
- Resolve an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting, which takes a long time
- Local connection is getting treated as insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackOff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Monitoring the application health
Kubernetes Liveness and Readiness probes are used to monitor and control the health of the NetBackup primary server and media server pods. These probes, collectively called health probes, continually check the availability and readiness of the pods and take designated actions if an issue is detected. The kubelet uses liveness probes to know when to restart a container, and readiness probes to know when a container is ready. For more information, refer to the Kubernetes documentation:
Configure Liveness, Readiness and Startup Probes | Kubernetes
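If you want to confirm the probe settings that are applied to a running pod, you can inspect the pod specification with kubectl. The following commands are only an illustrative sketch; the pod name, namespace, and container index are placeholders for your deployment:
kubectl get pod <primary-pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].readinessProbe}'
kubectl get pod <primary-pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].livenessProbe}'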
The health probes monitor the following for the NetBackup deployment:
- Mount directories are present for the data/catalog at /mnt/nbdata and for the log volume at /mnt/nblogs.
- bp.conf is present at /usr/openv/netbackup.
- NetBackup services are running as expected.
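As a quick manual spot check of the same conditions, you can list these paths from inside the pod. This is only an example sketch; the pod name and namespace are placeholders:
kubectl exec -n <namespace> <primary-pod-name> -- ls -d /mnt/nbdata /mnt/nblogs /usr/openv/netbackup/bp.conf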
The following table describes the actions and time intervals configured for the probes:
Table: Actions and time intervals configured for the probes

| Action | Description | Probe name | Primary server (seconds) | Media server (seconds) | Request router (seconds) |
|---|---|---|---|---|---|
| Initial delay | This is the delay that tells kubelet to wait for a given number of seconds before performing the first probe. | Readiness Probe | 120 | 0 | 60 |
| | | Liveness Probe | 300 | 90 | 120 |
| Periodic execution time | This action specifies that kubelet should perform a probe every given number of seconds. | Readiness Probe | 30 | 30 | 60 |
| | | Liveness Probe | 90 | 90 | 60 |
| Threshold for failure retries | This action specifies that kubelet should retry the probe a given number of times if a probe fails, and then restart the container. | Readiness Probe | 1 | 1 | 1 |
| | | Liveness Probe | 5 | 5 | 5 |
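The effective delay, period, and failure-threshold values for a pod can also be confirmed with kubectl describe. For example, assuming a primary server pod:
kubectl describe pod <primary-pod-name> -n <namespace> | grep -iE 'liveness|readiness'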
Health probes are run using the nb-health command. If you want to run the nb-health command manually, the following options are available:
- disable: Disables the health check and marks the pod as not ready (0/1).
- enable: Enables a previously disabled health check in the pod. The pod is marked as ready (1/1) again if all the NetBackup health checks pass.
- deactivate: Deactivates the health probe functionality in the pod. The pod remains in the ready state (1/1), which avoids pod restarts due to liveness or readiness probe failures. This is a temporary step and is not recommended for regular use.
- activate: Activates the health probe functionality that was deactivated earlier using the deactivate option.
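For example, to temporarily deactivate and later reactivate the health probe functionality without opening an interactive shell, you can invoke nb-health through kubectl exec. This is a sketch that assumes the option names described above; the pod name and namespace are placeholders:
kubectl exec -n <namespace> <primary-pod-name> -- /opt/veritas/vxapp-manage/nb-health deactivate
kubectl exec -n <namespace> <primary-pod-name> -- /opt/veritas/vxapp-manage/nb-health activate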
You can manually disable or enable the probes if required. For example, if you need to exec into the pod and restart the NetBackup services, disable the health probes before restarting the services, and then enable them again after the services have restarted successfully. If you do not disable the health probes during this process, the pod may restart because of the failed health probes. A combined example is shown after the following procedure.
Note:
It is recommended to disable the health probes only temporarily, for troubleshooting purposes. While the probes are disabled, the web UI is not accessible for the primary server pod and the media server pods cannot be scaled up. The health probes must be enabled again for NetBackup to run successfully.
To disable or enable the health probes
- Execute the following command in the Primary or media server pod as required:
kubectl exec -it -n <namespace> <primary/media-server-pod-name> -- /bin/bash
- To disable the probes, run the /opt/veritas/vxapp-manage/nb-health disable command. The pod goes into the not ready (0/1) state.
- To enable the probes, run the /opt/veritas/vxapp-manage/nb-health enable command. The pod returns to the ready (1/1) state.
You can check the pod events in case of probe failures. To get more details, use the kubectl describe pod <primary/media-pod-name> -n <namespace> command.
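The following is a minimal sketch of the service restart workflow described earlier, combining the disable and enable steps. The pod name and namespace are placeholders, and the service stop/start commands are left as a comment because they depend on your NetBackup release:
kubectl exec -it -n <namespace> <primary-pod-name> -- /bin/bash
/opt/veritas/vxapp-manage/nb-health disable
# Stop and restart the NetBackup services here, using the commands for your NetBackup release.
/opt/veritas/vxapp-manage/nb-health enable
exit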
To disable or enable the request router health probes
- Execute the following command in the request router pod as required:
kubectl exec -it -n <namespace> <request-router-pod-name> -- /bin/bash
- To disable the probes, run the /opt/veritas/vxapp-manage/health.sh disable command. The pod goes into the not ready (0/1) state.
- To enable the probes, run the /opt/veritas/vxapp-manage/health.sh enable command. The pod returns to the ready (1/1) state.
You can check the pod events in case of probe failures. To get more details, use the kubectl describe pod <request-router-pod-name> -n <namespace> command.