NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- Section IV. Maintenance
- MSDP Scaleout Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for Primary and Media servers
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where an external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolving an issue related to the KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes a long time
- Local connection is treated as an insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from the primary server's /mnt/nbdata/ directory fails with the primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackOff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Telemetry reporting
Telemetry reporting entries for the NetBackup deployment on AKS/EKS are indicated with the text.
By default, the telemetry data is saved at the /var/veritas/nbtelemetry/ location. This default location is not persisted across pod restarts. If you want to save the telemetry data to a persisted location, do the following:
1. Exec into the primary server or media server pod using the following command:
kubectl exec -it -n <namespace> <primary/media_server_pod_name> -- /bin/bash
2. Execute the telemetry command /usr/openv/netbackup/bin/nbtelemetry with the --dataset-path=DESIRED_PATH option.
Note:
Here DESIRED_PATH must be /mnt/nbdata or /mnt/nblogs.
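For example, the full sequence might look like the following sketch, assuming a namespace named netbackup and a primary server pod named nb-primary-0 (both are placeholder names; substitute the values from your own deployment):
# Open a shell inside the primary server pod (namespace and pod name are examples)
kubectl exec -it -n netbackup nb-primary-0 -- /bin/bash
# Inside the pod, write the telemetry data to the persisted catalog volume
/usr/openv/netbackup/bin/nbtelemetry --dataset-path=/mnt/nbdata
Because /mnt/nbdata is backed by a persistent volume, telemetry data written there survives pod restarts, unlike data in the default /var/veritas/nbtelemetry/ location.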