NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary servers, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where the external IP address is not assigned to a NetBackup server's load balancer services
- Resolving an issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolving an issue related to KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting taking a long time
- Local connection is treated as an insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackOff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Getting EEB information from an image, a running container, or persistent data
To view the list of installed EEBs, run the nbbuilder script provided in the EEB file archive.
(AKS-specific): $ bash nbbuilder.sh -registry_name testregistry.azurecr.io -list_installed_eebs -nb_src_tag=10.3 -msdp_src_tag=18.0
(EKS-specific): $ bash nbbuilder.sh -registry_name <account id>.dkr.ecr.<region>.amazonaws.com -list_installed_eebs -nb_src_tag=10.3 -msdp_src_tag=18.0
Sample output:
Wed Feb 2 20:48:13 UTC 2022: Listing strings for EEBs installed in <account id>.dkr.ecr.<region>.amazonaws.com/netbackup/main:10.3.
EEB_NetBackup_10.3Beta6_PET3980928_SET3992004_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992021_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992022_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992023_EEB1
EEB_NetBackup_10.3Beta6_PET3992020_SET3992019_EEB2
EEB_NetBackup_10.3Beta6_PET3980928_SET3992009_EEB2
EEB_NetBackup_10.3Beta6_PET3980928_SET3992016_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992017_EEB1
Wed Feb 2 20:48:13 UTC 2022: End
Wed Feb 2 20:48:13 UTC 2022: Listing strings for EEBs installed in <account id>.dkr.ecr.<region>.amazonaws.com/uss-controller:18.0.
EEB_MSDP_18.0_PET3980928_SET3992007_EEB1
EEB_MSDP_18.0_PET3992020_SET3992019_EEB2
EEB_MSDP_18.0_PET3980928_SET3992010_EEB2
Wed Feb 2 20:48:14 UTC 2022: End
Wed Feb 2 20:48:14 UTC 2022: Listing strings for EEBs installed in <account id>.dkr.ecr.<region>.amazonaws.com/uss-engine:18.0.
EEB_MSDP_18.0_PET3980928_SET3992006_EEB1
EEB_MSDP_18.0_PET3980928_SET3992023_EEB1
EEB_MSDP_18.0_PET3992020_SET3992019_EEB2
EEB_MSDP_18.0_PET3980928_SET3992009_EEB2
EEB_MSDP_18.0_PET3980928_SET3992010_EEB2
EEB_MSDP_18.0_PET3980928_SET3992018_EEB1
Wed Feb 2 20:48:14 UTC 2022: End
Wed Feb 2 20:48:14 UTC 2022: Listing strings for EEBs installed in <account id>.dkr.ecr.<region>.amazonaws.com/uss-mds:18.0.
EEB_MSDP_18.0_PET3980928_SET3992008_EEB1
EEB_MSDP_18.0_PET3992020_SET3992019_EEB2
EEB_MSDP_18.0_PET3980928_SET3992010_EEB2
Wed Feb 2 20:48:15 UTC 2022: End
Alternatively, if the nbbuilder script is not available, you can view the installed EEBs by executing the following command:
$ docker run --rm <image_name>:<image_tag> cat /usr/openv/pack/pack.summary
Sample output:
EEB_NetBackup_10.3Beta6_PET3980928_SET3992004_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992021_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992022_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992023_EEB1
EEB_NetBackup_10.3Beta6_PET3992020_SET3992019_EEB2
EEB_NetBackup_10.3Beta6_PET3980928_SET3992009_EEB2
EEB_NetBackup_10.3Beta6_PET3980928_SET3992016_EEB1
EEB_NetBackup_10.3Beta6_PET3980928_SET3992017_EEB1
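To check whether one specific EEB is present in an image, you can filter the same file with grep. This is a minimal sketch; the image name, tag, and EEB identifier are placeholders to replace with your own values:
$ docker run --rm <image_name>:<image_tag> cat /usr/openv/pack/pack.summary | grep <EEB_identifier>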
To view all EEBs installed in a running container, run:
$ kubectl exec --stdin --tty <primary-pod-name> -n <namespace> -- cat /usr/openv/pack/pack.summary
Note:
The pack directory may be located at a different path in each of the uss-* containers. For example: /uss-controller/pack, /uss-mds/pack, and /uss-proxy/pack.
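If you need to check each uss-* container in one pass, a small shell loop can help. The following is a minimal sketch, not part of the product tooling: the pod names are placeholders, and it assumes the summary file is named pack.summary under each of the pack directories listed above. Adjust the names and paths to match your deployment.
ns="<namespace>"                      # replace with your namespace
# Assumed pod names, mapped to the pack summary path assumed for each one.
declare -A pods=(
  ["<uss-controller-pod>"]="/uss-controller/pack/pack.summary"
  ["<uss-mds-pod>"]="/uss-mds/pack/pack.summary"
  ["<uss-proxy-pod>"]="/uss-proxy/pack/pack.summary"
)
for pod in "${!pods[@]}"; do
  echo "== ${pod} =="
  kubectl exec "${pod}" -n "${ns}" -- cat "${pods[${pod}]}"
done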
To view the list of data EEBs installed in persistent data, run the following against a running container:
$ kubectl exec --stdin --tty <primary-pod-name> -n <namespace> -- cat /mnt/nbdata/usr/openv/pack/pack.summary
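Because the image and the persistent data report their EEB sets separately, it can be useful to compare the two summaries. The following is a minimal sketch using the two paths shown above; the pod name and namespace are placeholders:
$ kubectl exec <primary-pod-name> -n <namespace> -- cat /usr/openv/pack/pack.summary > image_eebs.txt
$ kubectl exec <primary-pod-name> -n <namespace> -- cat /mnt/nbdata/usr/openv/pack/pack.summary > data_eebs.txt
$ diff image_eebs.txt data_eebs.txt
An empty diff means both locations report the same EEB strings; any output shows EEBs recorded in one location but not the other.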