NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolve an issue related to KMS database
- Resolve an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting that takes a long time
- Local connection is getting treated as insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Cloud LSU disaster recovery
Scenario 1: MSDP Scaleout and its data are lost, and the NetBackup primary server remains unchanged and works well
- Redeploy MSDP Scaleout on a cluster by using the same CR parameters and the NetBackup reissue token.
- If the LSU cloud alias does not exist, you can use the following command to add it.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in <instance-name> -sts <storage-server-name> -lsu_name <lsu-name>
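For illustration only, a hypothetical invocation might look like the following. It assumes an AWS S3 cloud instance named amazon.com; the storage server and LSU names are placeholders, so substitute the values from your original deployment.
# Hypothetical values for illustration; use the names from the original MSDP Scaleout CR.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in amazon.com -sts msdp-ss1.example.com -lsu_name cloud-lsu-1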
When MSDP Scaleout is up and running, reuse the cloud LSU on the NetBackup primary server.
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig -storage_server <STORAGESERVERNAME> -stype PureDisk -configlist <configuration file>
Credentials, bucket name, and sub bucket name must be the same as the recovered Cloud LSU configuration in the previous MSDP Scaleout deployment.
Configuration file template:
V7.5 "operation" "reuse-lsu-cloud" string V7.5 "lsuName" "LSUNAME" string V7.5 "cmsCredName" "XXX" string V7.5 "lsuCloudAlias" "<STORAGESERVERNAME_LSUNAME>" string V7.5 "lsuCloudBucketName" "XXX" string V7.5 "lsuCloudBucketSubName" "XXX" string V7.5 "lsuKmsServerName" "XXX" string
Note:
For Veritas Alta Recovery Vault Azure storage, cmsCredName is a credential name and can be any string. Add the Recovery Vault credential in the CMS using the NetBackup web UI and provide that credential name for cmsCredName. For more information, see the About Veritas Alta Recovery Vault Azure topic in the NetBackup Deduplication Guide.
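As a minimal sketch of this step, the following shows one way to create the configuration file and apply it. The storage server name, LSU name, credential name, and bucket names are hypothetical placeholders; use the values recorded from the previous deployment.
# Hypothetical example; substitute the values from the previous MSDP Scaleout deployment.
cat > /tmp/reuse-lsu.cfg << 'EOF'
V7.5 "operation" "reuse-lsu-cloud" string
V7.5 "lsuName" "cloud-lsu-1" string
V7.5 "cmsCredName" "msdp-cloud-cred" string
V7.5 "lsuCloudAlias" "msdp-ss1_cloud-lsu-1" string
V7.5 "lsuCloudBucketName" "msdp-backup-bucket" string
V7.5 "lsuCloudBucketSubName" "sub1" string
V7.5 "lsuKmsServerName" "kms.example.com" string
EOF
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig -storage_server msdp-ss1 -stype PureDisk -configlist /tmp/reuse-lsu.cfg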
- On the first MSDP Engine of MSDP Scaleout, run the following command for each cloud LSU:
sudo -E -u msdpsvc /usr/openv/pdde/pdcr/bin/cacontrol --catalog clouddr <LSUNAME>
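If more than one cloud LSU was configured, the command can be repeated once per LSU. A minimal sketch follows; the LSU names are placeholders.
# Run on the first MSDP Engine; replace the LSU names with your cloud LSU names.
for lsu in cloud-lsu-1 cloud-lsu-2; do
    sudo -E -u msdpsvc /usr/openv/pdde/pdcr/bin/cacontrol --catalog clouddr "$lsu"
done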
- Restart the MSDP services in the MSDP Scaleout.
Option 1: Manually delete all the MSDP engine pods (a one-pass sketch follows Option 2 below).
kubectl delete pod <sample-engine-pod> -n <sample-cr-namespace>
Option 2: Stop the MSDP services in each MSDP engine pod. The MSDP services restart automatically.
kubectl exec <sample-engine-pod> -n <sample-cr-namespace> -c uss-engine -- /usr/openv/pdde/pdconfigure/pdde stop
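For Option 1 above, a sketch of deleting every engine pod in one pass is shown below. The "engine" name pattern is an assumption; list the pods first and adjust the pattern to match your deployment.
# Confirm the MSDP engine pod names, then delete them in one pass.
kubectl get pods -n <sample-cr-namespace>
kubectl get pods -n <sample-cr-namespace> --no-headers | awk '/engine/ {print $1}' | \
    xargs -r kubectl delete pod -n <sample-cr-namespace>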
Note:
After this step, the MSDP storage server status may appear as down on the NetBackup primary server. The status changes to up automatically a few minutes after the MSDP services are restarted.
If the status does not change, run the following command on the primary server to update MSDP storage server status manually:
/usr/openv/volmgr/bin/tpconfig -update -storage_server <storage-server-name> -stype PureDisk -sts_user_id <storage-server-user-name> -password <storage-server-password>
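To check the current state before forcing an update, you can query the storage server from the primary server and look for a state of UP in the output, for example:
# Verify the MSDP storage server state reported by NetBackup.
/usr/openv/netbackup/bin/admincmd/nbdevquery -liststs -storage_server <storage-server-name> -stype PureDisk -U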
- If the MSDP S3 service is configured, restart it after the MSDP services are restarted.
kubectl exec <sample-engine-pod> -n <sample-cr-namespace> -c uss-engine -- systemctl restart pdde-s3srv
Scenario 2: MSDP Scaleout and its data are lost, and the NetBackup primary server was destroyed and is reinstalled
- Redeploy MSDP Scaleout on a cluster by using the same CR parameters and new NetBackup token.
- When MSDP Scaleout is up and running, reuse the cloud LSU on the NetBackup primary server.
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig -storage_server <STORAGESERVERNAME> -stype PureDisk -configlist <configuration file>
Credentials, bucket name, and sub bucket name must be the same as the recovered Cloud LSU configuration in the previous MSDP Scaleout deployment.
Configuration file template:
V7.5 "operation" "reuse-lsu-cloud" string V7.5 "lsuName" "LSUNAME" string V7.5 "cmsCredName" "XXX" string V7.5 "lsuCloudAlias" "<STORAGESERVERNAME_LSUNAME>" string V7.5 "lsuCloudBucketName" "XXX" string V7.5 "lsuCloudBucketSubName" "XXX" string V7.5 "lsuKmsServerName" "XXX" string
If KMS is enabled, set up the KMS server and import the KMS keys (a sketch follows after the note below).
If the LSU cloud alias does not exist, you can use the following command to add it.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in <instance-name> -sts <storage-server-name> -lsu_name <lsu-name>
Note:
For Veritas Alta Recovery Vault Azure storage, cmsCredName is a credential name and can be any string. Add the Recovery Vault credential in the CMS using the NetBackup web UI and provide that credential name for cmsCredName. For more information, see the About Veritas Alta Recovery Vault Azure topic in the NetBackup Deduplication Guide.
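For the KMS step above, a hedged sketch of importing keys on the reinstalled primary server is shown below. It assumes a key container was previously exported from the original primary server with nbkmsutil and is available at the path shown; see the NetBackup Security and Encryption Guide for the authoritative procedure.
# Assumes /tmp/kms_export.dat is a key container exported from the original primary server.
/usr/openv/netbackup/bin/admincmd/nbkmsutil -import -path /tmp/kms_export.dat -preserve_kgname
# Verify the imported key groups and keys.
/usr/openv/netbackup/bin/admincmd/nbkmsutil -listkgs
/usr/openv/netbackup/bin/admincmd/nbkmsutil -listkeys -kgname <key-group-name>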
- On the first MSDP Engine of MSDP Scaleout, run the following command for each cloud LSU:
sudo -E -u msdpsvc /usr/openv/pdde/pdcr/bin/cacontrol --catalog clouddr <LSUNAME>
- Restart the MSDP services in the MSDP Scaleout.
Option 1: Manually delete all the MSDP engine pods.
kubectl delete pod <sample-engine-pod> -n <sample-cr-namespace>
Option 2: Stop the MSDP services in each MSDP engine pod. The MSDP services restart automatically.
kubectl exec <sample-engine-pod> -n <sample-cr-namespace> -c uss-engine -- /usr/openv/pdde/pdconfigure/pdde stop
Note:
After this step, the MSDP storage server status may appear as down on the NetBackup primary server. The status changes to up automatically a few minutes after the MSDP services are restarted.
If the status does not change, run the following command on the primary server to update MSDP storage server status manually:
/usr/openv/volmgr/bin/tpconfig -update -storage_server <storage-server-name> -stype PureDisk -sts_user_id <storage-server-user-name> -password <storage-server-password>
- If the MSDP S3 service is configured, restart it after the MSDP services are restarted.
kubectl exec <sample-engine-pod> -n <sample-cr-namespace> -c uss-engine -- systemctl restart pdde-s3srv
- Create a disk pool for the cloud LSU on the NetBackup server.
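A hedged sketch of creating the disk pool from the command line is shown below (the web UI is equally valid). The disk pool name and output file path are placeholders, and flag defaults can vary by release, so confirm with the nbdevconfig usage for your version.
# Preview the disk volumes that the storage server exposes, then create a pool from them.
/usr/openv/netbackup/bin/admincmd/nbdevconfig -previewdv -storage_servers <STORAGESERVERNAME> -stype PureDisk > /tmp/dvlist
/usr/openv/netbackup/bin/admincmd/nbdevconfig -createdp -dp <disk-pool-name> -stype PureDisk -storage_servers <STORAGESERVERNAME> -dvlist /tmp/dvlist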
- Perform a two-phase image import.
See the NetBackup Administrator's Guide, Volume I.
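A hedged outline of the two-phase import from the command line is shown below; the disk pool and disk volume names are placeholders, and the import can also be run from the web UI catalog import workflow.
# Phase 1: scan the cloud LSU disk volume and create catalog entries for its images.
/usr/openv/netbackup/bin/admincmd/bpimport -create_db_info -stype PureDisk -dp <disk-pool-name> -dv <disk-volume-name>
# Phase 2: import the images discovered in phase 1.
/usr/openv/netbackup/bin/admincmd/bpimport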
For information about other DR scenarios, see the NetBackup Deduplication Guide.