NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Preparing the environment for NetBackup installation on Kubernetes cluster
- Prerequisites for Snapshot Manager (AKS/EKS)
- Prerequisites for Kubernetes cluster configuration
- Prerequisites for Cloud Scale configuration
- Prerequisites for deploying environment operators
- Prerequisites for using private registry
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Managing media server configurations in Web UI
- Prerequisites
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing logging
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving issues when media server PVs are deleted
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving an issue related to KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes a long time
- Local connection is getting treated as insecure connection
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackOff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Job remains in queue for a long time
- Extracting logs if the nbwsapp or log-viewer pods are down
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the Docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Job remains in queue for a long time
Job remains in queue for a long time with the 'Cloud scale media server is not available' reason, as follows:
awaiting resource abc-stu. Waiting for resources. Reason: Cloud scale media server is not available., Media server: media1-media-0, Robot Type(Number): NONE(N/A), Media ID: N/A, Drive Name: N/A, Volume Pool: N/A, Storage Unit: default_stu_xyz.abc.com, Drive Scan Host: N/A, Disk Pool: default_dp_stu_xyz.abc.com, Disk Volume: PureDiskVolume
The above issue occurs due to one of the following reasons while creating the STU:
While selecting the media server, the Manually select option is selected and a specific elastic media server or the primary server is selected explicitly.
While selecting the media server, the Allow NetBackup to automatically select option is selected and the primary server is the only media server listed in the media server list.
Workaround:
To resolve the issue, perform the following:
If the Manually select option is selected for the media server, edit the respective storage unit and change the option to Allow NetBackup to automatically select.
If a non-default storage server is used and, while creating the STU, the Allow NetBackup to automatically select option is selected and the primary server is listed as the only media server, then edit the respective storage server to add an external or elastic media server to the media server list and remove the primary server. You can verify the result with the check shown below.
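To check which media servers the storage server currently lists, the storage server attributes can be queried from inside the primary server pod. This is a sketch assuming an MSDP (PureDisk) storage server; the pod and namespace names are placeholders:
kubectl exec -it -n <namespace> <primaryServer-pod-name> -- bash
/usr/openv/netbackup/bin/admincmd/nbdevquery -liststs -stype PureDisk -U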
Job remains in queue for long time with "Media server is currently not connected to master server" or "Disk media server is not active" due to the following reasons:
At least one elastic media server is 'Offline'.
The primary server is not present in the Media servers list of the default storage server when the replicas value is set to 0.
awaiting resource default_stu_abc.com. Waiting for resources. Reason: Media server is currently not connected to master server, Media server: media1-media-0, Robot Type(Number): NONE(N/A), Media ID: N/A, Drive Name: N/A, Volume Pool: NetBackup, Storage Unit: default_stu_abc.com, Drive Scan Host: N/A, Disk Pool: default_dp_nbux-abc.com, Disk Volume: PureDiskVolume
awaiting resource default_stu_abc.com. Waiting for resources. Reason: Disk media server is not active, Media server: media1-media-0, Robot Type(Number): NONE(N/A), Media ID: N/A, Drive Name: N/A, Volume Pool: NetBackup, Storage Unit: default_stu_abc.com, Drive Scan Host: N/A, Disk Pool: default_dp_abc.com, Disk Volume: PureDiskVolume
Workaround: Perform the respective workaround depending on the reason for the issue:
Issue: At least one elastic media server is 'Offline'
Workaround 1:
Change the value of replicas to a value greater than the current value and wait for the new media server pod to be in the Running state. Then replicas can be changed back to its original value.
Use the following command to update the value of replicas in the mediaServer section:
kubectl edit environment <environment-cr-name> -n <namespace>
Save the changes.
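For illustration, the following is a minimal sketch of the edit, assuming a hypothetical environment CR named environment-sample in the netbackup namespace and a current replicas value of 1; the exact layout of the mediaServer section in your environment.yaml may differ:
kubectl edit environment environment-sample -n netbackup
# In the editor, raise replicas under the mediaServer section, for example:
#   mediaServer:
#     - name: media1
#       replicas: 2    # previously 1
# Watch for the new media server pod to reach the Running state:
kubectl get pods -n netbackup -w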
Workaround 2:
If the media server pod is not running, change the 'Offline' media server state to 'Deactivated' as follows:
Run the PATCH /config/media-servers/{hostName} API to change the 'machineState' of the Offline media server to 'Deactivated'.
Or, execute the following command to exec into the primary server pod:
kubectl exec -it -n <namespace> <primaryServer-pod-name> -- bash
Execute the following command for the Offline media server:
nbemmcmd -updatehost -machinename <offline-media-server-hostname> -machinestateop set_admin_pause -machinetype media -masterserver <primary-server-hostname>
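To confirm that the media server now shows the expected machine state, the EMM host list can be reviewed from the same primary server pod session; treat this as a sketch, since the output format varies by release:
/usr/openv/netbackup/bin/admincmd/nbemmcmd -listhosts -verbose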
Note:
For elasticity, scaled-in media servers are marked as 'Offline' by media server elasticity.
Issue: The primary server is not present in the Media servers list of the default storage server when the replicas value is set to 0, and the following error appears in the netbackup-operator pod logs:
Error in registering additional media servers in storage server. Please add manually.
Run the following command to obtain the netbackup-operator pod logs:
kubectl logs <netbackup-operator-pod-name> -c netbackup-operator -n <netbackup-operator-namespace>
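To narrow the output to the registration failure, the same logs can be piped through grep; all names here are placeholders:
kubectl logs <netbackup-operator-pod-name> -c netbackup-operator -n <netbackup-operator-namespace> | grep -i "registering additional media servers"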
To resolve the issue, perform the following:
Set the value of replicas to a value greater than 0 and wait for at least one media server pod to be in the ready state.
After the media server pod goes into the Running state, the value of replicas can be set back to 0.
Use the following command to update the value of replicas in the mediaServer section:
kubectl edit environment <environment-cr-name> -n <namespace>
Save the changes.
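As a sketch of the wait step, kubectl wait can block until a media server pod reports Ready before replicas is reverted to 0; the pod name and timeout below are hypothetical:
kubectl wait --for=condition=Ready pod/<media-server-pod-name> -n <namespace> --timeout=600s
# Once the pod is Ready, set replicas back to 0:
kubectl edit environment <environment-cr-name> -n <namespace>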
Note:
For a replicas value greater than 0, the primary server is not present in the Media servers list of the default storage server.