NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolving an issue related to the KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes a long time
- Local connection is treated as an insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Upgrade Cloud Scale
Upgrading Cloud Scale deployment
- The patch file contains updated image names and tags. The operators are responsible for restarting the pods in the correct sequence.
Note the following:
- Modify the patch file if your current environment CR specifies spec.primary.tag or spec.media.tag. The patch file listed below assumes the default deployment scenario where only spec.tag and spec.msdpScaleouts.tag are listed.
- When upgrading from embedded Postgres to containerized Postgres, add dbSecretName to the patch file.
- If the images for the new release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.
- During a Cloud Scale upgrade, if the capacity of the primary server log volume is greater than the default value, you must reset the primary server log volume capacity (spec.primary.storage.log.capacity) to the default value of 30Gi, as shown in the sketch after this list. After upgrading to version 10.5 or later, the decoupled services log volume uses the default log volume, while the primary pod log volume continues to use the previous log size.
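For reference, a minimal sketch of a patch entry that resets the log volume capacity; the 30Gi value comes from the note above, and the JSON pointer path is assumed to map directly from spec.primary.storage.log.capacity:

[
    { "op" : "replace" , "path" : "/spec/primary/storage/log/capacity" , "value" : "30Gi" }
]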
Examples of .json files:

For containerized_cloudscale_patch.json (upgrade from 10.4 or later):

[
    { "op" : "replace" , "path" : "/spec/tag" , "value" : "10.5-xxxx" },
    { "op" : "replace" , "path" : "/spec/msdpScaleouts/0/tag" , "value" : "20.5-0027" },
    { "op" : "replace" , "path" : "/spec/cpServer/0/tag" , "value" : "10.5.x.x-xxxx" }
]

For containerized_cloudscale_patch.json with primary and media tags but no global tag:

[
    { "op" : "replace" , "path" : "/spec/dbSecretName" , "value" : "dbsecret" },
    { "op" : "replace" , "path" : "/spec/primary/tag" , "value" : "10.5" },
    { "op" : "replace" , "path" : "/spec/mediaServers/0/tag" , "value" : "10.5" },
    { "op" : "replace" , "path" : "/spec/msdpScaleouts/0/tag" , "value" : "20.5" },
    { "op" : "replace" , "path" : "/spec/cpServer/0/tag" , "value" : "10.5.x.xxxxx" }
]

For DBAAS_cloudscale_patch.json:

Note: This patch file is to be used only during DBaaS to DBaaS migration.

[
    { "op" : "replace" , "path" : "/spec/dbSecretProviderClass" , "value" : "dbsecret-spc" },
    { "op" : "replace" , "path" : "/spec/tag" , "value" : "10.5" },
    { "op" : "replace" , "path" : "/spec/msdpScaleouts/0/tag" , "value" : "20.5" },
    { "op" : "replace" , "path" : "/spec/cpServer/0/tag" , "value" : "10.5.x.xxxxx" }
]

For containerized_cloudscale_patch.json with a new container registry:

Note: If the images for the latest release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.

[
    { "op" : "replace" , "path" : "/spec/dbSecretName" , "value" : "dbsecret" },
    { "op" : "replace" , "path" : "/spec/tag" , "value" : "10.5" },
    { "op" : "replace" , "path" : "/spec/msdpScaleouts/0/tag" , "value" : "20.5" },
    { "op" : "replace" , "path" : "/spec/cpServer/0/tag" , "value" : "10.5.x.xxxxx" },
    { "op" : "replace" , "path" : "/spec/containerRegistry" , "value" : "newacr.azurecr.io" },
    { "op" : "replace" , "path" : "/spec/cpServer/0/containerRegistry" , "value" : "newacr.azurecr.io" }
]
- Use the following command to obtain the environment name:
$ kubectl get environments -n netbackup
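If you are unsure which patch variant applies, you can inspect the tag fields that the environment CR currently sets. A sketch using kubectl's standard jsonpath output (kubectl typically reports an error for a path that is not present, which itself indicates the field is unset):

$ kubectl get environment <env-name> -n netbackup -o jsonpath='{.spec.tag}{"\n"}'
$ kubectl get environment <env-name> -n netbackup -o jsonpath='{.spec.primary.tag}{"\n"}'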
- Navigate to the directory containing the patch file and upgrade the Cloud Scale deployment as follows:
$ cd scripts/
$ kubectl patch environment <env-name> --type json -n netbackup --patch-file cloudscale_patch.json
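Optionally, you can preview the patch before applying it. kubectl's standard --dry-run=server flag submits the change for server-side validation without persisting it:

$ kubectl patch environment <env-name> --type json -n netbackup --patch-file cloudscale_patch.json --dry-run=server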
- Wait until the Environment CR displays the status as Ready. During this time, pods are expected to restart and any new services to start. The operators are responsible for restarting the pods in the correct sequence.
The status of the upgrade for Primary, Msdp, Media, and CpServer is displayed as follows:

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading Primary

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading Msdp

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading Media

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading CpServer
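Rather than polling manually, you can follow the status transitions with kubectl's standard --watch flag and interrupt it once the STATUS column reports Ready:

$ kubectl get environment -n netbackup --watch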
- Log in to the primary server and resume backup job processing by using the following commands:
kubectl exec -it pod/<primary-podname> -n netbackup -- bash
# nbpemreq -resume_scheduling
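To confirm that the primary pod came back on the upgraded image, you can list its container images with a standard jsonpath query; <primary-podname> is the same pod name used above:

$ kubectl get pod <primary-podname> -n netbackup -o jsonpath='{.spec.containers[*].image}{"\n"}'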