NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Prerequisites for Kubernetes cluster configuration
- Prerequisites for Cloud Scale configuration
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Prerequisites
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing logging
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- MSDP Scaleout configuration
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Upgrade Cloud Scale
Note:
During the upgrade, ensure that the value of minimumReplica in the media server CR is the same as it was before the upgrade.
Upgrading Cloud Scale deployment
- The patch file contains updated image names and tags. Operators are responsible for restarting the pods in the correct sequence.
Note the following:
Modify the patch file if your current environment CR specifies spec.primary.tag or spec.media.tag. The patch files listed below assume the default deployment scenario, where only spec.tag and spec.msdpScaleouts.tag are listed.
When upgrading from embedded Postgres to containerized Postgres, add dbSecretName to the patch file.
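If your current environment CR does not yet define dbSecretName, a JSON Patch "add" operation can be used instead of "replace". A minimal sketch, assuming the secret name dbsecret matches the Kubernetes secret created for the containerized Postgres database:

```
[
  { "op": "add", "path": "/spec/dbSecretName", "value": "dbsecret" }
]
```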
If the images for the new release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.
During a Cloud Scale upgrade, if the capacity of the primary server log volume is greater than the default value, modify the primary server log volume capacity (spec.primary.storage.log.capacity) back to the default value of 30Gi. After upgrading to version 10.5 or later, the decoupled services log volume uses the default log volume size, while the primary pod log volume continues to use the previous log size.
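A sketch of the corresponding patch operation, assuming the environment CR exposes the capacity at the path shown above:

```
[
  { "op": "replace", "path": "/spec/primary/storage/log/capacity", "value": "30Gi" }
]
```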
Examples of .json files:

For containerized_cloudscale_patch.json, upgrade from 10.4 or later:

[
  { "op": "replace", "path": "/spec/tag", "value": "11.0-xxxx" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "21.0-xxxx" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "11.0.x.x-xxxx" }
]
For containerized_cloudscale_patch.json with primary and media tags but no global tag:

[
  { "op": "replace", "path": "/spec/dbSecretName", "value": "dbsecret" },
  { "op": "replace", "path": "/spec/primary/tag", "value": "11.0" },
  { "op": "replace", "path": "/spec/mediaServers/0/tag", "value": "11.0" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "21.0" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "11.0.x.xxxxx" }
]
For DBAAS_cloudscale_patch.json:

Note: This patch file is to be used only during DBaaS to DBaaS migration.

[
  { "op": "replace", "path": "/spec/dbSecretProviderClass", "value": "dbsecret-spc" },
  { "op": "replace", "path": "/spec/tag", "value": "11.0" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "21.0" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "11.0.x.xxxxx" }
]
For Containerized_cloudscale_patch.json with a new container registry:

Note: If the images for the latest release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.

[
  { "op": "replace", "path": "/spec/dbSecretName", "value": "dbsecret" },
  { "op": "replace", "path": "/spec/tag", "value": "11.0" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "21.0" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "11.0.x.xxxxx" },
  { "op": "replace", "path": "/spec/containerRegistry", "value": "newacr.azurecr.io" },
  { "op": "replace", "path": "/spec/cpServer/0/containerRegistry", "value": "newacr.azurecr.io" }
]
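A malformed patch file (for example, a missing comma between operations) causes kubectl patch to fail. As an optional sanity check before applying a patch, the following Python sketch verifies that the file parses as JSON and that each entry has the shape of an RFC 6902 operation; the filename and patch contents are illustrative:

```python
import json

def validate_patch(path):
    """Parse a JSON Patch file and check the shape of each operation."""
    with open(path) as f:
        ops = json.load(f)  # raises an error on syntax problems such as missing commas
    assert isinstance(ops, list), "patch file must be a JSON array of operations"
    for op in ops:
        # RFC 6902 defines these six operation types
        assert op.get("op") in {"add", "replace", "remove", "move", "copy", "test"}, op
        assert "path" in op, op
        # add, replace, and test require a value member
        if op["op"] in {"add", "replace", "test"}:
            assert "value" in op, op
    return ops

# Example: write a one-operation patch file and validate it
with open("cloudscale_patch.json", "w") as f:
    f.write('[{"op": "replace", "path": "/spec/tag", "value": "11.0"}]')
print(len(validate_patch("cloudscale_patch.json")))  # prints 1
```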
- Use the following command to obtain the environment name:
$ kubectl get environments -n netbackup
- Navigate to the directory containing the patch file and upgrade the Cloud Scale deployment as follows:
$ cd scripts/
$ kubectl patch environment <env-name> --type json -n netbackup --patch-file cloudscale_patch.json
- Wait until the Environment CR displays the status as ready. During this time, pods are expected to restart and new services to start. Operators are responsible for restarting the pods in the correct sequence.
The status of the upgrade for Primary, Msdp, Media, and CpServer is displayed as follows:

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading Primary

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading Msdp

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading Media

/VRTSk8s-netbackup-<version>/scripts$ kubectl get environment -n netbackup
NAME          READY   AGE   STATUS
env-testupg   3/4     28h   Upgrading CpServer
Note the following: During the upgrade, pods are restarted and the environment may temporarily display a Failed status due to the following error:

# kubectl get environment -n netbackup
NAME                    READY   AGE    STATUS
env-vks-vksautomation   3/4     3d3h   Failed

# kubectl describe environment -n netbackup
Status:
  Error Details:
    Code:     8448
    Message:  Cannot prepare the NetBackup API Client.
  Msdp Scaleouts Status:
    dedupe1:
      Key Group Secret:  dedupe1-kms-key-info
      Token Expiration:  2025-02-06T14:57:37Z
  Ready:  3/4
  State:  Failed
Events:
  Type     Reason                Age                   From                    Message
  ----     ------                ----                  ----                    -------
  Warning  MSDPScaleoutNotReady  26m (x21 over 26m)    environment-controller  Not all MSDP resources are ready
  Warning  Failed                6m5s (x377 over 41m)  environment-controller  Error preparing API client
  Warning  MediaNotReady         42s (x109 over 23m)   environment-controller  Not all Media resources are ready

Wait until the CR status is ready.
- Log in to the primary server and resume backup job processing by using the following commands:
$ kubectl exec -it pod/<primary-podname> -n netbackup -- bash
# nbpemreq -resume_scheduling
Post upgrade, the flexsnap-listener pod is migrated to the cp control node pool as per the node selector settings in the environment CR. To reduce the TCO, the user can change the minimum size of the CP data node pool to 0 through the portal.
Post upgrade, for cost optimization, the user has the option to change the value of minimumReplica in the media server CR to 0. The user can change the minimum size of the media node pool to 0 through the portal.