NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- Section IV. Maintenance
- MSDP Scaleout Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for Primary and Media servers
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolve an issue related to KMS database
- Resolve an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes a long time
- Local connection is getting treated as insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Upgrading Cloud Scale deployment for Postgres using Helm charts
Before upgrading Cloud Scale deployment for Postgres using Helm charts, ensure that:
Helm charts for operators and Postgres are available from a public or private registry.
Images for operators and Cloud Scale services are available from a public or private registry.
Note:
During the upgrade process, ensure that the cluster nodes are not scaled down to 0 or restarted.
To upgrade operators
- Run the following script when upgrading from an earlier release of Cloud Scale that used a single helm chart or the kustomize deployment method:
scripts/prep_operators_for_upgrade.sh
- Use the following command to suspend the backup job processing:
nbpemreq -suspend_scheduling
- Perform the following steps to upgrade the operators:
Use the following command to save the operators chart values to a file:
# helm show values operators-<version>.tgz > operators-values.yaml
Use the following command to edit the chart values to match your deployment scenario:
# vi operators-values.yaml
Execute the following command to upgrade the operators:
helm upgrade --install operators operators-<version>.tgz -f operators-values.yaml -n netbackup-operator-system
Or
If using the OCI registry, use the following command:
helm upgrade --install operators oci://abcd.veritas.com:5000/helm-charts/operators --version <version> -f operators-values.yaml -n netbackup-operator-system
Following is an example of an operators-values.yaml file:

# Copyright (c) 2023 Veritas Technologies LLC. All rights reserved
# Default values for operators.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

global:
  # Toggle for platform-specific features & settings
  # Microsoft AKS: "aks"
  # Amazon EKS: "eks"
  platform: "eks"
  # This specifies a container registry that the cluster has access to.
  # NetBackup images should be pushed to this registry prior to applying this
  # Environment resource.
  # Example Azure Container Registry name: example.azurecr.io
  # Example AWS Elastic Container Registry name: 123456789012.dkr.ecr.us-east-1.amazonaws.com
  containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com/ECR Name"
  operatorNamespace: "netbackup-operator-system"
  storage:
    eks:
      fileSystemId: fs-0f3cc640eeec507d0

msdp-operator:
  image:
    name: msdp-operator
    # Provide tag value in quotes eg: '17.0'
    tag: "20.4"
    pullPolicy: Always
  namespace:
    labels:
      control-plane: controller-manager
  # This determines the path used for storing core files in the case of a crash.
  corePattern: "/core/core.%e.%p.%t"
  # This specifies the number of replicas of the msdp-operator controllers
  # to create. Minimum number of supported replicas is 1.
  replicas: 2
  # Optional: provide label selectors to dictate pod scheduling on nodes.
  # By default, when given an empty {} all nodes will be equally eligible.
  # Labels should be given as key-value pairs, ex:
  #   agentpool: mypoolname
  nodeSelector: {}
  # Storage specification to be used by underlying persistent volumes.
  # References entries in global.storage by default, but can be replaced.
  storageClass:
    name: nb-disk-premium
    size: 5Gi
  # Specify how much of each resource a container needs.
  resources:
    # Requests are used to decide which node(s) should be scheduled for pods.
    # Pods may use more resources than specified with requests.
    requests:
      cpu: 150m
      memory: 150Mi
    # Optional: Limits can be implemented to control the maximum utilization by pods.
    # The runtime prevents the container from using more than the configured resource limits.
    limits: {}
  logging:
    # Enable verbose logging
    debug: false
    # Maximum age (in days) to retain log files, 1 <= N <= 365
    age: 28
    # Maximum number of log files to retain, 1 <= N <= 20
    num: 20

nb-operator:
  image:
    name: "netbackup/operator"
    tag: "10.4"
    pullPolicy: Always
  # nb-operator needs to know the version of the msdp and flexsnap operators
  # for the webhook to do version checking.
  msdp-operator:
    image:
      tag: "20.4"
  flexsnap-operator:
    image:
      tag: "10.4.0.0.1016"
  namespace:
    labels:
      nb-control-plane: nb-controller-manager
  nodeSelector:
    node_selector_key: agentpool
    node_selector_value: agentpool
  # loglevel:
  #   "-1" - Debug (not recommended for production)
  #   "0"  - Info
  #   "1"  - Warn
  #   "2"  - Error
  loglevel:
    value: "0"

flexsnap-operator:
  replicas: 1
  namespace:
    labels: {}
  image:
    name: "veritas/flexsnap-deploy"
    tag: "10.4.0.1004"
    pullPolicy: Always
  nodeSelector:
    node_selector_key: agentpool
    node_selector_value: agentpool
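Because the nb-operator webhook performs version checking against the msdp and flexsnap operator tags, a mismatch between the top-level msdp-operator tag and the copy embedded under nb-operator can fail the upgrade. The following pre-flight check is a hypothetical sketch (not a Veritas tool) that compares the two tags; it assumes the two-space indentation shown in the example above and uses a small stand-in excerpt in place of your real operators-values.yaml:

```shell
# Stand-in excerpt for illustration; point the awk commands at your real
# operators-values.yaml instead.
cat > /tmp/operators-values-excerpt.yaml <<'EOF'
msdp-operator:
  image:
    name: msdp-operator
    tag: "20.4"
nb-operator:
  image:
    tag: "10.4"
  msdp-operator:
    image:
      tag: "20.4"
EOF

# First `tag:` after the top-level msdp-operator key.
top_tag=$(awk '/^msdp-operator:/{s=1} s && /tag:/{gsub(/"/,""); print $2; exit}' /tmp/operators-values-excerpt.yaml)
# First `tag:` after the msdp-operator key nested under nb-operator.
nested_tag=$(awk '/^  msdp-operator:/{s=1} s && /tag:/{gsub(/"/,""); print $2; exit}' /tmp/operators-values-excerpt.yaml)

if [ "$top_tag" = "$nested_tag" ]; then
  echo "msdp-operator tags consistent: $top_tag"
else
  echo "tag mismatch: $top_tag vs $nested_tag"
fi
```

This is only a text-level sanity check; a full YAML parser would be more robust, but for the flat layout above a line-oriented scan is sufficient.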
To upgrade Cloud Scale deployment
- Run the following command as a workaround for deploying the trust manager:
helm repo add jetstack https://charts.jetstack.io --force-update
kubectl create namespace trust-manager
helm upgrade -i -n trust-manager trust-manager jetstack/trust-manager --set app.trust.namespace=netbackup --version v0.7.0 --wait
- Upload the new images to your private registry.
Note:
Skip this step when using Veritas registry.
- Use the following command to suspend the backup job processing:
nbpemreq -suspend_scheduling
- Perform the following steps when installing the PostgreSQL database.
Note:
This step is not applicable when using DBaaS.
Use the following command to save the PostgreSQL chart values to a file:
# helm show values postgresql-<version>.tgz > postgres-values.yaml
Use the following command to edit the chart values:
vi postgres-values.yaml
Execute the following command to upgrade the PostgreSQL database:
# helm upgrade --install postgresql postgresql-<version>.tgz -f postgres-values.yaml -n netbackup
Or
If using the OCI registry, use the following command:
helm upgrade --install postgresql oci://abcd.veritas.com:5000/helm-charts/netbackup-postgresql --version <version> -f postgres-values.yaml -n netbackup
Following is an example of a postgres-values.yaml file:

# Copyright (c) 2024 Veritas Technologies LLC. All rights reserved
# Default values for postgresql.
global:
  environmentNamespace: "netbackup"
  containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com"
postgresql:
  replicas: 1
  # The values in the image (name, tag) are placeholders. These will be set
  # when the deploy_nb_cloudscale.sh runs.
  image:
    name: "netbackup/postgresql"
    tag: "14.11.1.0"
    pullPolicy: Always
  service:
    serviceName: nb-postgresql
  volume:
    volumeClaimName: nb-psql-pvc
    volumeDefaultMode: 0640
    pvcStorage: 5Gi
    # configMapName: nbpsqlconf
    storageClassName: nb-disk-premium
    mountPathData: /netbackup/postgresqldb
  secretMountPath: /netbackup/postgresql/keys/server
  # mountConf: /netbackup
  timezone: null
  securityContext:
    runAsUser: 0
  createCerts: true
  # pgbouncerIniPath: /netbackup/pgbouncer.ini
  serverSecretName: postgresql-server-crt
  clientSecretName: postgresql-client-crt
  dbSecretName: dbsecret
  dbPort: 13785
  pgbouncerPort: 13787
  dbAdminName: postgres
  initialDbAdminPassword: postgres
  dataDir: /netbackup/postgresqldb
  # postgresqlConfFilePath: /netbackup/postgresql.conf
  # pgHbaConfFilePath: /netbackup/pg_hba.conf
  defaultPostgresqlHostName: nb-postgresql

To save cost, you can set storageClassName to nb-disk-standardssd for non-production environments.
Note:
To ensure that the Postgres pod is scheduled only on the primary node pool, add Kubernetes taints to the Media, MSDP, and flexsnap/Snapshot Manager node pools.
If the primary node pool has taints applied, manually add tolerations to the PostgreSQL StatefulSet as follows:
To verify that node pools use taints, run the following command:
kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect
NodeName                         TaintKey   TaintValue   TaintEffect
ip-10-248-231-149.ec2.internal   <none>     <none>       <none>
ip-10-248-231-245.ec2.internal   <none>     <none>       <none>
ip-10-248-91-105.ec2.internal    nbupool    agentpool    NoSchedule
To view StatefulSets, run the following command:
kubectl get statefulsets -n netbackup
NAME            READY   AGE
nb-postgresql   1/1     76m
nb-primary      0/1     51m
Edit the PostgreSQL StatefulSets and add tolerations as follows:
kubectl edit statefulset nb-postgresql -n netbackup
Following is an example of the modified PostgreSQL StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    meta.helm.sh/release-name: postgresql
    meta.helm.sh/release-namespace: netbackup
  creationTimestamp: "2024-03-25T15:11:59Z"
  generation: 1
  labels:
    app: nb-postgresql
    app.kubernetes.io/managed-by: Helm
  name: nb-postgresql
  ...
spec:
  template:
    spec:
      containers:
      ...
      nodeSelector:
        nbupool: agentpool
      tolerations:
      - effect: NoSchedule
        key: nbupool
        operator: Equal
        value: agentpool

- Perform the following to create a Secret containing the DBaaS CA certificates:
Note:
This step is not applicable when using containerized Postgres.
For AWS:
TLS_FILE_NAME='/tmp/tls.crt'
PROXY_FILE_NAME='/tmp/proxy.pem'
rm -f ${TLS_FILE_NAME} ${PROXY_FILE_NAME}
DB_CERT_URL="https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem"
DB_PROXY_CERT_URL="https://www.amazontrust.com/repository/AmazonRootCA1.pem"
curl ${DB_CERT_URL} --output ${TLS_FILE_NAME}
curl ${DB_PROXY_CERT_URL} --output ${PROXY_FILE_NAME}
cat ${PROXY_FILE_NAME} >> ${TLS_FILE_NAME}
kubectl -n netbackup create secret generic postgresql-netbackup-ca --from-file ${TLS_FILE_NAME}

For Azure:
TLS_FILE_NAME='/tmp/tls.crt'
rm -f ${TLS_FILE_NAME}
DB_CERT_URL="https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem"
curl ${DB_CERT_URL} --output ${TLS_FILE_NAME}
kubectl -n netbackup create secret generic postgresql-netbackup-ca --from-file ${TLS_FILE_NAME}
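If the curl download fails silently (for example, a proxy returns an HTML error page), the secret is created from a useless file. As a hypothetical pre-check, not part of the guide's procedure, you can confirm that the downloaded bundle contains at least one PEM certificate before creating the secret. The stand-in heredoc below simulates a downloaded bundle for illustration:

```shell
TLS_FILE_NAME='/tmp/tls.crt'

# Stand-in content for illustration; in practice this file is produced by
# the curl commands shown above.
cat > "${TLS_FILE_NAME}" <<'EOF'
-----BEGIN CERTIFICATE-----
MIIB...
-----END CERTIFICATE-----
EOF

# Count PEM certificate headers in the bundle.
count=$(grep -c -e '-----BEGIN CERTIFICATE-----' "${TLS_FILE_NAME}")
if [ "$count" -ge 1 ]; then
  echo "bundle ok: $count certificate(s)"
else
  echo "bundle empty: re-download before creating the secret"
fi
```

Run this check after the curl commands and before the `kubectl create secret` step.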
- Create db cert bundle as follows:
Note:
This step must be performed when using containerized Postgres.
cat <<EOF | kubectl apply -f -
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: db-cert
spec:
  sources:
  - secret:
      name: "postgresql-netbackup-ca"
      key: "tls.crt"
  target:
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: "$ENVIRONMENT_NAMESPACE"
    configMap:
      key: "dbcertpem"
EOF

After installing the db-cert bundle, ensure that the db-cert configMap is present in the netbackup namespace with size 1:

bash-5.1$ kubectl get configmap db-cert -n $ENVIRONMENT_NAMESPACE
NAME      DATA   AGE
db-cert   1      11h
Note:
If the configMap is showing the size as 0, then delete it and ensure that the trust-manager recreates it before proceeding further.
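The size check described in the note can be scripted. The helper below is a hypothetical sketch (`check_db_cert` is not a product command); it inspects the DATA column of a `kubectl get configmap` data row, assuming the default column layout shown above:

```shell
# Decide whether the db-cert configMap needs to be deleted and recreated,
# given one data row from `kubectl get configmap db-cert -n <namespace>`.
check_db_cert() {
  # $1: a row such as "db-cert   1   11h"; DATA is the second column.
  data=$(printf '%s\n' "$1" | awk '{print $2}')
  if [ "$data" = "0" ]; then
    echo "recreate"
  else
    echo "ok"
  fi
}

check_db_cert "db-cert   1   11h"   # prints "ok"
check_db_cert "db-cert   0   2s"    # prints "recreate"
```

In practice you would feed it the live output, for example: `check_db_cert "$(kubectl get configmap db-cert -n $ENVIRONMENT_NAMESPACE --no-headers)"`, and delete the configMap when it reports "recreate" so that the trust-manager recreates it.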
- Perform the following steps to upgrade the Cloud Scale deployment:
Use the following command to obtain the environment name:
$ kubectl get environments -n $ENVIRONMENT_NAMESPACE
Navigate to the directory containing the patch file and upgrade the Cloud Scale deployment as follows:
$ cd scripts/
$ kubectl patch environment <env-name> --type json -n $ENVIRONMENT_NAMESPACE --patch-file cloudscale_patch.json
Modify the patch file if your current environment CR specifies spec.primary.tag or spec.media.tag. The patch file listed below assumes the default deployment scenario where only spec.tag and spec.msdpScaleouts.tag are listed.
Note:
When upgrading from embedded Postgres to containerized Postgres, add dbSecretName to the patch file.
Examples of .json files:

For containerized_cloudscale_patch.json:

[
  { "op": "replace", "path": "/spec/dbSecretName", "value": "dbsecret" },
  { "op": "replace", "path": "/spec/tag", "value": "10.4" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "20.4" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "10.4.0.1074" }
]

For containerized_cloudscale_patch.json with primary, media tags and global tag:

[
  { "op": "replace", "path": "/spec/dbSecretName", "value": "dbsecret" },
  { "op": "replace", "path": "/spec/primary/tag", "value": "10.4" },
  { "op": "replace", "path": "/spec/mediaServers/0/tag", "value": "10.4" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "20.4" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "10.4.0.1074" }
]

For DBAAS_cloudscale_patch.json:

Note:
This patch file is to be used only during DBaaS to DBaaS migration.

[
  { "op": "replace", "path": "/spec/dbSecretProviderClass", "value": "dbsecret-spc" },
  { "op": "replace", "path": "/spec/tag", "value": "10.4" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "20.4" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "10.4.0.1101" }
]
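A malformed patch file causes `kubectl patch` to fail with a parse error, so it can be worth validating the JSON first. This is a hypothetical pre-check, not part of the documented procedure; it assumes python3 is available on the admin workstation, and writes a minimal stand-in patch file for illustration:

```shell
# Stand-in patch file for illustration; validate your real
# cloudscale_patch.json the same way.
cat > /tmp/cloudscale_patch.json <<'EOF'
[
  { "op": "replace", "path": "/spec/tag", "value": "10.4" }
]
EOF

# python3 -m json.tool exits non-zero on invalid JSON.
if python3 -m json.tool /tmp/cloudscale_patch.json > /dev/null 2>&1; then
  echo "patch file is valid JSON"
else
  echo "patch file is malformed"
fi
```

Only after the file validates would you run the `kubectl patch environment ... --patch-file` command shown above.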
- Wait until the Environment CR displays the status as ready. During this time, pods are expected to restart and any new services to start.
Run the following command to check the environment CR status:
kubectl get environment -n <namespace>
For example:

kubectl get environment -n netbackup
NAME                READY   AGE   STATUS
anshannewtesttrtm   4/4     57m   Success
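The readiness check above can be scripted while waiting for the upgrade to settle. The helper below is a hypothetical sketch (`env_ready` is not a product command); it assumes the default four-column `kubectl get environment` output shown above, with STATUS as the fourth column:

```shell
# Return success when the STATUS column of an environment CR row is "Success".
env_ready() {
  # $1: a data row such as "anshannewtesttrtm   4/4   57m   Success"
  status=$(printf '%s\n' "$1" | awk '{print $4}')
  [ "$status" = "Success" ]
}

if env_ready "anshannewtesttrtm   4/4   57m   Success"; then
  echo "environment ready"
else
  echo "still waiting"
fi
```

A live wait loop could poll with `kubectl get environment <env-name> -n <namespace> --no-headers`, sleeping between attempts, and proceed to resuming backup job processing only once `env_ready` succeeds.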
- Resume the backup job processing by using the following command:
# nbpemreq -resume_scheduling