NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Preparing the environment for NetBackup installation on Kubernetes cluster
- Prerequisites for Snapshot Manager (AKS/EKS)
- Prerequisites for Kubernetes cluster configuration
- Prerequisites for Cloud Scale configuration
- Prerequisites for deploying environment operators
- Prerequisites for using private registry
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Managing media server configurations in Web UI
- Prerequisites
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing logging
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving issues when media server PVs are deleted
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving an issue related to KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes longer time
- Local connection is getting treated as insecure connection
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Job remains in queue for a long time
- Extracting logs if the nbwsapp or log-viewer pods are down
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Configuring an External Certificate Authority for Web UI port 443
During installation, a Cloud Scale environment is configured to use a certificate issued by the NetBackup Certificate Authority for the Web UI (port 443). The following sections provide the steps to replace this default certificate with a certificate issued by an External Certificate Authority (ECA) on Cloud Scale.
Ensure that all the following files are available and ready before proceeding:
- ca.pem: A PEM-formatted file containing the Root CA certificate from which the Web UI certificate was issued.
- cert.pem: A PEM-formatted file that includes the Web UI certificate chain, consisting of the leaf certificate and any intermediate CA certificates.
- privatekey.pem: A PKCS #8 PEM-formatted file containing the encrypted private key for the Web UI certificate.
- passphrase.txt: A plaintext file containing the passphrase used to encrypt the private key. Ensure that the file contains only a single line with the passphrase and does not end with a newline.
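If your private key is not already an encrypted PKCS #8 PEM file, the following is a minimal sketch of one way to prepare passphrase.txt and privatekey.pem with OpenSSL. Here key.pem and ExamplePassphrase are placeholders for your own unencrypted key and passphrase; they are not names used elsewhere in this guide:
$ printf '%s' 'ExamplePassphrase' > passphrase.txt
$ openssl pkcs8 -topk8 -v2 aes-256-cbc -in key.pem -passout file:passphrase.txt -out privatekey.pem
Using printf '%s' avoids a trailing newline in passphrase.txt, which satisfies the single-line requirement described above.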
Basic sanity verification for the ca.pem, cert.pem, privatekey.pem, and passphrase.txt files:
- Execute the following commands and ensure that both commands produce a matching value for the cert.pem and privatekey.pem files:
$ openssl x509 -noout -modulus -in cert.pem | openssl sha256
$ openssl rsa -noout -modulus -in privatekey.pem -passin file:passphrase.txt | openssl sha256
- Check the validity of the certificate and presence of the primary server name in the certificate by executing the following command to list the certificate details:
$ openssl x509 -text -in cert.pem -noout
Verify that the Not Before and Not After fields show that the certificate is currently valid and has not expired. Confirm that the primary server name appears in the X509v3 Subject Alternative Name. If the X509v3 Subject Alternative Name is missing, confirm that the primary server name appears in the Subject as a CN.
- Execute the following command to verify that a complete certificate chain exists in the cert.pem and ca.pem files:
$ openssl verify -CAfile ca.pem -untrusted cert.pem cert.pem
cert.pem: OK
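As a further optional check, you can inspect the validity dates and the Subject Alternative Name directly with the following commands; the -ext option requires OpenSSL 1.1.1 or later:
$ openssl x509 -in cert.pem -noout -dates
$ openssl x509 -in cert.pem -noout -ext subjectAltName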
Perform the steps described in the following procedure to configure an external certificate for the first time.
Configure an external certificate for the first time
- Log in to the host from which the Kubernetes cluster is managed and where the kubectl command is available.
- Execute the following commands to create the tpcredentials directory and copy the artifacts into the primary pod:
$ kubectl exec -it <primary_pod_name> -n <namespace> -- mkdir -p /usr/openv/var/global/wsl/credentials/tpcredentials
$ kubectl cp <path_to_ca.pem> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/ca.pem -n <namespace>
$ kubectl cp <path_to_cert.pem> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/cert.pem -n <namespace>
$ kubectl cp <path_to_privatekey.pem> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/privatekey.pem -n <namespace>
$ kubectl cp <path_to_passphrase.txt> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/passphrase.txt -n <namespace>
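Optionally, verify that all four files are present in the pod before proceeding:
$ kubectl exec -it <primary_pod_name> -n <namespace> -- ls -l /usr/openv/var/global/wsl/credentials/tpcredentials/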
- Restart the nbwsapp pod.
Use the following commands to list all the pods from the namespace, then select the nbwsapp pod and delete it.
$ kubectl get pods -n <namespace>
$ kubectl delete pod <nbwsapp_pod_name> -n <namespace>
- Use the following command to ensure that the nbwsapp pod is up and running:
$ kubectl get pods -n <namespace> | grep nbwsapp
<nbwsapp_pod_name> 4/4 Running 0 9m44s
- Execute the following commands to create the keystore:
$ kubectl exec -it <nbwsapp_pod_name> -n <namespace> -- bash
$ cp /usr/openv/var/global/wsl/credentials/tpcredentials/passphrase.txt /usr/openv/var/global/wsl/credentials/tpcredentials/jkskey
$ /usr/openv/netbackup/bin/goodies/vxsslcmd pkcs12 -export -inkey /usr/openv/var/global/wsl/credentials/tpcredentials/privatekey.pem -in /usr/openv/var/global/wsl/credentials/tpcredentials/cert.pem -out /tmp/cert.p12 -passin file:/usr/openv/var/global/wsl/credentials/tpcredentials/passphrase.txt -passout file:/usr/openv/var/global/wsl/credentials/tpcredentials/jkskey -name eca
Note:
Ignore the following error message if displayed by the last command above:
unable to write 'random state'
$ ls -l /tmp/cert.p12
-rw-r--r-- 1 nbwebsvc nbwebgrp 4420 Sep 20 19:44 cert.p12
$ export KEYSTORE_PASS=$(cat /usr/openv/var/global/wsl/credentials/tpcredentials/jkskey)
$ /usr/lib/jvm/jre/bin/keytool -storetype BCFKS -providerpath /usr/openv/wmc/webserver/lib/ccj.jar -providerclass com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider -importkeystore -srckeystore /tmp/cert.p12 -srcstoretype pkcs12 -srcstorepass ${KEYSTORE_PASS} -destkeystore /tmp/nbwebservice.bcfks -deststorepass ${KEYSTORE_PASS}
Importing keystore /tmp/cert.p12 to /tmp/nbwebservice.bcfks... Entry for alias eca successfully imported. Import command completed: 1 entries successfully imported, 0 entries failed or cancelled
$ mv /tmp/nbwebservice.bcfks /usr/openv/var/global/wsl/credentials/tpcredentials/
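Optionally, while still inside the pod shell, you can confirm that the keystore contains the eca entry; this is a hedged convenience check that reuses the provider options from the import command above:
$ /usr/lib/jvm/jre/bin/keytool -list -storetype BCFKS -providerpath /usr/openv/wmc/webserver/lib/ccj.jar -providerclass com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider -keystore /usr/openv/var/global/wsl/credentials/tpcredentials/nbwebservice.bcfks -storepass ${KEYSTORE_PASS}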
- The new certificate will be automatically applied within 30 minutes, or you can restart the requestrouter pod to apply it immediately by using the following commands:
$ kubectl get pods -n <namespace>
$ kubectl delete pod <requestrouter_pod_name> -n <namespace>
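- After the requestrouter pod is running again, you can optionally verify from any host that can reach the primary server that port 443 now presents the external certificate. Here <primary_server_fqdn> is a placeholder for your primary server name:
$ echo | openssl s_client -connect <primary_server_fqdn>:443 -servername <primary_server_fqdn> 2>/dev/null | openssl x509 -noout -subject -issuer -dates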
Use the following steps to replace the existing external certificate with a new external certificate:
- Log in to the host from which the Kubernetes cluster is managed and where the kubectl command is available.
- Execute the following commands to copy the new artifacts into the primary pod and back up the existing keystore:
$ kubectl exec -it <primary_pod_name> -n <namespace> -- mkdir -p /usr/openv/var/global/wsl/credentials/tpcredentials
$ kubectl cp <path_to_ca.pem> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/ca.pem -n <namespace>
$ kubectl cp <path_to_cert.pem> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/cert.pem -n <namespace>
$ kubectl cp <path_to_privatekey.pem> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/privatekey.pem -n <namespace>
$ kubectl cp <path_to_passphrase.txt> <primary_pod_name>:/usr/openv/var/global/wsl/credentials/tpcredentials/passphrase.txt -n <namespace>
$ kubectl exec -it <primary_pod_name> -n <namespace> -- mkdir -p /usr/openv/var/global/wsl/credentials/tpcredentials/backup
$ kubectl exec -it <primary_pod_name> -n <namespace> -- mv /usr/openv/var/global/wsl/credentials/tpcredentials/nbwebservice.bcfks /usr/openv/var/global/wsl/credentials/tpcredentials/backup/
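Note:
The previous keystore is preserved in the backup directory. If the new certificate cannot be used and you need to revert, a hedged rollback sketch is to restore the backed-up keystore and restart the requestrouter pod, as shown below. This assumes the previously configured certificate and key are still valid; you may also need to re-copy the earlier PEM artifacts if you replaced them in this step.
$ kubectl exec -it <primary_pod_name> -n <namespace> -- cp /usr/openv/var/global/wsl/credentials/tpcredentials/backup/nbwebservice.bcfks /usr/openv/var/global/wsl/credentials/tpcredentials/nbwebservice.bcfks
$ kubectl delete pod <requestrouter_pod_name> -n <namespace>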
- Restart the nbwsapp pod.
List all the pods from the namespace, select the nbwsapp pod and delete it.
$ kubectl get pods -n <namespace>
$ kubectl delete pod <nbwsapp_pod_name> -n <namespace>
- Ensure that the nbwsapp pod is up and running:
$ kubectl get pods -n <namespace> | grep nbwsapp
<nbwsapp_pod_name> 4/4 Running 0 9m44s
- Execute the following commands to create the keystore:
$ kubectl exec -it <nbwsapp_pod_name> -n <namespace> -- bash
$ chmod 2750 /usr/openv/var/global/wsl/credentials/tpcredentials/backup/
$ /usr/openv/netbackup/bin/goodies/vxsslcmd pkcs12 -export -inkey /usr/openv/var/global/wsl/credentials/tpcredentials/privatekey.pem -in /usr/openv/var/global/wsl/credentials/tpcredentials/cert.pem -out /tmp/cert.p12 -passin file:/usr/openv/var/global/wsl/credentials/tpcredentials/passphrase.txt -passout file:/usr/openv/var/global/wsl/credentials/tpcredentials/jkskey -name eca
Note:
Ignore the following error message if displayed by the last command above:
unable to write 'random state'
$ ls -l /tmp/cert.p12
-rw-r--r-- 1 nbwebsvc nbwebgrp 4420 Sep 20 19:44 cert.p12
$ export KEYSTORE_PASS=$(cat /usr/openv/var/global/wsl/credentials/tpcredentials/jkskey)
$ /usr/lib/jvm/jre/bin/keytool -storetype BCFKS -providerpath /usr/openv/wmc/webserver/lib/ccj.jar -providerclass com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider -importkeystore -srckeystore /tmp/cert.p12 -srcstoretype pkcs12 -srcstorepass ${KEYSTORE_PASS} -destkeystore /tmp/nbwebservice.bcfks -deststorepass ${KEYSTORE_PASS}
Importing keystore /tmp/cert.p12 to /tmp/nbwebservice.bcfks... Entry for alias eca successfully imported. Import command completed: 1 entries successfully imported, 0 entries failed or cancelled
$ mv /tmp/nbwebservice.bcfks /usr/openv/var/global/wsl/credentials/tpcredentials/
- The new certificate will be automatically applied within 30 minutes, or you can restart the requestrouter pod to apply it immediately by using the following commands:
$ kubectl get pods -n <namespace>
$ kubectl delete pod <requestrouter_pod_name> -n <namespace>
Perform the following steps to remove the external certificate for the Web UI (port 443) and replace it with a certificate issued by the NetBackup Certificate Authority:
- Log in to the host from which the Kubernetes cluster is managed and where the kubectl command is available.
- Execute the following commands to remove the ECA artifacts:
$ kubectl exec -it <primary_pod_name> -n <namespace> -- rm -f /usr/openv/var/global/wsl/credentials/tpcredentials/ca.pem
$ kubectl exec -it <primary_pod_name> -n <namespace> -- rm -f /usr/openv/var/global/wsl/credentials/tpcredentials/cert.pem
$ kubectl exec -it <primary_pod_name> -n <namespace> -- rm -f /usr/openv/var/global/wsl/credentials/tpcredentials/privatekey.pem
$ kubectl exec -it <primary_pod_name> -n <namespace> -- rm -f /usr/openv/var/global/wsl/credentials/tpcredentials/passphrase.txt
$ kubectl exec -it <primary_pod_name> -n <namespace> -- rm -f /usr/openv/var/global/wsl/credentials/tpcredentials/nbwebservice.bcfks
$ kubectl exec -it <primary_pod_name> -n <namespace> -- rm -f /usr/openv/var/global/wsl/credentials/tpcredentials/jkskey
$ kubectl exec -it <primary_pod_name> -n <namespace> -- rm -rf /usr/openv/var/global/wsl/credentials/tpcredentials
- Restart the requestrouter pod.
List all of the pods from the namespace, select the requestrouter pod and delete it.
$ kubectl get pods -n <namespace>
$ kubectl delete pod <requestrouter_pod_name> -n <namespace>
Note:
No certificate configuration is required after completing a disaster recovery of the Cloud Scale environment, as the process automatically restores external certificates.