Cohesity Cloud Scale Technology Manual Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Preparing the environment for NetBackup installation on Kubernetes cluster
- Prerequisites for Snapshot Manager (AKS/EKS)
- Prerequisites for Kubernetes cluster configuration
- Prerequisites for Cloud Scale configuration
- Prerequisites for deploying environment operators
- Prerequisites for using private registry
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Managing media server configurations in Web UI
- Prerequisites
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing logging
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving issues when media server PVs are deleted
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving an issue related to KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes a long time
- Local connection is getting treated as insecure connection
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in cloudscale-values.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Job remains in queue for a long time
- Extracting logs if the nbwsapp or log-viewer pods are down
- Helm installation failed with bundle error
- Deployment fails with private container registry and Postgres fails to pull the images
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Resolving the primary server connection issue
- NetBackup Snapshot Manager deployment on EKS fails
- Wrong EFS ID is provided in cloudscale-values.yaml file
- Primary pod is in ContainerCreating state
- Webhook displays an error for PV not found
- Cluster Autoscaler initialization issue
- Catalog backup job fails with an error (Status 9202)
- Troubleshooting issue for bootstrapper pod
- Troubleshooting issues for kubectl plugin
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Troubleshooting issues for kubectl plugin
If the primary key is manually deleted by editing the secrets/sp-keys in the environment, the key is not automatically recreated by the system. This results in missing or invalid service principal keys, which can affect environment operations.
Workaround:
Perform the following steps to manually recreate the deleted key:
Pause the environment reconciler by using the helm command to set the following parameter:
paused: true
Delete the existing secret by running the following command:
- kubectl delete secrets/sp-keys -n netbackup
Delete all the API keys using the following API calls with a valid JWT token:
Trigger the GET netbackup/security/service-principal-configs API.
From the response, note the id value of each service principal configuration.
Trigger the DELETE security/service-principal-configs/{id} API for each id captured above.
Restart the operator pod by deleting the operator pod:
- kubectl delete pod <operator-pod-name> -n netbackup-operator-system
Unpause the environment reconciler by using the helm command to set the following parameter:
paused: false
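The API-key cleanup in the steps above can be sketched as a small shell loop. This is only a sketch: the JSON response shape ({"data":[{"id":...}]}) and the primary-server URL are assumptions rather than the documented API schema, so substitute your actual endpoint, response fields, and JWT token.

```shell
# Sample response standing in for the real GET
# netbackup/security/service-principal-configs call (shape is an assumption).
sample_response='{"data":[{"id":"sp-101"},{"id":"sp-102"}]}'

# Extract each id value without requiring jq.
ids=$(printf '%s' "$sample_response" | tr ',' '\n' \
  | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')

for id in $ids; do
  # Real call (commented out; requires a valid JWT token and your primary host):
  # curl -X DELETE -H "Authorization: Bearer $JWT" \
  #   "https://<primary-server>/netbackup/security/service-principal-configs/$id"
  echo "DELETE security/service-principal-configs/$id"
done
```

Replace the sample response with the output of the GET call and uncomment the curl line once the ids look correct.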
During the Cloud Scale installation, if the process breaks or stops midway, the deployment does not complete successfully and must be resumed from the point of failure.
Workaround:
To resume the installation process:
Run the following command:
kubectl-cloudscale install
When prompted to resume the installation, type y to continue from where it stopped.
The plugin will automatically read the previously saved inputs and resume the installation from the point of interruption.
If a user enters an incorrect Cloud Scale folder path or provides any wrong input during the upgrade process, there is no option to correct it. Re-triggering the plugin simply skips to the next question without allowing the user to re-enter inputs.
Workaround:
To reset and re-enter all upgrade inputs:
Delete the following file:
rm /home/<user-name>/.cloudscale/upgrade.csv
Re-run the upgrade command:
kubectl-cloudscale upgrade
The plugin will prompt for all required user inputs again.
Following is an example of the logs:
*******************************Checking for cert-manager******************************** {Component: Installation of helm chart, ComponentName: jetstack/cert-manager}
INFO: 2025/09/26 12:38:32 logger.go:132: Checking if Cert Manager is installed or not {Component: Get pod status, ComponentName: Dependency of Cloudscale component}
INFO: 2025/09/26 12:38:32 logger.go:132: Waiting 10 seconds before retrying... {Component: Get pod status, ComponentName: Dependency of Cloudscale component}
INFO: 2025/09/26 12:38:50 logger.go:132: Cloudscale Upgrade
Before you proceed with upgrade, please ensure the following prerequisites are in place:
1. Infrastructure readiness:
- The Kubernetes cluster is up and running
- Cloudscale environment is up and running
2. Required container images:
- All Cloudscale-related images of the version you would like to upgrade to are pushed to your container registry
3. Helm setup:
- Helm is installed and configured
- The "jetstack" repository is added: helm repo add jetstack https://charts.jetstack.io
- The cert-manager and trust-manager charts are installed via helm
Also review the 'Prerequisites for Cloud Scale Technology upgrade' section in the 'NetBackup™ Deployment Guide for Kubernetes Clusters' document for additional required steps.
Once everything is ready, you can safely continue with the upgrade. {Component: CloudScale, ComponentName: Upgrade of CloudScale}
INFO: 2025/09/26 12:38:50 logger.go:132: Would you like to continue? (y/n): {Component: CloudScale, ComponentName: Upgrade of CloudScale}
INFO: 2025/09/26 12:39:00 logger.go:132: Helm Version: version.BuildInfo{Version:"v3.18.6", GitCommit:"b76a950f6835474e0906b96c9ec68a2eff3a6430", GitTreeState:"clean", GoVersion:"go1.24.6"} {Component: Precheck Config, ComponentName: Installation of CloudScale}
INFO: 2025/09/26 12:39:00 logger.go:132: kubectl Version: Client Version: v1.34.1 Kustomize Version: v5.7.1 Server Version: v1.32.6 {Component: Precheck Config, ComponentName: Installation of CloudScale}
INFO: 2025/09/26 12:39:00 logger.go:132: ************************************************** {Component: CloudScale Upgrade, ComponentName: Input Configuration}
INFO: 2025/09/26 12:39:00 logger.go:129: Checking if the input file already exists.
INFO: 2025/09/26 12:39:00 logger.go:132: Data is being read from an input file that is present. {Component: CloudScale, ComponentName: Input Configuration}
INFO: 2025/09/26 12:39:00 logger.go:132: The following values were loaded from the file: {Component: CloudScale Upgrade, ComponentName: Input Configuration}
The plugin was cancelled while checking for Cert Manager installation. When the upgrade was triggered again, it skipped the Cert Manager validation step and proceeded to the next question instead of restarting the validation process.
Workaround:
Delete the following file and re-run the upgrade:
rm /home/<user-name>/.cloudscale/upgrade.csv
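As a minimal sketch, the reset amounts to removing the saved-inputs file so the next run prompts for everything again (path taken from this section, with <user-name> expanded to $HOME):

```shell
# Remove the plugin's saved-inputs file, if present, so the next run
# re-prompts for all inputs and re-runs all validation steps.
inputs="$HOME/.cloudscale/upgrade.csv"
if [ -f "$inputs" ]; then
  rm "$inputs"
  echo "removed saved inputs; the next run prompts for everything"
fi
# kubectl-cloudscale upgrade   # re-run the upgrade after the reset
```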
The plugin was cancelled before completing the operator upgrade. When the upgrade was triggered again, it failed due to the Helm release being stuck in a pending-upgrade state with the following error:
Operator Namespace : netbackup-operator-system
Upgrade of operators started...
Helm upgrade of operators failed with error: another operation (install/upgrade/rollback) is in progress
Operators chart failed to upgrade.
Error while upgrading operators chart : another operation (install/upgrade/rollback) is in progress
Upgrade failed:
Workaround:
To resolve this issue:
Check for pending Helm releases:
helm ls -A --pending
If the operator's release is in a pending-upgrade state, list previous revisions:
helm history operators --namespace <operator_namespace>
Identify the most recent revision with a deployed or superseded status and roll back to that version:
helm rollback operators <REVISION> --namespace <operator_namespace>
This clears the pending-upgrade lock caused by the interrupted upgrade.
Once the rollback completes, rerun the plugin upgrade command:
kubectl-cloudscale upgrade
When prompted to resume the installation, type y to continue from where it stopped.
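The revision selection above can be sketched as a small filter over the helm history output. The sample history below is illustrative only (run the real helm history command against your cluster), and the release and namespace names are the placeholders used in this section:

```shell
# Illustrative `helm history operators --namespace <operator_namespace>` output.
history='REVISION  STATUS
1         superseded
2         deployed
3         pending-upgrade'

# Pick the most recent revision whose status is deployed or superseded;
# that is the rollback target that clears the pending-upgrade lock.
rev=$(printf '%s\n' "$history" \
  | awk '$2 == "deployed" || $2 == "superseded" {r = $1} END {print r}')

echo "helm rollback operators $rev --namespace <operator_namespace>"
# → helm rollback operators 2 --namespace <operator_namespace>
```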
The plugin crashed while annotating the environment resources immediately after the operator upgrade.
Workaround:
Trigger the upgrade process again using the kubectl-cloudscale plugin:
kubectl-cloudscale upgrade
When prompted to resume the installation, type y to continue the upgrade from where it stopped.
If the plugin crashes at any point before Helm triggers the Cloud Scale upgrade command, the upgrade process is interrupted and cannot complete successfully.
Workaround:
Trigger the upgrade process using the kubectl-cloudscale plugin:
kubectl-cloudscale upgrade
When prompted to resume the installation, type y to continue the upgrade from where it stopped.
The media server pod is continuously restarting. Upon inspection, it is observed that the mount point /mnt/nblogs inside the media pod is in a read-only state, which is likely causing the restarts.
Perform the following steps to verify if the mount point is read-only:
- Exec into the media pod by running the following command:
kubectl exec -it <media-pod-name> -- bash
- Run the following commands to check the mount point permissions:
cd /mnt/nblogs/fluentbit
touch test
Expected output (in case of an issue):
touch: cannot touch 'test': Read-only file system
The above error message confirms that the /mnt/nblogs mount is mounted as read-only.
Workaround:
Restart the media pod by deleting the affected pod:
kubectl delete pod <media-pod-name>
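The writability check above can be wrapped in a small helper and run inside the media pod (for example, via kubectl exec). This is a sketch only; the probe filename is arbitrary, and the /tmp invocation at the end is just a local demonstration:

```shell
# Probe whether a directory is writable by creating and removing a test file,
# mirroring the manual `touch test` check from the steps above.
check_writable() {
  dir="$1"
  if touch "$dir/.rw-probe" 2>/dev/null; then
    rm -f "$dir/.rw-probe"
    echo "writable"
  else
    echo "read-only"
  fi
}

# Inside the media pod you would run: check_writable /mnt/nblogs/fluentbit
check_writable /tmp   # → writable
```

If the function prints read-only for /mnt/nblogs/fluentbit, delete the media pod as described above so it is recreated with a healthy mount.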
Image validation can fail due to one or more of the following reasons:
Invalid container registry name.
An incorrect or non-existent registry name was provided.
Example:
Image validation failed, failed to configure transport: error pinging v2 registry: Get "https://wrong.azurecr.io/v2/": dial tcp: lookup wrong.azurecr.io on 168.63.129.16:53: no such host
Container registry not logged in on the host VM.
The container registry is not logged in using the same user account that runs the plugin on the host VM.
Example:
Image validation failed, Get "https://CloudscaleACRshantaram.azurecr.io/v2/netbackup/dbm/manifests/11.1.0.2-0013": unauthorized: authentication required
Incorrect or non-existent image tags.
The specified image tags do not exist or have not been pushed to the container registry.
Example:
Image validation failed, no such manifest: nbuk8sreg.azurecr.io/netbackup/dbm:wrong
Workaround: Ensure that all inputs are correct and that the container registry is logged in properly.
To skip image validation for tags, run the following command:
kubectl create configmap cs-image-validation-config -n <netbackup-namespace> --from-literal=SKIP_IMAGE_VALIDATION=true
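As a triage aid, the three failure patterns above can be matched mechanically. The mapping below is a sketch based only on the example messages in this section, not an exhaustive list of validation errors:

```shell
# Map an image-validation error message to its likely cause, using the
# distinctive substrings from the three example failures documented above.
classify_image_error() {
  case "$1" in
    *"no such host"*)            echo "invalid container registry name" ;;
    *"authentication required"*) echo "registry not logged in on the host VM" ;;
    *"no such manifest"*)        echo "incorrect or non-existent image tag" ;;
    *)                           echo "unknown image validation error" ;;
  esac
}

classify_image_error 'Image validation failed, no such manifest: nbuk8sreg.azurecr.io/netbackup/dbm:wrong'
# → incorrect or non-existent image tag
```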