Cohesity Cloud Scale Technology Manual Deployment Guide for Kubernetes Clusters

Last Published:
Product(s): NetBackup (11.1.0.2)
  1. Introduction
    1. About Cloud Scale deployment
      1. Decoupling of NetBackup web services from primary server
      2. Decoupling of NetBackup Policy and Job Management from primary server
      3. Decoupling of NetBackup Database Manager from primary server
      4. Logging feature (fluentbit) in Cloud Scale
    2. About NetBackup Snapshot Manager
    3. Required terminology
    4. User roles and permissions
  2. Section I. Configurations
    1. Prerequisites
      1. Preparing the environment for NetBackup installation on Kubernetes cluster
      2. Prerequisites for Snapshot Manager (AKS/EKS)
      3. Prerequisites for Kubernetes cluster configuration
        1. Config-Checker utility
        2. Data-Migration for AKS
        3. Webhooks validation for EKS
      4. Prerequisites for Cloud Scale configuration
        1. Cluster specific settings
        2. Cloud specific settings
      5. Prerequisites for deploying environment operators
      6. Prerequisites for using private registry
    2. Recommendations and Limitations
      1. Recommendations of NetBackup deployment on Kubernetes cluster
      2. Limitations of NetBackup deployment on Kubernetes cluster
      3. Recommendations and limitations for Cloud Scale deployment
    3. Configurations
      1. Contents of the TAR file
      2. Configuring the cloudscale-values.yaml file
      3. Configuring an External Certificate Authority for Web UI port 443
      4. Loading docker images
        1. Installing the docker images for NetBackup
        2. Installing the docker images for Snapshot Manager
        3. Installing the docker images and binaries for MSDP Scaleout
      5. Configuring NetBackup
        1. Primary and media server CR
          1. After installing primary server CR
          2. After Installing the media server CR
        2. Elastic media server
    4. Configuration of key parameters in Cloud Scale deployments
      1. Tuning touch files
      2. Setting maximum jobs per client
      3. Setting maximum jobs per media server
      4. Enabling intelligent catalog archiving
      5. Enabling security settings
      6. Configuring email server
      7. Reducing catalog storage management
      8. Configuring zone redundancy
      9. Enabling client-side deduplication capabilities
      10. Parameters for logging (fluentbit)
      11. Managing media server configurations in Web UI
  3. Section II. Deployment
    1. Deploying Cloud Scale
      1. How to deploy Cloud Scale
      2. Prerequisites for Cloud Scale deployment
      3. Deploying the operators
      4. Deploying Cloud Scale using Helm chart
        1. Installing Cloud Scale environment
        2. Single node Cloud Scale Technology deployment
          1. Steps to deploy Cloud Scale in single node
      5. Deploying Cloud Scale using kubectl plugin
      6. Verifying Cloud Scale deployment
      7. Post Cloud Scale deployment tasks
        1. Restarting Cloud Scale Technology services
  4. Section III. Monitoring and Management
    1. Monitoring NetBackup
      1. Monitoring the application health
      2. Telemetry reporting
      3. About NetBackup operator logs
      4. Monitoring Primary/Media server CRs
      5. Expanding storage volumes
      6. Allocating static PV for Primary and Media pods
        1. Expanding log volumes for primary pods
        2. Recommendation for media server volume expansion
        3. (AKS-specific) Allocating static PV for Primary and Media pods
        4. (EKS-specific) Allocating static PV for Primary and Media pods
    2. Monitoring Snapshot Manager
      1. Overview
      2. Configuration parameters
      3. Snapshot Manager manual certificate renewal in Cloud Scale
    3. Monitoring fluentbit
      1. Monitoring fluentbit for logging
    4. Monitoring MSDP Scaleout
      1. About MSDP Scaleout status and events
      2. Monitoring with Amazon CloudWatch
      3. Monitoring with Azure Container insights
      4. The Kubernetes resources for MSDP Scaleout and MSDP operator
    5. Managing NetBackup
      1. Managing NetBackup deployment using VxUpdate
      2. Updating the Primary/Media server CRs
      3. Migrating the cloud node for primary or media servers
      4. Migrating cpServer controlPlane node
    6. Managing the Load Balancer service
      1. About the Load Balancer service
      2. Notes for Load Balancer service
      3. Opening the ports from the Load Balancer service
      4. Steps for upgrading Cloud Scale from multiple media load balancer to none
    7. Managing PostgreSQL DBaaS
      1. Changing database server password in DBaaS
      2. Updating database certificate in DBaaS
    8. Managing logging
      1. Viewing NetBackup logs
      2. Extracting NetBackup logs
    9. Performing catalog backup and recovery
      1. Backing up a catalog
      2. Restoring a catalog
        1. Primary server corrupted
        2. MSDP-X corrupted
        3. MSDP-X and Primary server corrupted
  5. Section IV. Maintenance
    1. PostgreSQL DBaaS Maintenance
      1. Configuring maintenance window for PostgreSQL database in AWS
      2. Setting up alarms for PostgreSQL DBaaS instance
    2. Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
      1. Overview
      2. Patching of primary containers
      3. Patching of media containers
      4. Patching of fluentbit collector pods
      5. Update containerized PostgreSQL pod
    3. Upgrading
      1. Upgrading Cloud Scale Technology
        1. Prerequisites for Cloud Scale Technology upgrade
        2. Upgrade the cluster
        3. Upgrade Cloud Scale
          1. Upgrade Cloud Scale using the kubectl plugin
          2. Manual upgrade of Cloud Scale using the Superchart
        4. Configuring NetBackup IT Analytics for NetBackup deployment
    4. Cloud Scale Disaster Recovery
      1. Cluster backup
      2. Environment backup
      3. Cluster recovery
      4. Cloud Scale recovery
      5. Environment Disaster Recovery
      6. DBaaS Disaster Recovery
    5. Uninstalling
      1. Uninstalling Cloud Scale Technology
    6. Troubleshooting
      1. Troubleshooting AKS and EKS issues
        1. View the list of operator resources
        2. View the list of product resources
        3. View operator logs
        4. View primary logs
        5. Socket connection failure
        6. Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
        7. Resolving the issue where the NetBackup server pod is not scheduled for long time
        8. Resolving an issue where the Storage class does not exist
        9. Resolving an issue where the primary server or media server deployment does not proceed
        10. Resolving an issue of failed probes
        11. Resolving issues when media server PVs are deleted
        12. Resolving an issue related to insufficient storage
        13. Resolving an issue related to invalid nodepool
        14. Resolve an issue related to KMS database
        15. Resolve an issue related to pulling an image from the container registry
        16. Resolving an issue related to recovery of data
        17. Check primary server status
        18. Pod status field shows as pending
        19. Ensure that the container is running the patched image
        20. Getting EEB information from an image, a running container, or persistent data
        21. Resolving the certificate error issue in NetBackup operator pod logs
        22. Pod restart failure due to liveness probe time-out
        23. NetBackup messaging queue broker take more time to start
        24. Host mapping conflict in NetBackup
        25. Issue with capacity licensing reporting which takes longer time
        26. Local connection is getting treated as insecure connection
        27. Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
        28. Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
        29. Taint, Toleration, and Node affinity related issues in cpServer
        30. Operations performed on cpServer in cloudscale-values.yaml file are not reflected
        31. Elastic media server related issues
        32. Failed to register Snapshot Manager with NetBackup
        33. Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
        34. Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
        35. Request router logs
        36. Issues with NBPEM/NBJM
        37. Issues with logging feature for Cloud Scale
        38. The flexsnap-listener pod is unable to communicate with RabbitMQ
        39. Job remains in queue for long time
        40. Extracting logs if the nbwsapp or log-viewer pods are down
        41. Helm installation failed with bundle error
        42. Deployment fails with private container registry and Postgres fails to pull the images
      2. Troubleshooting AKS-specific issues
        1. Data migration unsuccessful even after changing the storage class through the storage yaml file
        2. Host validation failed on the target host
        3. Primary pod goes in non-ready state
      3. Troubleshooting EKS-specific issues
        1. Resolving the primary server connection issue
        2. NetBackup Snapshot Manager deployment on EKS fails
        3. Wrong EFS ID is provided in cloudscale-values.yaml file
        4. Primary pod is in ContainerCreating state
        5. Webhook displays an error for PV not found
        6. Cluster Autoscaler initialization issue
        7. Catalog backup job fails with an error (Status 9202)
      4. Troubleshooting issue for bootstrapper pod
      5. Troubleshooting issues for kubectl plugin
  6. Appendix A. CR template
    1. Secret
    2. MSDP Scaleout CR
      1. MSDP Scaleout CR template for AKS
      2. MSDP Scaleout CR template for EKS
  7. Appendix B. MSDP Scaleout
    1. About MSDP Scaleout
    2. Prerequisites for MSDP Scaleout (AKS/EKS)
    3. Limitations in MSDP Scaleout
    4. MSDP Scaleout configuration
      1. Initializing the MSDP operator
      2. Configuring MSDP Scaleout
      3. Configuring the MSDP cloud in MSDP Scaleout
      4. Using MSDP Scaleout as a single storage pool in NetBackup
      5. Using S3 service in MSDP Scaleout
      6. Enabling MSDP S3 service after MSDP Scaleout is deployed
    5. Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
    6. Deploying MSDP Scaleout
    7. Managing MSDP Scaleout
      1. Adding MSDP engines
      2. Adding data volumes
      3. Expanding existing data or catalog volumes
        1. Manual storage expansion
      4. MSDP Scaleout scaling recommendations
      5. MSDP Cloud backup and disaster recovery
        1. About the reserved storage space
        2. Cloud LSU disaster recovery
          1. Recovering MSDP S3 IAM configurations from cloud LSU
      6. MSDP multi-domain support
      7. Configuring Auto Image Replication
      8. About MSDP Scaleout logging and troubleshooting
        1. Collecting the logs and the inspection information
    8. MSDP Scaleout maintenance
      1. Pausing the MSDP Scaleout operator for maintenance
      2. Logging in to the pods
      3. Reinstalling MSDP Scaleout operator
      4. Migrating the MSDP Scaleout to another node pool

Troubleshooting issues for kubectl plugin

Primary key not recreated after manual deletion from secrets/sp-keys

If the primary key is manually deleted by editing the secrets/sp-keys secret in the environment, the system does not recreate the key automatically. This results in missing or invalid service principal keys, which can affect environment operations.

Workaround:

Perform the following steps to manually recreate the deleted key:

  1. Pause the environment reconciler by using the helm command to set the following parameter:

    paused: true

  2. Delete the existing secret by running the following command:

    kubectl delete secrets/sp-keys -n netbackup

  3. Delete all the API keys by using the following API calls with a valid JWT token:

    • Trigger GET netbackup/security/service-principal-configs API.

    • Find all the service principal configurations where "servicePrincipalType"="netbackup-operator" and note the value of servicePrincipalId for all these service principal configurations.

    • Trigger DELETE security/service-principal-configs/{id} API for each servicePrincipalId captured above.

  4. Restart the operator pod by deleting the operator pod:

    kubectl delete pod <operator-pod-name> -n netbackup-operator-system

  5. Unpause the environment reconciler by using the helm command to set the following parameter:

    paused: false
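
The API calls in step 3 lend themselves to a small script once the servicePrincipalId values are noted. The sketch below only assembles the DELETE URLs; the host name and the IDs are placeholders, and the actual call would add the valid JWT token (for example, via curl with an Authorization: Bearer header):

```shell
# Assemble one DELETE URL per servicePrincipalId noted in step 3.
# primary_host and the IDs are placeholders -- substitute real values.
primary_host="primary.example.com"
ids="sp-id-1 sp-id-2"

urls=""
for id in $ids; do
  urls="$urls https://$primary_host/netbackup/security/service-principal-configs/$id"
done
echo "$urls"
```

Each assembled URL would then be invoked with an HTTP DELETE request carrying the JWT token, as described in step 3.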

Cloud Scale installation interrupted midway

During the Cloud Scale installation, if the process breaks or stops midway, the deployment does not complete successfully and must be resumed from the point of failure.

Workaround:

To resume the installation process:

  1. Run the following command:

    kubectl-cloudscale install

  2. When prompted to resume the installation, type yes to continue from where it stopped.

The plugin will automatically read the previously saved inputs and resume the installation from the point of interruption.

Unable to re-enter inputs during Cloud Scale upgrade

If a user enters an incorrect Cloud Scale folder path or provides any wrong input during the upgrade process, there is no option to correct it. Re-triggering the plugin simply skips to the next question without allowing the user to re-enter inputs.

Workaround:

To reset and re-enter all upgrade inputs:

  1. Delete the following file:

    rm /home/<user-name>/.cloudscale/upgrade.csv

  2. Re-run the upgrade command:

    kubectl-cloudscale upgrade

    The plugin will prompt for all required user inputs again.

    Following is an example of the logs:

    *******************************Checking for cert-manager********************************
    {Component: Installation of helm chart, ComponentName: jetstack/cert-manager}
    INFO: 2025/09/26 12:38:32 logger.go:132: Checking if Cert Manager is installed or not {Component: Get pod status, ComponentName: Dependency of Cloudscale component}
    INFO: 2025/09/26 12:38:32 logger.go:132: Waiting 10 seconds before retrying... {Component: Get pod status, ComponentName: Dependency of Cloudscale component}
    INFO: 2025/09/26 12:38:50 logger.go:132: Cloudscale Upgrade
    
    Before you proceed with upgrade, please ensure the following prerequisites are in place:
        
        1. Infrastructure readiness:
           - The Kubernetes cluster is up and running
           - Cloudscale environment is up and running
        
        2. Required container images:
           - All Cloudscale-related images of the version you would like to upgrade to
             are pushed to your container registry
        
        3. Helm setup:
           - Helm is installed and configured
           - The "jetstack" repository is added:
             helm repo add jetstack https://charts.jetstack.io
           - The cert-manager and trust-manager charts are installed via helm
    
    Also review the 'Prerequisites for Cloud Scale Technology upgrade' section in the
    'NetBackup™ Deployment Guide for Kubernetes Clusters' document for additional required steps.
    
    Once everything is ready, you can safely continue with the upgrade.
    {Component: CloudScale, ComponentName: Upgrade of CloudScale}
    INFO: 2025/09/26 12:38:50 logger.go:132: Would you like to continue? (y/n):  {Component: CloudScale, ComponentName: Upgrade of CloudScale}
    INFO: 2025/09/26 12:39:00 logger.go:132: Helm Version: version.BuildInfo{Version:"v3.18.6", GitCommit:"b76a950f6835474e0906b96c9ec68a2eff3a6430", GitTreeState:"clean", GoVersion:"go1.24.6"}
    {Component: Precheck Config, ComponentName: Installation of CloudScale}
    INFO: 2025/09/26 12:39:00 logger.go:132: kubectl Version: Client Version: v1.34.1
    Kustomize Version: v5.7.1
    Server Version: v1.32.6
    {Component: Precheck Config, ComponentName: Installation of CloudScale}
    INFO: 2025/09/26 12:39:00 logger.go:132: **************************************************
    {Component: CloudScale Upgrade, ComponentName: Input Configuration}
    INFO: 2025/09/26 12:39:00 logger.go:129:
    Checking if the input file already exists.
    
    INFO: 2025/09/26 12:39:00 logger.go:132: Data is being read from an input file that is present.
    {Component: CloudScale, ComponentName: Input Configuration}
    INFO: 2025/09/26 12:39:00 logger.go:132: The following values were loaded from the file:
    {Component: CloudScale Upgrade, ComponentName: Input Configuration}
    
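The reset in the workaround above can also be done defensively, backing up the saved-inputs file rather than deleting it outright. The following sketch runs against a scratch directory standing in for /home/<user-name>; the CSV contents shown are illustrative only:

```shell
# Back up, rather than delete, the saved upgrade inputs so the old answers
# remain inspectable. A scratch directory stands in for the user's home.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/.cloudscale"
printf 'cloudscale_folder,/wrong/path\n' > "$demo_home/.cloudscale/upgrade.csv"

state="$demo_home/.cloudscale/upgrade.csv"
if [ -f "$state" ]; then
  mv "$state" "$state.bak"   # keep the previous answers for reference
fi
```

Once the file is out of the way, re-running kubectl-cloudscale upgrade prompts for all required inputs again.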

Plugin upgrade interrupted or skipped validation checks
  • The plugin was cancelled while checking for Cert Manager installation. When the upgrade was triggered again, it skipped the Cert Manager validation step and proceeded to the next question instead of restarting the validation process.

    Workaround:

    Delete the following file and re-run the upgrade:

    rm /home/<user-name>/.cloudscale/upgrade.csv

  • The plugin was cancelled before completing the operator upgrade. When the upgrade was triggered again, it failed due to the Helm release being stuck in a pending-upgrade state with the following error:

    Operator Namespace : netbackup-operator-system
    Upgrade of operators started...
    Helm upgrade of operators failed with error: another operation (install/upgrade/rollback) is in progress
    Operators chart failed to upgrade.
    Error while upgrading operators chart : another operation (install/upgrade/rollback) is in progress
    Upgrade failed:

    Workaround:

    To resolve this issue:

    • Check for pending Helm releases:

      helm ls -A --pending

    • If the operator's release is in a pending-upgrade state, list previous revisions:

      helm history operators --namespace <operator_namespace>

    • Identify a revision with the status deployed or superseded and roll back to that version:

      helm rollback operators <REVISION> --namespace <operator_namespace>

      This clears the pending-upgrade lock caused by the interrupted upgrade.

    • Once the rollback completes, rerun the plugin upgrade command:

      kubectl-cloudscale upgrade

    • When prompted to resume the installation, type yes to continue from where it stopped.
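
Choosing the rollback revision can be scripted by filtering the helm history output for the newest entry whose status is deployed or superseded. The sketch below runs against simplified, tab-separated sample output; real helm history prints additional columns (a multi-word date, app version), so the field positions would need adjusting in practice:

```shell
# Pick the newest revision whose status is deployed or superseded.
# history_sample is a simplified stand-in for:
#   helm history operators --namespace <operator_namespace>
history_sample=$(printf '1\t2025-09-20\tsuperseded\toperators-1.0\n2\t2025-09-25\tdeployed\toperators-1.1\n3\t2025-09-26\tpending-upgrade\toperators-1.1\n')

rev=$(printf '%s\n' "$history_sample" | \
  awk -F'\t' '$3 == "deployed" || $3 == "superseded" { r = $1 } END { print r }')
echo "helm rollback operators $rev --namespace <operator_namespace>"
```

With the sample data above, revision 2 (the last deployed entry) is selected for the rollback.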

Plugin crash after operator upgrade

The plugin crashed while annotating the environment resources immediately after the operator upgrade.

Workaround:

  1. Trigger the upgrade process again using the kubectl-cloudscale plugin:

    kubectl-cloudscale upgrade

  2. When prompted to resume the installation, type yes to continue the upgrade from where it stopped.

Plugin crash before Helm triggers Cloud Scale upgrade

If the plugin crashes at any point before Helm triggers the Cloud Scale upgrade command, the upgrade process is interrupted and cannot complete successfully.

Workaround:

  1. Trigger the upgrade process using the kubectl-cloudscale plugin:

    kubectl-cloudscale upgrade

  2. When prompted to resume the installation, type yes to continue the upgrade from where it stopped.

Media server pod restarting due to read-only mount

The media server pod restarts continuously. On inspection, the mount point /mnt/nblogs inside the media pod is found to be read-only, which causes the restarts.

Perform the following steps to verify whether the mount point is read-only:

  1. Exec into the media pod by running the following command:

    kubectl exec -it <media-pod-name> -- bash

  2. Run the following command to check the mount point permissions:

    cd /mnt/nblogs/fluentbit

    touch test

    Expected output (in case of an issue):

    touch: cannot touch 'test': Read-only file system

    This error message confirms that /mnt/nblogs is mounted read-only.

Workaround:

Restart the media pod by deleting the affected pod with the following command:

kubectl delete pod <media-pod-name>

Image validation fails after providing image tag input

Image validation can fail due to one or more of the following reasons:

  • Invalid container registry name.

    An incorrect or non-existent registry name was provided.

    Example:

    Image validation failed, failed to configure transport: error pinging v2 registry: 
    Get "https://wrong.azurecr.io/v2/": dial tcp: lookup wrong.azurecr.io on 168.63.129.16:53: no such host
  • Container registry not logged in on the host VM.

    You are not logged in to the container registry with the same user account that runs the plugin on the host VM.

    Example:

    Image validation failed, 
    Get "https://CloudscaleACRshantaram.azurecr.io/v2/netbackup/dbm/manifests/11.1.0.2-0013": 
    unauthorized: authentication required
  • Incorrect or non-existent image tags.

    The specified image tags do not exist or have not been pushed to the container registry.

    Example:

    Image validation failed, no such manifest: 
    nbuk8sreg.azurecr.io/netbackup/dbm:wrong

Workaround: Ensure that all inputs are correct and that you are logged in to the container registry with the account that runs the plugin.
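
As a quick local sanity check before re-running the plugin, the registry, repository, and tag portions of an image reference can be split apart and inspected. The image name below reuses the example from above, and the parsing is a simplification that assumes the common registry/repository:tag shape:

```shell
# Split an image reference into registry, repository, and tag for inspection.
# Assumes the registry/repository:tag form (no digest, tag always present).
image="nbuk8sreg.azurecr.io/netbackup/dbm:11.1.0.2-0013"

registry="${image%%/*}"   # text before the first slash
rest="${image#*/}"        # repository:tag
repo="${rest%:*}"         # repository path
tag="${rest##*:}"         # tag after the last colon
echo "registry=$registry repo=$repo tag=$tag"
```

Compare each printed field against what was actually pushed to the registry before answering the plugin's image tag prompt again.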

To skip image validation for tags, run the following command:

kubectl create configmap cs-image-validation-config -n <netbackup-namespace> --from-literal=SKIP_IMAGE_VALIDATION=true