Cohesity Cloud Scale Technology Manual Deployment Guide for Kubernetes Clusters

Last Published:
Product(s): NetBackup (11.1.0.2)
  1. Introduction
    1. About Cloud Scale deployment
      1.  
        Decoupling of NetBackup web services from primary server
      2.  
        Decoupling of NetBackup Policy and Job Management from primary server
      3.  
        Decoupling of NetBackup Database Manager from primary server
      4.  
        Logging feature (fluentbit) in Cloud Scale
    2.  
      About NetBackup Snapshot Manager
    3.  
      Required terminology
    4.  
      User roles and permissions
  2. Section I. Configurations
    1. Prerequisites
      1.  
        Preparing the environment for NetBackup installation on Kubernetes cluster
      2.  
        Prerequisites for Snapshot Manager (AKS/EKS)
      3. Prerequisites for Kubernetes cluster configuration
        1.  
          Config-Checker utility
        2.  
          Data-Migration for AKS
        3.  
          Webhooks validation for EKS
      4. Prerequisites for Cloud Scale configuration
        1.  
          Cluster specific settings
        2.  
          Cloud specific settings
      5.  
        Prerequisites for deploying environment operators
      6.  
        Prerequisites for using private registry
    2. Recommendations and Limitations
      1.  
        Recommendations of NetBackup deployment on Kubernetes cluster
      2.  
        Limitations of NetBackup deployment on Kubernetes cluster
      3.  
        Recommendations and limitations for Cloud Scale deployment
    3. Configurations
      1.  
        Contents of the TAR file
      2.  
        Configuring the cloudscale-values.yaml file
      3.  
        Configuring an External Certificate Authority for Web UI port 443
      4. Loading docker images
        1.  
          Installing the docker images for NetBackup
        2.  
          Installing the docker images for Snapshot Manager
        3.  
          Installing the docker images and binaries for MSDP Scaleout
      5. Configuring NetBackup
        1. Primary and media server CR
          1.  
            After installing primary server CR
          2.  
            After Installing the media server CR
        2.  
          Elastic media server
    4. Configuration of key parameters in Cloud Scale deployments
      1.  
        Tuning touch files
      2.  
        Setting maximum jobs per client
      3.  
        Setting maximum jobs per media server
      4.  
        Enabling intelligent catalog archiving
      5.  
        Enabling security settings
      6.  
        Configuring email server
      7.  
        Reducing catalog storage management
      8.  
        Configuring zone redundancy
      9.  
        Enabling client-side deduplication capabilities
      10.  
        Parameters for logging (fluentbit)
      11.  
        Managing media server configurations in Web UI
  3. Section II. Deployment
    1. Deploying Cloud Scale
      1.  
        How to deploy Cloud Scale
      2.  
        Prerequisites for Cloud Scale deployment
      3.  
        Deploying the operators
      4. Deploying Cloud Scale using Helm chart
        1.  
          Installing Cloud Scale environment
        2. Single node Cloud Scale Technology deployment
          1.  
            Steps to deploy Cloud Scale in single node
      5.  
        Deploying Cloud Scale using kubectl plugin
      6.  
        Verifying Cloud Scale deployment
      7. Post Cloud Scale deployment tasks
        1.  
          Restarting Cloud Scale Technology services
  4. Section III. Monitoring and Management
    1. Monitoring NetBackup
      1.  
        Monitoring the application health
      2.  
        Telemetry reporting
      3.  
        About NetBackup operator logs
      4.  
        Monitoring Primary/Media server CRs
      5.  
        Expanding storage volumes
      6. Allocating static PV for Primary and Media pods
        1.  
          Expanding log volumes for primary pods
        2.  
          Recommendation for media server volume expansion
        3.  
          (AKS-specific) Allocating static PV for Primary and Media pods
        4.  
          (EKS-specific) Allocating static PV for Primary and Media pods
    2. Monitoring Snapshot Manager
      1.  
        Overview
      2.  
        Configuration parameters
      3.  
        Snapshot Manager manual certificate renewal in Cloud Scale
    3. Monitoring fluentbit
      1.  
        Monitoring fluentbit for logging
    4. Monitoring MSDP Scaleout
      1.  
        About MSDP Scaleout status and events
      2.  
        Monitoring with Amazon CloudWatch
      3.  
        Monitoring with Azure Container insights
      4.  
        The Kubernetes resources for MSDP Scaleout and MSDP operator
    5. Managing NetBackup
      1.  
        Managing NetBackup deployment using VxUpdate
      2.  
        Updating the Primary/Media server CRs
      3.  
        Migrating the cloud node for primary or media servers
      4.  
        Migrating cpServer controlPlane node
    6. Managing the Load Balancer service
      1.  
        About the Load Balancer service
      2.  
        Notes for Load Balancer service
      3.  
        Opening the ports from the Load Balancer service
      4.  
        Steps for upgrading Cloud Scale from multiple media load balancer to none
    7. Managing PostgreSQL DBaaS
      1.  
        Changing database server password in DBaaS
      2.  
        Updating database certificate in DBaaS
    8. Managing logging
      1.  
        Viewing NetBackup logs
      2.  
        Extracting NetBackup logs
    9. Performing catalog backup and recovery
      1.  
        Backing up a catalog
      2. Restoring a catalog
        1.  
          Primary server corrupted
        2.  
          MSDP-X corrupted
        3.  
          MSDP-X and Primary server corrupted
  5. Section IV. Maintenance
    1. PostgreSQL DBaaS Maintenance
      1.  
        Configuring maintenance window for PostgreSQL database in AWS
      2.  
        Setting up alarms for PostgreSQL DBaaS instance
    2. Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
      1.  
        Overview
      2.  
        Patching of primary containers
      3.  
        Patching of media containers
      4.  
        Patching of fluentbit collector pods
      5.  
        Update containerized PostgreSQL pod
    3. Upgrading
      1. Upgrading Cloud Scale Technology
        1.  
          Prerequisites for Cloud Scale Technology upgrade
        2.  
          Upgrade the cluster
        3. Upgrade Cloud Scale
          1.  
            Upgrade Cloud Scale using the kubectl plugin
          2.  
            Manual upgrade of Cloud Scale using the Superchart
        4.  
          Configuring NetBackup IT Analytics for NetBackup deployment
    4. Cloud Scale Disaster Recovery
      1.  
        Cluster backup
      2.  
        Environment backup
      3.  
        Cluster recovery
      4.  
        Cloud Scale recovery
      5.  
        Environment Disaster Recovery
      6.  
        DBaaS Disaster Recovery
    5. Uninstalling
      1.  
        Uninstalling Cloud Scale Technology
    6. Troubleshooting
      1. Troubleshooting AKS and EKS issues
        1.  
          View the list of operator resources
        2.  
          View the list of product resources
        3.  
          View operator logs
        4.  
          View primary logs
        5.  
          Socket connection failure
        6.  
          Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
        7.  
          Resolving the issue where the NetBackup server pod is not scheduled for long time
        8.  
          Resolving an issue where the Storage class does not exist
        9.  
          Resolving an issue where the primary server or media server deployment does not proceed
        10.  
          Resolving an issue of failed probes
        11.  
          Resolving issues when media server PVs are deleted
        12.  
          Resolving an issue related to insufficient storage
        13.  
          Resolving an issue related to invalid nodepool
        14.  
          Resolve an issue related to KMS database
        15.  
          Resolve an issue related to pulling an image from the container registry
        16.  
          Resolving an issue related to recovery of data
        17.  
          Check primary server status
        18.  
          Pod status field shows as pending
        19.  
          Ensure that the container is running the patched image
        20.  
          Getting EEB information from an image, a running container, or persistent data
        21.  
          Resolving the certificate error issue in NetBackup operator pod logs
        22.  
          Pod restart failure due to liveness probe time-out
        23.  
          NetBackup messaging queue broker takes more time to start
        24.  
          Host mapping conflict in NetBackup
        25.  
          Issue with capacity licensing reporting which takes longer time
        26.  
          Local connection is getting treated as insecure connection
        27.  
          Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
        28.  
          Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
        29.  
          Taint, Toleration, and Node affinity related issues in cpServer
        30.  
          Operations performed on cpServer in cloudscale-values.yaml file are not reflected
        31.  
          Elastic media server related issues
        32.  
          Failed to register Snapshot Manager with NetBackup
        33.  
          Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
        34.  
          Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
        35.  
          Request router logs
        36.  
          Issues with NBPEM/NBJM
        37.  
          Issues with logging feature for Cloud Scale
        38.  
          The flexsnap-listener pod is unable to communicate with RabbitMQ
        39.  
          Job remains in queue for long time
        40.  
          Extracting logs if the nbwsapp or log-viewer pods are down
        41.  
          Helm installation failed with bundle error
        42.  
          Deployment fails with private container registry and Postgres fails to pull the images
      2. Troubleshooting AKS-specific issues
        1.  
          Data migration unsuccessful even after changing the storage class through the storage yaml file
        2.  
          Host validation failed on the target host
        3.  
          Primary pod goes in non-ready state
      3. Troubleshooting EKS-specific issues
        1.  
          Resolving the primary server connection issue
        2.  
          NetBackup Snapshot Manager deployment on EKS fails
        3.  
          Wrong EFS ID is provided in cloudscale-values.yaml file
        4.  
          Primary pod is in ContainerCreating state
        5.  
          Webhook displays an error for PV not found
        6.  
          Cluster Autoscaler initialization issue
        7.  
          Catalog backup job fails with an error (Status 9202)
      4.  
        Troubleshooting issue for bootstrapper pod
      5.  
        Troubleshooting issues for kubectl plugin
  6. Appendix A. CR template
    1.  
      Secret
    2. MSDP Scaleout CR
      1.  
        MSDP Scaleout CR template for AKS
      2.  
        MSDP Scaleout CR template for EKS
  7. Appendix B. MSDP Scaleout
    1.  
      About MSDP Scaleout
    2.  
      Prerequisites for MSDP Scaleout (AKS/EKS)
    3.  
      Limitations in MSDP Scaleout
    4. MSDP Scaleout configuration
      1.  
        Initializing the MSDP operator
      2.  
        Configuring MSDP Scaleout
      3.  
        Configuring the MSDP cloud in MSDP Scaleout
      4.  
        Using MSDP Scaleout as a single storage pool in NetBackup
      5.  
        Using S3 service in MSDP Scaleout
      6.  
        Enabling MSDP S3 service after MSDP Scaleout is deployed
    5.  
      Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
    6.  
      Deploying MSDP Scaleout
    7. Managing MSDP Scaleout
      1.  
        Adding MSDP engines
      2.  
        Adding data volumes
      3. Expanding existing data or catalog volumes
        1.  
          Manual storage expansion
      4.  
        MSDP Scaleout scaling recommendations
      5. MSDP Cloud backup and disaster recovery
        1.  
          About the reserved storage space
        2. Cloud LSU disaster recovery
          1.  
            Recovering MSDP S3 IAM configurations from cloud LSU
      6.  
        MSDP multi-domain support
      7.  
        Configuring Auto Image Replication
      8. About MSDP Scaleout logging and troubleshooting
        1.  
          Collecting the logs and the inspection information
    8. MSDP Scaleout maintenance
      1.  
        Pausing the MSDP Scaleout operator for maintenance
      2.  
        Logging in to the pods
      3.  
        Reinstalling MSDP Scaleout operator
      4.  
        Migrating the MSDP Scaleout to another node pool

Environment Disaster Recovery

  1. Ensure that the Cloud Scale deployment has been cleaned up in the cluster.

    Perform the following to verify the cleanup process:

    • Ensure that the namespaces associated with the Cloud Scale deployment are deleted by using the following command:

      kubectl get ns

    • Confirm that the StorageClasses, PVs, ClusterRoles, ClusterRoleBindings, and CRDs associated with the Cloud Scale deployment are deleted by using the following commands:

      kubectl get sc,pv,pvc

      kubectl get clusterroles,clusterrolebindings,crd
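    The verification above can be scripted. The following sketch is illustrative only: the check_empty helper and the "netbackup" grep pattern are assumptions, not NetBackup tooling.

```shell
# check_empty LABEL LEFTOVERS - report whether a kubectl probe returned anything.
# LEFTOVERS is expected to be the output of `kubectl get ... --no-headers | grep ...`;
# an empty string means the resources are gone.
check_empty() {
  local label="$1" leftovers="$2"
  if [ -n "$leftovers" ]; then
    echo "LEFTOVER: $label"
  else
    echo "CLEAN: $label"
  fi
}

# Against a live cluster this might be driven as (assumes kubectl access and that
# Cloud Scale resources are identifiable by a "netbackup" substring):
# check_empty "namespaces" "$(kubectl get ns --no-headers | grep netbackup)"
# check_empty "cluster-scoped" "$(kubectl get sc,pv,crd --no-headers | grep -i netbackup)"
```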

  2. (For EKS) If the deployment is in a different AZ, update the subnet name in the cloudscale-values.yaml file.

    For example, if the earlier subnet name was subnet-az1 and the new subnet is subnet-az2, the cloudscale-values.yaml file contains a loadBalancerAnnotations section as follows:

    loadBalancerAnnotations:
               service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az1

    Update the annotation to the new subnet name as follows:

    loadBalancerAnnotations:
               service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az2

    Update all IPs used for the Primary, MSDP, Media, and Snapshot Manager servers in their respective sections.

    Note:

    Change of FQDN is not supported.

    The following example shows how to change the IP for Primary server:

    Old entry in cloudscale-values.yaml file:

    ipList:
                - ipAddr: 12.123.12.123
                  fqdn: primary.netbackup.com

    Update the above old entry as follows:

    ipList:
                - ipAddr: 34.245.34.234
                  fqdn: primary.netbackup.com

    Similarly, repeat the procedure shown above for the Primary server for the MSDP, Media, and Snapshot Manager servers.
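    For simple one-for-one renames such as the subnet and IP changes above, a scripted edit can help. This sketch uses GNU sed against a scratch file; the update_value helper and the demo file are illustrative, not part of the product:

```shell
# update_value FILE OLD NEW - in-place substitution with GNU sed.
update_value() { sed -i "s|$2|$3|" "$1"; }

# Demo against a scratch file using the example values from this step:
printf 'aws-load-balancer-subnets: subnet-az1\nipAddr: 12.123.12.123\n' > /tmp/values-demo.yaml
update_value /tmp/values-demo.yaml subnet-az1 subnet-az2
update_value /tmp/values-demo.yaml 12.123.12.123 34.245.34.234
cat /tmp/values-demo.yaml
```

    Always diff the edited cloudscale-values.yaml against your backup copy before applying it.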

  3. Ensure that the IP addresses listed in the Primary, Media, MSDP, and Snapshot Manager server sections of the cloudscale-values.yaml file that was saved during backup are free and resolvable. If the deployment is in a different AZ, the FQDN must remain the same but the IP can change; therefore, ensure that the same FQDNs can map to different IPs.

  4. (For EKS) Update spec > primaryServer > storage > catalog > storageClassName in the cloudscale-values.yaml file with the new EFS ID that is created for the primary server.

  5. Ensure that the nodeSelector values in the cloudscale-values.yaml and operators-values.yaml files that were noted down during backup are present in the cluster with the required configurations.

  6. Perform the steps in the following section for deploying DBaaS:

    See DBaaS Disaster Recovery.

  7. Verify that the values in secret_backup.yaml, storageclass_backup.yaml, CPServerLog_storageclass_backup.yaml, and msdpopstorageclass_backup.yaml are correct. If a non-default StorageClass or updated passwords were used during deployment, ensure that these updated values are also included in cloudscale-values.yaml file.

    (For DBaaS) Confirm that the values in secretproviderclass_backup.yaml file are correctly reflected under global → dbsecretProviderClass in the backup cloudscale-values.yaml file, and update the admin secret ARN in cloudscale-values.yaml file with the new ARN provided in the AWS console.

  8. Install cert-manager (the cert-manager version used during restore must be the same as the version used during backup):

    helm repo add jetstack https://charts.jetstack.io --force-update

    helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager --version 1.18.2 --set webhook.timeoutSeconds=30 --set installCRDs=true --wait --create-namespace

  9. Create namespace that is present in cloudscale-values.yaml file:

    kubectl create ns netbackup

  10. Install trust-manager (the trust-manager version used during restore must be the same as the version used when the backup setup was deployed):

    kubectl create namespace trust-manager

    helm upgrade -i -n trust-manager trust-manager jetstack/trust-manager --set app.trust.namespace=netbackup --version v0.19.0 --wait

  11. (For EKS) Update the EFS ID in the backed-up nb-file-premium StorageClass to the new EFS ID. Then, install the operator using the operator-values.yaml file that was backed up during the disaster recovery backup procedure:

    helm upgrade --install operators operators-<version>.tgz -f operator-values.yaml --create-namespace --namespace netbackup-operator-system

  12. (Required only for DBaaS deployment) Snapshot Manager restore steps:

    For AKS

    • Navigate to the snapshot resource created during backup and create a disk under the recovered cluster's infrastructure resource group (for example, MC_<clusterRG>_<cluster name>_<cluster_region>).

    • Note down the resource ID of this disk (navigate to the Properties of the disk). It can be obtained from the portal or the Azure CLI.

      Format of the resource ID: /subscriptions/<subscription id>/resourceGroups/MC_<clusterRG>_<cluster name>_<cluster_region>/providers/Microsoft.Compute/disks/<disk name>

    • Create a static PV using the resource ID of the backed-up disk. Copy the YAML below into the pgsql-pv.yaml file, update the PV name, disk size, namespace, and storage class name, and apply the YAML:

      pgsql-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: <pv name>
      spec:
        capacity:
          storage: <size of the disk>
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        storageClassName: <storage class name>
        claimRef:
          name: psql-pvc
          namespace: <environment namespace>
        csi:
          driver: disk.csi.azure.com
          readOnly: false
          volumeHandle: <Resource ID of the disk>

      Example of pgsql-pv.yaml file:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: psql-pv
      spec:
        capacity:
          storage: 30Gi
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        storageClassName: gp2-immediate
        claimRef:
          name: psql-pvc
          namespace: nbux
        csi:
          driver: disk.csi.azure.com
          readOnly: false
          volumeHandle: /subscriptions/a332d749-22d8-48f6-9027-ff04b314e840/resourceGroups/MC_vibha-vasantraohadule-846288_auto_aks-vibha-vasantraohadule-846288_eastus2/providers/Microsoft.Compute/disks/psql-disk
      

      Create psql-pv using the following command:

      kubectl apply -f <path_to_psql_pv.yaml> -n <environment-namespace>

    • Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:

      kubectl get pv | grep psql-pvc

      >> psql-pv 30Gi RWO managed-premium-disk Available nbu/psql-pvc 50s

    For EKS

    • Navigate to EC2 > Snapshots in the AWS Console, select the snapshot that was taken in step 2 of the backup procedure, and click Create volume from snapshot (expand the Actions drop-down). Create the volume in the same availability zone (AZ) as the volume attached to psql-pvc (mentioned in step 1 of the backup steps).

      Note down the volumeID (for example, vol-xxxxxxxxxxxxxxx).

    • If the deployment is in a different availability zone (AZ), change the AZ for the volume and update the volumeID accordingly.

    • Create a static PV using the backed-up volumeID. Copy the YAML below into the pgsql-pv.yaml file, update the PV name, disk size, namespace, and storage class name, and apply the YAML:

      pgsql-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: <pv name>
      spec:
        accessModes:
        - ReadWriteOnce
        awsElasticBlockStore:
          fsType: <fs type>
          volumeID: <backed up volumeID>    # prefix the volumeID with aws://<az-code>/, for example aws://us-east-2b/
        capacity:
          storage: 30Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: psql-pvc
          namespace: <netbackup namespace>
        persistentVolumeReclaimPolicy: Retain
        storageClassName: <storage class name>
        volumeMode: Filesystem

      Sample yaml for pgsql-pv.yaml file:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: psql-pv
      spec:
        accessModes:
        - ReadWriteOnce
        awsElasticBlockStore:
          fsType: ext4
          volumeID: aws://us-east-2b/vol-0d86d2ca38f231ede
        capacity:
          storage: 30Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: psql-pvc
          namespace: nbu
        persistentVolumeReclaimPolicy: Retain
        storageClassName: gp2-immediate
        volumeMode: Filesystem

      Create psql-pv using the following command:

      kubectl apply -f <path_to_psql_pv.yaml> -n <netbackup-namespace>

      kubectl get pv | grep psql-pvc

    • Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:

      kubectl get pv | grep psql-pvc

      >>> psql-pv 30Gi RWO gp2-immediate Available nbu/psql-pvc 50s
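    Rather than re-running the grep manually, the Available check can be polled. In this sketch only the jsonpath query is standard kubectl; the is_available helper is ours, and the psql-pv name follows the examples above:

```shell
# is_available PHASE - succeeds only when a PV phase string equals "Available".
is_available() { [ "$1" = "Available" ]; }

# Against a live cluster (assumes kubectl access):
# until is_available "$(kubectl get pv psql-pv -o jsonpath='{.status.phase}')"; do
#   sleep 5
# done
```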

  13. Update the cloudscale-values.yaml file as follows:

    • Set paused: true for the MSDP and media servers in the cloudscale-values.yaml file, that is, set environment → msdpScaleouts → paused: true and environment → mediaServers → paused: true.

      Note:

      (For DBaaS) Ensure that createServiceAccount is set to false in cloudscale-values.yaml file.

    • Do not install cpServer.

      Create a copy of the cloudscale-values.yaml file and name it cloudscale-values-copy.yaml. Store this copy, as it will be required during the Snapshot Manager installation.

      Remove the contents of the cpServer section from the cloudscale-values.yaml file, and set disabled: true under the cpServer section.
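    As a rough sketch, the flags set in this step might sit in cloudscale-values.yaml as printed below. The nesting shown is an assumption; match it to your existing file (later commands in this procedure address these values as environment.msdpScaleouts.paused and environment.mediaServers[0]..., so mediaServers in particular may be a list in your file):

```shell
# Print an illustrative values fragment (the nesting here is an assumption;
# merge these flags into your real cloudscale-values.yaml rather than replacing it).
FRAGMENT='environment:
  msdpScaleouts:
    paused: true
  mediaServers:
    paused: true
  cpServer:
    disabled: true'
echo "$FRAGMENT"
```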

  14. Install Cloud Scale using the updated cloudscale-values.yaml file that was created in the previous step:

    helm upgrade --install cloudscale cloudscale-<version>.tgz -f cloudscale-values.yaml --create-namespace --namespace netbackup

  15. Once the primary server is up and running, perform the following:

    • Execute kubectl exec -it -n <namespace> <primary-pod-name> -- /bin/bash command to exec into the primary pod.

      Increase the debug logs level on primary server (set VERBOSE = 6 in bp.conf file).

    • Create a directory for the DR packages at a persisted location using the mkdir /mnt/nbdata/usr/openv/drpackage command.

    • Deactivate NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health deactivate command.

  16. (For containerized Postgres) Execute the following command in the NetBackup namespace to scale down the PostgreSQL StatefulSet replicas to 0:

    kubectl scale sts nb-postgresql -n netbackup --replicas=0

    (For DBaaS) Scale down or power off the DBaaS Server/Instance.

  17. Exec into the primary pod using the kubectl exec -it -n <namespace> <primary-pod-name> -- /bin/bash command and stop the NetBackup services using the following command:

    /usr/openv/netbackup/bin/bp.kill_all

    Check if all processes are terminated correctly using /usr/openv/netbackup/bin/bpps command.

    If some processes remain, use the following command to forcefully terminate them:

    kill -9 <process id>

  18. Perform the following steps for NBATD pod recovery:

    • Create the DRPackages directory at the persisted location /mnt/nblogs/ in the nbatd pod by executing the following commands:

      kubectl exec -it -n <namespace> <nbatd-pod-name> -- /bin/bash

      mkdir /mnt/nblogs/DRPackages

    • Copy the DR files that were saved while performing the DR backup to the nbatd pod at /mnt/nblogs/DRPackages using the following command:

      kubectl cp <Path_of_DRPackages_on_host_machine> <nbatd-pod-namespace>/<nbatd-pod-name>:/mnt/nblogs/DRPackages

    • Execute the following steps in the nbatd pod:

      • Execute the kubectl exec -it -n <namespace> <nbatd-pod-name> -- /bin/bash command.

      • Deactivate nbatd health probes using the /opt/veritas/vxapp-manage/nbatd_health.sh disable command.

      • Stop the nbatd service using /opt/veritas/vxapp-manage/nbatd_stop.sh 0 command.

      • Execute the /opt/veritas/vxapp-manage/nbatd_identity_restore.sh -infile /mnt/nblogs/DRPackages/<DR package name> command.

  19. Copy the earlier saved disaster recovery files back to the primary pod at the /mnt/nbdata/usr/openv/drpackage location using the following command:

    kubectl cp <Path_of_DRPackages_on_host_machine> <primary-pod-namespace>/<primary-pod-name>:/mnt/nbdata/usr/openv/drpackage

  20. Perform the following steps after exec-ing into the primary server pod:

    • Change the ownership of the files in /mnt/nbdata/usr/openv/drpackage using the chown nbsvcusr:nbsvcusr <file-name> command.

    • Execute the /usr/openv/netbackup/bin/admincmd/nbhostidentity -import -infile /mnt/nbdata/usr/openv/drpackage/.drpkg command.

    • Clear the NetBackup host cache by running the bpclntcmd -clear_host_cache command.

    • Restart the pods as follows:

      • Navigate to the VRTSk8s-netbackup-<version>/scripts folder.

      • Run the cloudscale_restart.sh script with the required action as follows:

        ./cloudscale_restart.sh <action> <namespace>

        Provide the namespace and the required action:

        stop: Stops all the services under primary server (waits until all the services are stopped).

        start: Starts all the services and waits until the services are up and running under primary server.

        restart: Stops the services and waits until all the services are down. Then starts all the services and waits until the services are up and running.

      Note:

      Ignore it if the policy job pod does not come up in the running state. The policy job pod starts once the primary services start.

    • Refresh the certificate revocation list using the /usr/openv/netbackup/bin/nbcertcmd -getcrl command.

  21. Run the primary server reconciler.

    Pause the primary reconciler by setting the primary spec's paused field to true using the following command:

    helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.primary.paused=true

    Then, to let the reconciler run, set the primary's paused field back to false:

    helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.primary.paused=false

    The SHA fingerprint gets updated in the primary CR's status.

  22. Allow auto reissue of certificates from the primary server for the MSDP, Media, and Snapshot Manager servers from the Web UI.

    In the Web UI, navigate to Security > Host Mappings, and for the MSDP storage server, click the three dots on the right and check Allow Auto reissue Certificate. Repeat this for the media server and Snapshot Manager server entries.

  23. Apply the msdp-cred-secret backed up during the disaster recovery backup.

    Change the paused field to false for MSDP using the following command:

    helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.msdpScaleouts.paused=false

  24. Once MSDP is up and running, add the cloud provider credentials for the account where the S3 bucket has been configured, as described in the "Add a credential in NetBackup" section of the NetBackup™ Web UI Administrator's Guide.

  25. If the LSU cloud alias does not exist, use the following command to add it:

    /usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in <instance-name> -sts <storage-server-name> -lsu_name <lsu-name>

    When MSDP Scaleout is up and running, re-use the cloud LSU on NetBackup primary server.

    /usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig -storage_server <STORAGESERVERNAME> -stype PureDisk -configlist <configuration file>

    Credentials, bucket name, and sub bucket name must be the same as the recovered Cloud LSU configuration in the previous MSDP Scaleout deployment.

    Configuration file template:

    V7.5 "operation" "reuse-lsu-cloud" string
    V7.5 "lsuName" "LSUNAME" string
    V7.5 "cmsCredName" "XXX" string
    V7.5 "lsuCloudAlias" "<STORAGESERVERNAME_LSUNAME>" string
    V7.5 "lsuCloudBucketName" "XXX" string
    V7.5 "lsuCloudBucketSubName" "XXX" string
    V7.5 "lsuKmsServerName" "XXX" string

    Note:

    For Veritas Alta Recovery Vault Azure storage, the cmsCredName is a credential name and cmsCredName can be any string. Add recovery vault credential in the CMS using the NetBackup Web UI and provide the credential name for cmsCredName. For more information, see About Veritas Alta Recovery Vault Azure topic in NetBackup Deduplication Guide.

  26. Set paused to false for the media server using the following command and wait for the media pods to come up and reach the running state:

    helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values \
        --set environment.mediaServers[0].name=media1 \
        --set environment.mediaServers[0].replicas=1 \
        --set environment.mediaServers[0].nodeSelector.labelKey=agentpool \
        --set environment.mediaServers[0].nodeSelector.labelValue=mediapool \
        --set environment.mediaServers[0].storage.data.capacity=50Gi \
        --set environment.mediaServers[0].storage.data.storageClassName=nb-disk-standardssd \
        --set environment.mediaServers[0].storage.log.capacity=5Gi \
        --set environment.mediaServers[0].storage.log.storageClassName=nb-disk-standardssd \
        --set environment.mediaServers[0].tag=11.1-0016-DR1 \
        --set environment.mediaServers[0].paused=false

  27. Perform full Catalog Recovery:

    Pause the environment reconciler using the helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.paused=true command.

    Use one of the following options to perform catalog recovery:

    • Trigger a Catalog Recovery from the Web UI.

      Or

    • Exec into primary pod and run the bprecover -wizard command.

    If Multi Factor Authentication (MFA) was enabled, perform the following additional steps:

    • Disable MFA: In Web UI, navigate to Global Security Setting > Security Controls > Enforce multifactor authentication and disable it.

    • Exec into the primary pod and execute the following command to reset MFA for the user provided in the primary secret (in this case, nbuser):

      nbseccmd -resetMFA -userinfo unixpwd:nbuxqa-summiteers-10-244-67-129.vxindia.veritas.com:nbuser

    • Unpause the environment reconciler: helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.paused=false

    • Confirm that the modified time of the access keys is updated to the latest:

      In Web UI, navigate to Security > Access Key, and ensure that the modified Access Keys are created by nb-operator.

  28. Once recovery is completed, restart the NetBackup services by running the cloudscale_restart.sh script:

    ./cloudscale_restart.sh restart <namespace>

  29. Activate NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health activate command.

  30. Install Snapshot Manager server:

    • Add the cpServer section back into the cloudscale-values.yaml file (from the cloudscale-values-copy.yaml copy saved earlier) and ensure that the disabled field is set to false for the cpServer section.

    • Install cpServer in the environment using the following command:

      helm upgrade --install cloudscale cloudscale-<version>.tgz -f cloudscale-values.yaml --namespace netbackup

    • Wait for the Snapshot Manager pods to come up and reach the running state.

  31. Validate if the environment is up and running.