NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster

Last Published:
Product(s): NetBackup & Alta Data Protection (10.1.1)
  1. Introduction to NetBackup on EKS
    1. About NetBackup deployment on Amazon Elastic Kubernetes (EKS) cluster
    2. Required terminology
    3. User roles and permissions
    4. About MSDP Scaleout
    5. About MSDP Scaleout components
    6. Limitations in MSDP Scaleout
  2. Deployment with environment operators
    1. About deployment with the environment operator
      1. Prerequisites
      2. Contents of the TAR file
      3. Known limitations
    2. Deploying the operators manually
    3. Deploying NetBackup and MSDP Scaleout manually
    4. Deploying NetBackup and Snapshot Manager manually
    5. Configuring the environment.yaml file
    6. Uninstalling NetBackup environment and the operators
    7. Applying security patches
  3. Assessing cluster configuration before deployment
    1. How does the webhook validation work
    2. Webhook validation execution details
    3. How does the Config-Checker utility work
    4. Config-Checker execution and status details
  4. Deploying NetBackup
    1. Preparing the environment for NetBackup installation on EKS
    2. Recommendations for NetBackup deployment on EKS
    3. Limitations of NetBackup deployment on EKS
    4. About primary server CR and media server CR
      1. After installing the primary server CR
      2. After installing the media server CR
    5. Monitoring the status of the CRs
    6. Updating the CRs
    7. Deleting the CRs
    8. Configuring NetBackup IT Analytics for NetBackup deployment
    9. Managing NetBackup deployment using VxUpdate
    10. Migrating the node group for primary or media servers
  5. Upgrading NetBackup
    1. Preparing for NetBackup upgrade
    2. Upgrading the NetBackup operator
    3. Upgrading the NetBackup application
    4. Upgrading NetBackup from previous versions
    5. Procedure to roll back when an upgrade fails
  6. Deploying Snapshot Manager
    1. Overview
    2. Prerequisites
    3. Installing the docker images
  7. Migration and upgrade of Snapshot Manager
    1. Migration and updating of Snapshot Manager
  8. Deploying MSDP Scaleout
    1. Deploying MSDP Scaleout
    2. Prerequisites
    3. Installing the docker images and binaries
    4. Initializing the MSDP operator
    5. Configuring MSDP Scaleout
    6. Using MSDP Scaleout as a single storage pool in NetBackup
    7. Configuring the MSDP cloud in MSDP Scaleout
  9. Upgrading MSDP Scaleout
    1. Upgrading MSDP Scaleout
  10. Monitoring NetBackup
    1. Monitoring the application health
    2. Telemetry reporting
    3. About NetBackup operator logs
    4. Expanding storage volumes
    5. Allocating static PVs for primary and media pods
  11. Monitoring MSDP Scaleout
    1. About MSDP Scaleout status and events
    2. Monitoring with Amazon CloudWatch
    3. The Kubernetes resources for MSDP Scaleout and the MSDP operator
  12. Monitoring Snapshot Manager deployment
    1. Overview
    2. Logs of Snapshot Manager
    3. Configuration parameters
  13. Managing the Load Balancer service
    1. About the Load Balancer service
    2. Notes for the Load Balancer service
    3. Opening the ports from the Load Balancer service
  14. Performing catalog backup and recovery
    1. Backing up a catalog
    2. Restoring a catalog
  15. Managing MSDP Scaleout
    1. Adding MSDP engines
    2. Adding data volumes
    3. Expanding existing data or catalog volumes
      1. Manual storage expansion
    4. MSDP Scaleout scaling recommendations
    5. MSDP Cloud backup and disaster recovery
      1. About the reserved storage space
      2. Cloud LSU disaster recovery
    6. MSDP multi-domain support
    7. Configuring Auto Image Replication
    8. About MSDP Scaleout logging and troubleshooting
      1. Collecting the logs and the inspection information
  16. About MSDP Scaleout maintenance
    1. Pausing the MSDP Scaleout operator for maintenance
    2. Logging in to the pods
    3. Reinstalling the MSDP Scaleout operator
    4. Migrating MSDP Scaleout to another node group
  17. Uninstalling MSDP Scaleout from EKS
    1. Cleaning up MSDP Scaleout
    2. Cleaning up the MSDP Scaleout operator
  18. Uninstalling Snapshot Manager
    1. Uninstalling Snapshot Manager from EKS
  19. Troubleshooting
    1. View the list of operator resources
    2. View the list of product resources
    3. View operator logs
    4. View primary logs
    5. Pod restart failure due to liveness probe time-out
    6. Socket connection failure
    7. Resolving an invalid license key issue
    8. Resolving an issue where an external IP address is not assigned to a NetBackup server's load balancer services
    9. Resolving an issue where the NetBackup server pod is not scheduled for a long time
    10. Resolving an issue where the storage class does not exist
    11. Resolving an issue where the primary server or media server deployment does not proceed
    12. Resolving an issue of failed probes
    13. Resolving token issues
    14. Resolving an issue related to insufficient storage
    15. Resolving an issue related to an invalid nodepool
    16. Resolving a token expiry issue
    17. Resolving an issue related to the KMS database
    18. Resolving an issue related to pulling an image from the container registry
    19. Resolving an issue related to recovery of data
    20. Checking the primary server status
    21. Pod status field shows as pending
    22. Ensuring that the container is running the patched image
    23. Getting EEB information from an image, a running container, or persistent data
    24. Resolving the certificate error issue in NetBackup operator pod logs
    25. Resolving the primary server connection issue
    26. Primary pod is in pending state for a long duration
    27. Host mapping conflict in NetBackup
    28. NetBackup messaging queue broker takes more time to start
    29. Local connection is treated as an insecure connection
    30. Issue with capacity licensing reporting taking a long time
    31. Backing up data from the primary server's /mnt/nbdata/ directory fails with the primary server as a client
    32. Wrong EFS ID is provided in the environment.yaml file
    33. Primary pod is in ContainerCreating state
    34. Webhook displays an error for PV not found
  20. Appendix A. CR template
    1. Secret
    2. MSDP Scaleout CR

Preparing the environment for NetBackup installation on EKS

Ensure that the following prerequisites are met before proceeding with the deployment:

EKS-specific requirements

  1. Create a Kubernetes cluster with the following guidelines:
    • Use Kubernetes version 1.21 or later.

    • Use the AWS default CNI during cluster creation.

    • Create a node group in a single availability zone, with an instance type of at least m5.4xlarge, and select an attached EBS volume size of more than 100 GB for each node.

      Note:

      Separate node groups are required for the NetBackup servers and MSDP deployments. If more than one mediaServer object is created, each should use a separate node group.

      The node group uses AWS manual scaling or the Auto Scaling group feature, which lets the node group scale automatically by provisioning and de-provisioning nodes as required.

      Note:

      All the nodes in the node group must run the Linux operating system.

    • Minimum required policies in IAM role:

      • AmazonEKSClusterPolicy

      • AmazonEKSWorkerNodePolicy

      • AmazonEC2ContainerRegistryReadOnly

      • AmazonEKS_CNI_Policy

      • AmazonEKSServicePolicy

  2. Use an existing AWS Elastic Container Registry or create a new one, and ensure that the EKS cluster has full access to pull images from the container registry.
  3. A dedicated node pool for NetBackup must be created with manual scaling or autoscaling enabled in the Amazon Elastic Kubernetes Service cluster. The autoscaling feature lets the node pool scale dynamically by provisioning and de-provisioning nodes as required.

    The following table lists the node configuration for the primary and media servers:

      Node type                      D16ds v4
      Disk type                      P30
      vCPU                           16
      RAM                            64 GiB
      Total disk size per node       1 TB
      Number of disks per node       1

      Cluster storage size:

      Small (4 nodes)                4 TB
      Medium (8 nodes)               8 TB
      Large (16 nodes)               16 TB

  4. Another dedicated node pool must be created for Snapshot Manager (if it is to be deployed) with autoscaling enabled.

    The following is the minimum configuration required for the Snapshot Manager data plane node pool:

      Node type          t3.large
      RAM                8 GB
      Number of nodes    Minimum 1, with autoscaling enabled

      Maximum pods per node:

      The number of IPs required for the Snapshot Manager data pool must be greater than:

        number of nodes (for each node's own IP)
        + (RAM size per node in GB × 2 × number of nodes)
        + number of all kube-system pods running on all nodes
        + 1 static listener pod
        + number of nodes (for the fluentd daemonset)

      The number of IPs required for the Snapshot Manager control pool must be greater than:

        number of nodes (for each node's own IP)
        + number of flexsnap pods (15)
        + number of flexsnap services (6)
        + 1 nginx load balancer IP
        + number of additional off-host agents
        + 1 operator
        + number of all kube-system pods running on all nodes

    The following scenarios show how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the above formulas:

    • For a node configuration with 2 CPUs and 8 GB RAM:

      CPU                      More than 2 CPUs
      RAM                      8 GB
      Maximum pods per node    Computed from the data pool and control pool IP formulas above
      Autoscaling enabled      Minimum = 1, Maximum = 3

      Note:

      This configuration runs 8 jobs per node at once.

    • For a node configuration with 2/4/6 CPUs and 16 GB RAM:

      CPU                      More than 2/4/6 CPUs
      RAM                      16 GB
      Maximum pods per node    6 (system) + 4 (static pods) + 16 × 2 = 32 (dynamic pods) = 42 or more
      Autoscaling enabled      Minimum = 1, Maximum = 3

      Note:

      This configuration runs 16 jobs per node at once.
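
    As an illustrative sketch, the two IP formulas above can be evaluated for a hypothetical cluster. The node count, RAM size, kube-system pod count, and off-host agent count below are example values, not recommendations:

    ```shell
    # Hypothetical inputs: 3 nodes, 8 GB RAM per node, 12 kube-system pods
    # in total across all nodes, and 2 additional off-host agents.
    nodes=3; ram_gb=8; kube_system_pods=12; offhost_agents=2

    # Data pool: nodes + (RAM x 2 x nodes) + kube-system pods
    #            + 1 static listener pod + nodes (fluentd daemonset)
    data_pool_ips=$(( nodes + ram_gb * 2 * nodes + kube_system_pods + 1 + nodes ))

    # Control pool: nodes + 15 flexsnap pods + 6 flexsnap services
    #               + 1 nginx load balancer IP + off-host agents
    #               + 1 operator + kube-system pods
    control_pool_ips=$(( nodes + 15 + 6 + 1 + offhost_agents + 1 + kube_system_pods ))

    echo "data pool needs more than $data_pool_ips IPs"
    echo "control pool needs more than $control_pool_ips IPs"
    ```

    With these example values, the data pool needs more than 67 IPs and the control pool more than 40; size the subnets accordingly.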

  5. Taints and tolerations allow you to mark (taint) a node so that no pods can be scheduled onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful when most pods in the cluster must avoid scheduling onto the node.

    Taints are set on the node group while creating the node group in the cluster. Tolerations are set on the pods.

    To use this functionality, create the node group with the following details:

    • Add a label with a certain key and value. For example, key = nbpool, value = nbnodes

    • Add a taint with the same key and value used for the label in the above step, with the effect NoSchedule.

      For example, key = nbpool, value = nbnodes, effect = NoSchedule

      Provide these details in the operator YAML as follows. To update the toleration and node selector for the operator pod, edit the operator/patch/operator_patch.yaml file. Provide the same label key:value in the nodeSelector section and in the tolerations section. For example,

      nodeSelector:
        nbpool: nbnodes
      # Support node taints by adding pod tolerations equal to the specified nodeSelectors.
      # For the toleration, NODE_SELECTOR_KEY is used as the key and NODE_SELECTOR_VALUE as the value.
      tolerations:
      - key: nbpool
        operator: "Equal"
        value: nbnodes

      Update the same label key:value as labelKey and labelValue in the nodeSelector section of the environment.yaml file.
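
      The label and taint above can also be set when the node group is created. The following is a minimal sketch of an eksctl managed node group definition, not part of the product tooling; the cluster name, region, node group name, and sizes are hypothetical and must be adapted to your environment:

      ```yaml
      apiVersion: eksctl.io/v1alpha5
      kind: ClusterConfig
      metadata:
        name: my-cluster        # hypothetical cluster name
        region: us-east-1       # hypothetical region
      managedNodeGroups:
        - name: nbpool-ng       # hypothetical node group name
          instanceType: m5.4xlarge
          availabilityZones: ["us-east-1a"]   # single availability zone
          minSize: 1
          maxSize: 4
          volumeSize: 150       # attached EBS volume > 100 GB per node
          labels:
            nbpool: nbnodes
          taints:
            - key: nbpool
              value: nbnodes
              effect: NoSchedule
      ```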

  6. Deploy the AWS Load Balancer Controller add-on in the cluster.

    For more information on installing the add-on, see Installing the AWS Load Balancer Controller add-on.

  7. Install cert-manager by using the following command:

    $ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

    For more information, see Documentation for cert-manager installation.

  8. The FQDN provided in the networkLoadBalancer section of the primary server CR and media server CR specifications must be DNS-resolvable to the provided IP address.
  9. Amazon Elastic File System (Amazon EFS) is required for shared persistent storage. To create EFS for the primary server, see Create your Amazon EFS file system.

    The EFS configuration can be as follows; you can update the Throughput mode as required:

    Performance mode:  General Purpose

    Throughput mode: Provisioned (256 MiB/s)

    Availability zone: Regional

    Note:

    The Throughput mode can be increased at runtime depending on the size of the workloads. If you see performance issues, you can increase the Throughput mode up to 1024 MiB/s.

    Note:

    Ensure that you install the Amazon EFS CSI driver add-on in the cluster. For more information on installing the Amazon EFS CSI driver, see Amazon EFS CSI driver.

  10. Create a storage class that uses the efs.csi.aws.com provisioner and allows volume expansion. It is recommended that the storage class uses the Retain reclaim policy. This storage class can be used for the primary server, as the catalog volume supports only Amazon Elastic File System storage.

    For more information on Amazon Elastic File System, see Create your Amazon EFS file system.

    For example,

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap
      fileSystemId: fs-92107410
      directoryPerms: "700"
      gidRangeStart: "1000" # optional
      gidRangeEnd: "2000" # optional
      basePath: "/dynamic_provisioning" # optional
    
  11. If the NetBackup client is outside the VPC, or if you want to access the Web UI from outside the VPC, the NetBackup client CIDR must be added with all NetBackup ports to the security group inbound rules of the cluster. See About the Load Balancer service for more information on NetBackup ports.
    • To obtain the cluster security group, run the following command:

      aws eks describe-cluster --name <my-cluster> --query cluster.resourcesVpcConfig.clusterSecurityGroupId

    • The following link describes how to add an inbound rule to the security group:

      Add rules to a security group

  12. Create a storage class with EBS storage type with allowVolumeExpansion = true and reclaimPolicy = Retain. This storage class is to be used for data and logs for both primary and media servers.

    For example,

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ebs-sc
    provisioner: ebs.csi.aws.com
    parameters:
      type: gp3
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    

    Note:

    Ensure that you install the Amazon EBS CSI driver add-on in the cluster. For more information on installing the Amazon EBS CSI driver, see Managing the Amazon EBS CSI driver as an Amazon EKS add-on and Amazon EBS CSI driver.

  13. An EFS-based PV must be specified for the primary server catalog volume, with reclaimPolicy = Retain.
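
    As an illustration, a statically provisioned EFS PV for the catalog volume might look like the following sketch; the PV name and capacity are hypothetical, and <EFS ID> must be replaced with your file system ID:

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: catalog-pv       # hypothetical PV name
    spec:
      capacity:
        storage: 100Gi       # illustrative size
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: efs-sc
      csi:
        driver: efs.csi.aws.com
        volumeHandle: <EFS ID>
    ```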

Host-specific requirements

  1. Install AWS CLI.

    For more information on installing the AWS CLI, see Installing or updating the latest version of the AWS CLI.

  2. Install Kubectl CLI.

    For more information on installing the Kubectl CLI, see Installing kubectl.

  3. Configure Docker to enable pushing the container images to the container registry.
  4. Create the OIDC provider for the AWS EKS cluster.

    For more information on creating the OIDC provider, see Create an IAM OIDC provider for your cluster.

  5. Create an IAM service account for the AWS EKS cluster.

    For more information on creating an IAM service account, see Amazon EFS CSI driver.

  6. If an IAM role needs access to the EKS cluster, run the following command from a system that already has access to the EKS cluster:

    kubectl edit -n kube-system configmap/aws-auth

    For more information on creating an IAM role, see Enabling IAM user and role access to your cluster.

  7. Log in to the AWS environment to access the Kubernetes cluster by running the following command in the AWS CLI:

    aws eks --region <region_name> update-kubeconfig --name <cluster_name>

  8. Ensure approximately 8.5 GB of free space at the location where you copy and extract the product installation TAR package. If using Docker locally, approximately 8 GB should be available at /var/lib/docker so that the images can be loaded into the Docker cache before being pushed to the container registry.
  9. The Amazon EFS CSI driver must be installed for static PV/PVC creation of the primary catalog volume.