NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster
- Introduction to NetBackup on EKS
- Deployment with environment operators
- Assessing cluster configuration before deployment
- Deploying NetBackup
- About primary server CR and media server CR
- Upgrading NetBackup
- Deploying Snapshot Manager
- Migration and upgrade of Snapshot Manager
- Deploying MSDP Scaleout
- Upgrading MSDP Scaleout
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager deployment
- Managing the Load Balancer service
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- About MSDP Scaleout maintenance
- Uninstalling MSDP Scaleout from EKS
- Uninstalling Snapshot Manager
- Troubleshooting
- Appendix A. CR template
Preparing the environment for NetBackup installation on EKS
Ensure that the following prerequisites are met before proceeding with the deployment:
EKS-specific requirements
- Create a Kubernetes cluster with the following guidelines:
Use Kubernetes version 1.21 or later.
The AWS default CNI is used during cluster creation.
Create a node group in a single availability zone, with an instance type of at least m5.4xlarge, and select an attached EBS volume of more than 100 GB for each node.
Note:
Separate node groups are required for the NetBackup servers and MSDP deployments. If more than one mediaServer object is created, each must use a separate node group.
The node group uses the AWS manual or auto scaling group feature, which allows your node group to scale by automatically provisioning and de-provisioning nodes as required.
Note:
All the nodes in the node group must run the Linux operating system.
Minimum required policies in IAM role:
AmazonEKSClusterPolicy
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
AmazonEKSServicePolicy
- Use an existing AWS Elastic Container Registry or create a new one, and ensure that the EKS cluster has full access to pull images from the Elastic Container Registry.
- A dedicated node pool for NetBackup must be created with manual scaling or autoscaling enabled in the Amazon Elastic Kubernetes Service cluster. The autoscaling feature allows your node pool to scale dynamically by automatically provisioning and de-provisioning nodes as required. A sample node group creation command is shown after the following table.
The following table lists the node configuration for the primary and media servers.
Node type: m5.4xlarge
Disk type: EBS
vCPU: 16
RAM: 64 GiB
Total disk size per node: 1 TB
Number of disks per node: 1
Cluster storage size:
Small (4 nodes): 4 TB
Medium (8 nodes): 8 TB
Large (16 nodes): 16 TB
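As a sketch, a dedicated node group of this shape could be created with eksctl; the cluster name, node group name, and availability zone below are placeholder values, and the volume size follows the guideline of more than 100 GB per node:
eksctl create nodegroup \
  --cluster <cluster_name> \
  --name nb-server-pool \
  --node-type m5.4xlarge \
  --nodes 4 --nodes-min 4 --nodes-max 8 \
  --node-volume-size 128 \
  --node-zones <availability_zone>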
- Another dedicated node pool must be created for Snapshot Manager (if it is to be deployed) with autoscaling enabled.
The following is the minimum configuration required for the Snapshot Manager data plane node pool:
Node type: t3.large
RAM: 8 GB
Number of nodes: Minimum 1, with autoscaling enabled.
Maximum pods per node:
The number of IPs required for the Snapshot Manager data pool must be greater than:
number of nodes (for each node's own IP) + (RAM size per node * 2 * number of nodes) + (number of all kube-system pods running on all nodes) + static listener pod + number of nodes (for the fluentd daemonset)
The number of IPs required for the Snapshot Manager control pool must be greater than:
number of nodes (for each node's own IP) + number of flexsnap pods (15) + number of flexsnap services (6) + nginx load balancer IP + number of additional off-host agents + operator + (number of all kube-system pods running on all nodes)
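As an illustrative calculation (the node count and kube-system pod count below are assumed values): with 3 data pool nodes of 8 GB RAM each and 12 kube-system pods across all nodes, the data pool requires more than 3 + (8 * 2 * 3) + 12 + 1 + 3 = 67 IPs.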
The following scenarios show how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the above-mentioned formula:
For a node configuration with 2 CPUs and 8 GB RAM:
CPU: more than 2 CPUs
RAM: 8 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 8*2=16 (dynamic pods) = 26 or more
Autoscaling enabled: minimum = 1, maximum = 3
Note:
The above configuration runs 8 jobs per node at a time.
For a node configuration with 2/4/6 CPUs and 16 GB RAM:
CPU: more than 2/4/6 CPUs
RAM: 16 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 16*2=32 (dynamic pods) = 42 or more
Autoscaling enabled: minimum = 1, maximum = 3
Note:
The above configuration runs 16 jobs per node at a time.
- Taints and tolerations allow you to mark (taint) a node so that no pods can be scheduled onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful when most pods in the cluster must avoid scheduling onto the node.
Taints are set on the node group while creating the node group in the cluster. Tolerations are set on the pods.
To use this functionality, create the node group with the following details:
Add a label with a certain key and value. For example, key = nbpool, value = nbnodes.
Add a taint with the same key and value that were used for the label in the above step, with the effect NoSchedule. For example, key = nbpool, value = nbnodes, effect = NoSchedule. (A sample node group creation command is shown after this procedure.)
Provide these details in the operator YAML as follows. To update the toleration and node selector for the operator pod, edit the operator/patch/operator_patch.yaml file. Provide the same label key:value in the node selector section and in the tolerations section. For example:
nodeSelector:
  nbpool: nbnodes
# Support node taints by adding pod tolerations equal to the specified nodeSelectors.
# For the toleration, NODE_SELECTOR_KEY is used as the key and NODE_SELECTOR_VALUE as the value.
tolerations:
- key: nbpool
  operator: "Equal"
  value: nbnodes
Update the same label key:value as labelKey and labelValue in the nodeSelector section of the environment.yaml file.
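As a sketch, a managed node group carrying both the label and the taint could be created with the AWS CLI; the cluster name, node group name, node role ARN, and subnet IDs below are placeholder values:
aws eks create-nodegroup \
  --cluster-name <cluster_name> \
  --nodegroup-name nb-tainted-pool \
  --node-role <node_role_arn> \
  --subnets <subnet_id> \
  --labels nbpool=nbnodes \
  --taints key=nbpool,value=nbnodes,effect=NO_SCHEDULE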
- Deploy the AWS Load Balancer Controller add-on in the cluster.
For more information on installing the add-on, see Installing the AWS Load Balancer Controller add-on.
- Install cert-manager by using the following command:
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
For more information, see Documentation for cert-manager installation.
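To confirm that cert-manager is running before you continue, you can list its pods; cert-manager is the namespace the above manifest creates by default:
kubectl get pods --namespace cert-manager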
- The FQDN provided in the primary server CR and media server CR specifications in the networkLoadBalancer section must be DNS-resolvable to the provided IP address.
- Amazon Elastic File System (Amazon EFS) is used for shared persistent storage. To create EFS for the primary server, see Create your Amazon EFS file system.
The EFS configuration can be as follows, and you can update the Throughput mode as required:
Performance mode: General Purpose
Throughput mode: Provisioned (256 MiB/s)
Availability zone: Regional
Note:
The provisioned throughput can be increased at runtime depending on the size of the workloads. If you see performance issues, you can increase the provisioned throughput up to 1024 MiB/s.
Note:
To install the add-on in the cluster, ensure that you install the Amazon EFS CSI driver. For more information on installing the Amazon EFS CSI driver, see Amazon EFS CSI driver.
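As a sketch, a file system matching the configuration above could be created from the AWS CLI; the region and the name tag are placeholder values:
aws efs create-file-system \
  --performance-mode generalPurpose \
  --throughput-mode provisioned \
  --provisioned-throughput-in-mibps 256 \
  --region <region_name> \
  --tags Key=Name,Value=<file_system_name>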
- Create a storage class with the efs.csi.aws.com provisioner that allows volume expansion. It is recommended that the storage class has the Retain reclaim policy. Such a storage class can be used for the primary server, as the primary server supports Amazon Elastic File System storage only for the catalog volume. For more information on Amazon Elastic File System, see Create your Amazon EFS file system.
For example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-92107410
  directoryPerms: "700"
  gidRangeStart: "1000" # optional
  gidRangeEnd: "2000" # optional
  basePath: "/dynamic_provisioning" # optional
- If the NetBackup client is outside the VPC, or if you want to access the Web UI from outside the VPC, the NetBackup client CIDR must be added with all NetBackup ports to the security group inbound rules of the cluster. See About the Load Balancer service for more information on NetBackup ports.
To obtain the cluster security group, run the following command:
aws eks describe-cluster --name <my-cluster> --query cluster.resourcesVpcConfig.clusterSecurityGroupId
For information on adding an inbound rule to the security group, see the AWS documentation. A sample command is shown below.
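As a sketch, a single inbound rule could be added with the AWS CLI; the security group ID and client CIDR are placeholder values, and port 1556 (the NetBackup PBX port) stands in for each required NetBackup port:
aws ec2 authorize-security-group-ingress \
  --group-id <cluster_security_group_id> \
  --protocol tcp \
  --port 1556 \
  --cidr <client_CIDR>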
- Create a storage class with the EBS storage type, with allowVolumeExpansion = true and reclaimPolicy = Retain. This storage class is to be used for the data and log volumes of both primary and media servers.
For example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
Note:
To install the add-on in the cluster, ensure that you install the Amazon EBS CSI driver. For more information on installing the Amazon EBS CSI driver, see Managing the Amazon EBS CSI driver as an Amazon EKS add-on and Amazon EBS CSI driver.
- The EFS-based PV must be specified for the primary server catalog volume with reclaimPolicy = Retain, as in the sketch below.
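A minimal sketch of such a statically provisioned, EFS-backed PV, assuming the efs-sc storage class above; the PV name and file system ID are placeholder values (Kubernetes requires a capacity value, although EFS does not enforce it):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: catalog-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <EFS ID>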
Host-specific requirements
- Install AWS CLI.
For more information on installing the AWS CLI, see Installing or updating the latest version of the AWS CLI.
- Install Kubectl CLI.
For more information on installing the Kubectl CLI, see Installing kubectl.
- Configure Docker to enable pushing the container images to the container registry. A sample login command is shown below.
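As a sketch, Docker can be authenticated against an Amazon ECR registry as follows; the account ID and region are placeholder values:
aws ecr get-login-password --region <region_name> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region_name>.amazonaws.com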
- Create the OIDC provider for the AWS EKS cluster.
For more information on creating the OIDC provider, see Create an IAM OIDC provider for your cluster.
- Create an IAM service account for the AWS EKS cluster.
For more information on creating an IAM service account, see Amazon EFS CSI driver.
- If an IAM role needs access to the EKS cluster, run the following command from a system that already has access to the EKS cluster:
kubectl edit -n kube-system configmap/aws-auth
For more information on creating an IAM role, see Enabling IAM user and role access to your cluster. A sample mapRoles entry is shown below.
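As a sketch, the role is granted access by adding a mapRoles entry to the aws-auth ConfigMap; the account ID, role name, username, and group below are placeholder values:
mapRoles: |
  - rolearn: arn:aws:iam::<account_id>:role/<role_name>
    username: <user_name>
    groups:
      - system:masters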
- Log in to the AWS environment to access the Kubernetes cluster by running the following command in the AWS CLI:
aws eks --region <region_name> update-kubeconfig --name <cluster_name>
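To verify that the kubeconfig was updated and the cluster is reachable, you can then run, for example:
kubectl get nodes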
- Free space of approximately 8.5 GB in the location where you copy and extract the product installation TAR package file. If using Docker locally, there should be approximately 8 GB available in the /var/lib/docker location so that the images can be loaded into the Docker cache before being pushed to the container registry.
- The AWS EFS CSI driver should be installed for static PV/PVC creation of the primary catalog volume.