NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolving an issue related to KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes a long time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting taking a long time
- Local connection is getting treated as insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackOff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Cluster recovery
This section describes the procedure for manual recovery of AKS and EKS clusters.
Before performing the recovery of an AKS cluster, a template must be created.
Template creation
- Modify the template.json file saved earlier by removing the following sections:
Under the resources section for cluster > properties:
"windowsProfile": {
    "adminUsername": "<Default username>",
    "enableCSIProxy": true
},
"identityProfile": {
    "kubeletidentity": {
        "resourceId": "[parameters('userAssignedIdentities_<clustername>_<master node pool name>_externalid')]",
        "clientId": "<CLIENT ID>",
        "objectId": "<OBJECT ID>"
    }
},
Delete the resource of type Microsoft.ContainerService/managedClusters/privateEndpointConnections:
{
    "type": "Microsoft.ContainerService/managedClusters/privateEndpointConnections",
    "apiVersion": "2023-08-02-preview",
    . . .
}
- If the cluster is being recovered in a different availability zone, set the appropriate zone for the cluster and node pools in the template.json file:
"availabilityZones": [ "<zone number>" ],
- Save the template.json file to be used later for restore.
Note:
As Microsoft Azure is updated, more changes may be required in the template. For more information, refer to Use Azure portal to export a template.
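For reference, the template.json used above could also have been captured during the backup phase from the Azure CLI rather than the portal (a minimal sketch, assuming the az group export command; the resource group name is a placeholder, and the documented method remains the portal export linked above):
# Export the ARM template of the resource group that contains the cluster (sketch).
az group export --name <resource_group> > template.json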
AKS cluster recovery
- Delete the cluster that needs to be recovered (so that the same name, IP addresses, and so on can be reused).
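For example, the deletion can be performed through the Azure CLI (a sketch; the resource group and cluster name are the same placeholders used in the commands below):
# Delete the damaged cluster so that its name and IP addresses can be reused (sketch).
az aks delete --resource-group <resource_group> --name <cluster-name> --yes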
- Re-create the cluster using the template saved earlier as follows:
Ensure that you are logged in through the Azure CLI with the subscription set to the one where the cluster is present:
az deployment group create --resource-group <resource_group> --name <deployment_name> --template-file template.json
Here,
resource_group: Resource group where cluster is present.
deployment_name: Name under which the deployment appears in the Deployments section of the resource group in the Azure portal.
Once the deployment completes successfully without errors, a deployment summary is displayed in JSON format.
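If needed, the provisioning state can also be confirmed from the CLI (a sketch using the same placeholders):
# Query the deployment state; the expected output is "Succeeded" (sketch).
az deployment group show --resource-group <resource_group> --name <deployment_name> --query properties.provisioningState -o tsv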
Connect to the cluster and perform the remaining recovery steps through the CLI:
Connect to the cluster:
az login
az account set --subscription <subscriptionID>
az aks get-credentials --resource-group <resource_group> --name <cluster-name>
kubelogin convert-kubeconfig -l azurecli
Here,
subscriptionID: Subscription ID where cluster is present.
resource_group: Resource group where cluster is present.
cluster-name: Name of the cluster.
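As a quick sanity check that the kubeconfig and login work, a standard Kubernetes query can be run (illustrative only):
# List the cluster nodes; all node pools should report a Ready status (sketch).
kubectl get nodes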
Attach ACR:
az aks update -n <cluster-name> -g <resource_group> --attach-acr <ContainerRegistry>
Here,
ContainerRegistry: Name of Azure container registry where images are pushed.
resource_group: Resource group where cluster is present.
cluster-name: Name of the cluster.
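The attachment can be verified with the Azure CLI (a sketch; same placeholders as above):
# Validate that the cluster nodes can pull images from the attached registry (sketch).
az aks check-acr --resource-group <resource_group> --name <cluster-name> --acr <ContainerRegistry>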
Authorize cluster to access Virtual Networks:
If authorization is done through the cluster service principal, perform the following steps:
Get Service Principal ID:
az resource list -n <cluster_name> --query [*].identity.principalId --out tsv
Role assignment:
az role assignment create --assignee <clusterServicePrincipal> --role '<nbux-deployment-role>' --scope /subscriptions/<subscriptionID>/resourceGroups/<resource_group>/providers/Microsoft.Network/virtualNetworks/<Vnet_podips>/subnets/<Subnet_podips>
az role assignment create --assignee <clusterServicePrincipal> --role '<nbux-deployment-role>' --scope /subscriptions/<subscriptionID>/resourceGroups/<resource_group>/providers/Microsoft.Network/virtualNetworks/<Vnet_LoadBalancer>/subnets/<Subnet_LoadBalancer_ips>
Here,
clusterServicePrincipal: Service Principal ID of cluster.
nbux-deployment-role: Custom Role that has necessary permissions for NetBackup deployment.
subscriptionID: Subscription ID where virtual network is present.
resource_group: Resource group where virtual network is present.
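The assignments can then be confirmed before proceeding (a sketch; same placeholders as above):
# List the role assignments held by the cluster service principal (sketch).
az role assignment list --assignee <clusterServicePrincipal> --all --output table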
If a user-assigned or system-assigned managed identity was attached to the scale sets, re-attach the same identity to the scale sets.
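For a user-assigned identity, the re-attachment might look like the following (a hedged sketch; the node resource group, scale set name, and identity resource ID are placeholders to be taken from the backup records):
# Re-attach the user-assigned managed identity to the node pool scale set (sketch).
az vmss identity assign --resource-group <node_resource_group> --name <vmss_name> --identities <identity_resource_id>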
Refresh the credentials of the cluster as follows:
az aks get-credentials --resource-group <resource_group> --name <cluster_name>
Here,
resource_group: Resource group where cluster is present.
cluster_name: Name of the cluster.
Note:
During the recovery steps, keep the output files of all commands handy. Refer to the appropriate files saved during the backup steps for each command.
For the IAM role and security group, the user can refer to the files that were created during the backup steps for IAM roles and security groups.
EKS cluster recovery
- Cluster recovery:
To create a new cluster, the user can refer to the following fields from the output of the aws eks describe-cluster --name <Cluster name> command:
Name, Kubernetes version, Cluster service role, Tags, VPC, Subnet, Security Group, and Cluster endpoint access
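Those fields map onto a create call along these lines (a hedged sketch; every value is a placeholder to be filled from the describe-cluster output saved during backup):
# Re-create the cluster from the values captured during backup (sketch).
aws eks create-cluster --name <cluster-name> \
    --kubernetes-version <version> \
    --role-arn <cluster-service-role-arn> \
    --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>,endpointPublicAccess=<true|false>,endpointPrivateAccess=<true|false> \
    --tags <key>=<value>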
- Nodegroup recovery:
To create a node group for the new cluster, the user can refer to the following fields from the output of the aws eks describe-nodegroup --nodegroup-name "<nodegroup-name>" --cluster-name <cluster-name> command:
Nodegroup name, Cluster name, Scaling config, Instance type, Node role, Disk size, Labels, Taints, Tags, and Subnet
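A corresponding create call could look as follows (a hedged sketch; all values are placeholders from the saved describe-nodegroup output):
# Re-create the node group from the values captured during backup (sketch).
aws eks create-nodegroup --cluster-name <cluster-name> \
    --nodegroup-name <nodegroup-name> \
    --scaling-config minSize=<min>,maxSize=<max>,desiredSize=<desired> \
    --instance-types <instance-type> \
    --node-role <node-role-arn> \
    --disk-size <size-in-GiB> \
    --labels <key>=<value> \
    --subnets <subnet-id>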
- File system:
To create new file system storage, the user can refer to the following fields from the output of the aws efs describe-file-systems --file-system-id <EFS ID> and aws efs describe-mount-targets --file-system-id <EFS ID> commands:
Name, Virtual Private Cloud (VPC), Performance mode, Throughput mode, Provisioned Throughput (applicable only if Throughput mode is provisioned), and Network access (mount targets: AZ, SubnetID, IPAddr, SecurityGroup)
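The file system and its mount targets could be re-created along these lines (a hedged sketch; the modes and IDs are placeholders from the saved describe output):
# Re-create the EFS file system from the values captured during backup (sketch).
aws efs create-file-system --performance-mode <generalPurpose|maxIO> \
    --throughput-mode <bursting|provisioned> \
    --tags Key=Name,Value=<file-system-name>
# Re-create one mount target per availability zone recorded during backup (sketch).
aws efs create-mount-target --file-system-id <EFS ID> \
    --subnet-id <subnet-id> \
    --security-groups <sg-id>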
- Add-ons:
Once the cluster is up and running, the user must install the add-ons listed by the following command:
aws eks list-addons --cluster-name <cluster-name>
In addition to the listed add-ons, the user must install the AWS Load Balancer Controller add-on and the Amazon EFS CSI driver.
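Each listed add-on can then be reinstalled with create-addon, as in the following sketch for the EFS CSI driver (the add-on name is the standard EKS identifier; the IAM role ARN is a placeholder, and the AWS Load Balancer Controller is typically installed separately, for example through its Helm chart):
# Reinstall an add-on reported by list-addons, for example the EFS CSI driver (sketch).
aws eks create-addon --cluster-name <cluster-name> \
    --addon-name aws-efs-csi-driver \
    --service-account-role-arn <iam-role-arn>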
Note:
If deploying in a different availability zone, choose a subnet from that availability zone.
Note:
If there is an availability zone failure or corruption in the AKS/EKS cluster and the cluster cannot be recovered, perform the procedure in the following section to recover Cloud Scale:
More Information