NetBackup™ Web UI Kubernetes Administrator's Guide

Product(s): NetBackup (9.1)

Configuring clusters for Kubernetes

You must configure the clusters before you can deploy the NetBackup™ Kubernetes operator. You can deploy the NetBackup Kubernetes operator using Helm charts on three different platforms (a deployment sketch follows this list):

  • Red Hat OpenShift

  • Google Kubernetes Engine (GKE)

  • VMware Tanzu
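
For reference, a Helm-based deployment generally takes the following shape once a cluster is configured. The chart path, release name, and namespace shown here are placeholders, not the actual NetBackup chart coordinates; use the values that ship with your NetBackup Kubernetes operator distribution.

    # Placeholder chart path and namespace; substitute the values from
    # your NetBackup operator distribution
    helm install netbackup-operator <chart-path> \
      --namespace <operator-namespace> --create-namespace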

Configuring OpenShift for NetBackup

Before you begin, make sure that you have the required privileges in your OpenShift account to perform these operations.

To configure OpenShift:

  • Log on to OpenShift through the oc CLI by using the following command:

    oc login --token=<TOKEN> --server=<URL>

    Where:

    • <TOKEN>: Your logon token

    • <URL>: Your OpenShift server URL

Note:

You can get the token and the URL by logging on to your OpenShift account. At the top right of the console home page, click the name of the admin account that you used to log on, and then click Copy Login Command. On the page that opens, click Display Token to see the command.

This command adds a new kubectl context to the ~/.kube/config file and sets it as the current kubectl context.
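
To confirm that the login succeeded and that the new context is active, you can run the following commands (a minimal check; the output depends on your account and cluster):

    # Show the logged-on OpenShift user and the active kubectl context
    oc whoami
    kubectl config current-context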

Configuring GKE for NetBackup

Before you begin, make sure that you have the required privileges in your GKE account to perform these operations.

Prerequisites:

  • The port number for GKE clusters can be 443, 6443, or 8443. The default port is 443. Verify the correct secure port number before you add the cluster.

  • When you create persistent volumes or persistent volume claims on GKE, specify a storage class whose provisioner is kubernetes.io/gce-pd (see the check after this list).
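
After you have credentials for a cluster (see the procedures below), one way to confirm that a suitable storage class exists is to list the storage classes and their provisioners. The class names in your cluster will differ:

    # The PROVISIONER column should show kubernetes.io/gce-pd for the
    # storage class that you plan to use
    kubectl get storageclass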

To log on using an existing account:

  1. To log on to the GKE account by using an existing user account, run this command:

    gcloud auth login <account>

  2. Enter the logon credentials interactively or non-interactively.
  3. To list all the clusters and find the cluster name, run the command:

    gcloud container clusters list

    The output looks like this:

    NAME               LOCATION       MASTER_VERSION    MASTER_IP       
    csi-cluster        us-central1-c  1.17.14-gke.400   35.238.135.170       
    sailor             us-central1-c  1.16.15-gke.6000  35.224.28.128         
    surens-cluster     us-east1-b     1.17.14-gke.1600  35.231.17.183       
    bw-kube-cluster-1  us-east1-c     1.16.15-gke.6000  35.196.24.132
  4. To get credentials for the cluster and add it to .kube/config, run this command:

    gcloud container clusters get-credentials <cluster name>

    For example: gcloud container clusters get-credentials bw-kube-cluster-1
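
If gcloud does not have a default compute zone or region configured, get-credentials also needs the cluster location. For example, using the zone of bw-kube-cluster-1 from the listing above:

    gcloud container clusters get-credentials bw-kube-cluster-1 --zone us-east1-c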

Alternatively, you can create a dedicated service account for your cluster and use it to log on.

To create a dedicated service account:

  1. To create an account, run this command:

    gcloud iam service-accounts create <account name> --display-name "<account description>"

    For example: gcloud iam service-accounts create veritas-netbackup-k8s-sa --display-name "Veritas NetBackup K8s Service Account"

  2. To list the service accounts, run this command:

    gcloud iam service-accounts list --filter <account name>@<project ID>.iam.gserviceaccount.com

    For example: gcloud iam service-accounts list --filter veritas-netbackup-k8s-sa@projectID.iam.gserviceaccount.com

  3. To download the service account key, run this command:

    gcloud iam service-accounts keys create <key json file name> --iam-account <e-mail address of the service account>

    For example: gcloud iam service-accounts keys create veritas-netbackup-k8s-sa-key.json --iam-account <e-mail ID of the service account>

  4. To create a custom role from a YAML definition, run this command (binding the role to the service account is sketched after this procedure):

    gcloud iam roles create <role name> --project <project ID> --file ./<role name>.yaml

    For example: gcloud iam roles create rolename --project projectID --file ./rolename.yaml

  5. To activate the service account, run this command:

    gcloud auth activate-service-account --project=<project ID> --key-file=<key file name>

    For example: gcloud auth activate-service-account --project=projectID --key-file=veritas-netbackup-k8s-sa-key.json

  6. To list all the clusters and find the cluster name, run the command:

    gcloud container clusters list

    The output looks like this:

    NAME               LOCATION       MASTER_VERSION    MASTER_IP       
    csi-cluster        us-central1-c  1.17.14-gke.400   35.238.135.170       
    sailor             us-central1-c  1.16.15-gke.6000  35.224.28.128         
    surens-cluster     us-east1-b     1.17.14-gke.1600  35.231.17.183       
    bw-kube-cluster-1  us-east1-c     1.16.15-gke.6000  35.196.24.132  
  7. To get credentials for the cluster and add it to .kube/config, run this command:

    gcloud container clusters get-credentials <cluster name>

    For example: gcloud container clusters get-credentials bw-kube-cluster-1
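
Note that step 4 creates the custom role but does not by itself grant it to the service account. A typical follow-up, sketched here with the same placeholder project, account, and role names, is to bind the role at the project level:

    # Grant the custom role to the service account
    # (the project ID, account name, and role name are placeholders)
    gcloud projects add-iam-policy-binding <project ID> \
      --member serviceAccount:<account name>@<project ID>.iam.gserviceaccount.com \
      --role projects/<project ID>/roles/<role name>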

Configuring VMware Tanzu for NetBackup

Before you begin, make sure that you have the required privileges in your Tanzu account to perform these operations. Make sure that you have the TKG client installed.
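
To confirm that the TKG client is installed and on your path, you can run the following command (the output depends on your TKG version):

    tkg version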

Add an existing Tanzu management cluster to your local TKG instance:

  1. Copy the .kube-tkg/config file from the management cluster to the local user home directory (~/.kube-tkg/config).
  2. Run the command: chmod 775 ~/.kube-tkg/config
  3. Run the command: export KUBECONFIG=~/.kube-tkg/config
  4. To get a list of the contexts, run the command: tkg get mc. The output looks like this:
     
      MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME               STATUS
      tkg-mgmt *               tkg-mgmt-admin@tkg-mgmt    Success
      tkg1-mgmt                tkg1-mgmt-admin@tkg1-mgmt  Success
      tkg2-mgmt                tkg2-mgmt-admin@tkg2-mgmt  Success
  5. To switch to the TKG context, run the command: tkg set mc tkg1-mgmt

    The current management cluster context is switched to tkg1-mgmt.

  6. To check the kubectl context, run the command: kubectl config get-contexts. The output looks like this:
     CURRENT  NAME                       CLUSTER    AUTHINFO         NAMESPACE
              tkg1-mgmt-admin@tkg1-mgmt  tkg1-mgmt  tkg1-mgmt-admin
              tkg2-mgmt-admin@tkg2-mgmt  tkg2-mgmt  tkg2-mgmt-admin
  7. To check the management clusters in the local TKG instance, run the command: tkg get mc. The output looks like this:
      MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME               STATUS
      tkg-mgmt                 tkg-mgmt-admin@tkg-mgmt    Success
      tkg1-mgmt *              tkg1-mgmt-admin@tkg1-mgmt  Success
      tkg2-mgmt                tkg2-mgmt-admin@tkg2-mgmt  Success
  8. To get all the clusters in your current context, run the command: tkg get clusters. The output looks like this:
      NAME           NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES
      tkg1-cluster1  default    running  3/3           3/3      v1.19.3+vmware.1
      tkg1-cluster2  default    running  3/3           3/3      v1.19.3+vmware.1
      tkg1-cluster3  default    running  3/3           3/3      v1.19.3+vmware.1
  9. To add credentials to kubectl configuration file, run the command: tkg get credentials tkg1-cluster1

    This command saves the credentials of the workload cluster tkg1-cluster1 in the configuration file and creates the context tkg1-cluster1-admin@tkg1-cluster1.

  10. To switch to the kubectl context, run the command: kubectl config use-context tkg1-cluster1-admin@tkg1-cluster1
  11. To check the kubectl context, run the command: kubectl config get-contexts

    The output looks like this:

     CURRENT  NAME                               CLUSTER        AUTHINFO             NAMESPACE
     *        tkg1-cluster1-admin@tkg1-cluster1  tkg1-cluster1  tkg1-cluster1-admin
              tkg1-mgmt-admin@tkg1-mgmt          tkg1-mgmt      tkg1-mgmt-admin
              tkg2-mgmt-admin@tkg2-mgmt          tkg2-mgmt      tkg2-mgmt-admin

    You can now run kubectl commands against the tkg1-cluster1 cluster.
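
As a final check that the workload-cluster context is usable, you can run the following commands. The context and cluster names match the examples above; yours will differ:

    # Confirm the active context and that the cluster responds
    kubectl config current-context
    kubectl get nodes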