NetBackup™ Web UI Kubernetes Administrator's Guide
- Introducing the NetBackup web user interface
- Monitoring NetBackup
- Overview of NetBackup for Kubernetes
- Deploying and configuring the NetBackup Kubernetes operator
- Managing Kubernetes assets
- Protecting Kubernetes assets
- Recovering Kubernetes assets
- Troubleshooting Kubernetes issues
Configuring clusters for Kubernetes
You must configure the clusters before you can deploy the NetBackup™ Kubernetes operator. You can deploy the NetBackup Kubernetes operator by using Helm charts on three different platforms:
Red Hat OpenShift
Google Kubernetes Engine (GKE)
VMware Tanzu
Before you begin, make sure that you have the required privileges in your OpenShift account to perform these operations.
To configure OpenShift:
Log on to OpenShift using the oc CLI, with the following command:
oc login --token=<TOKEN> --server=<URL>
Where:
<TOKEN> is your logon token.
<URL> is your OpenShift server URL.
Note:
You can get the token and the URL by logging on to your OpenShift web console. At the top right of the home page, click the name of the admin account that you used to log on, and then click the Copy Login Command option. On the page that opens, click Display Token to see the command.
This command adds a new kubectl context to the ~/.kube/config file and sets it as the current kubectl context.
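Assuming that both oc and kubectl are available on the workstation, you can confirm that the logon succeeded and that the new context is active:

oc whoami
kubectl config current-context

The first command prints the user that you are logged on as; the second prints the context that the oc login command added.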
Before you begin, make sure that you have the required privileges in your GKE account to perform these operations.
Prerequisites:
The port number for GKE clusters can be 443, 6443, or 8443. The default port is 443. Verify the correct secure port number before you add the cluster.
When creating persistent volumes or persistent volume claims on GKE, specify a storage class whose provisioner is kubernetes.io/gce-pd.
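For reference, a storage class that satisfies this requirement might look like the following sketch. The class name nb-gce-pd and the pd-ssd disk type are placeholders; only the provisioner value of kubernetes.io/gce-pd is the documented requirement.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nb-gce-pd            # placeholder name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd               # placeholder disk type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

A persistent volume claim then selects this class through its storageClassName field.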
To log on using an existing account:
- Use the following command to log on to the GKE account by using an existing user account:
gcloud auth login <account>
- Enter the logon credentials interactively or non-interactively.
- To list all the clusters and find the cluster name, run the command:
gcloud container clusters list
The output looks like this:
NAME               LOCATION       MASTER_VERSION    MASTER_IP
csi-cluster        us-central1-c  1.17.14-gke.400   35.238.135.170
sailor             us-central1-c  1.16.15-gke.6000  35.224.28.128
surens-cluster     us-east1-b     1.17.14-gke.1600  35.231.17.183
bw-kube-cluster-1  us-east1-c     1.16.15-gke.6000  35.196.24.132
- To get credentials for the cluster and add them to the .kube/config file, run this command: gcloud container clusters get-credentials <cluster name>
For example: gcloud container clusters get-credentials bw-kube-cluster-1
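Assuming that kubectl is installed, you can confirm that the cluster credentials were merged into the configuration file and that the cluster is reachable:

kubectl config current-context
kubectl get nodes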
Alternatively, you can create and use a dedicated service account for your cluster to log on.
To create a dedicated service account:
- To create a service account, run this command:
gcloud iam service-accounts create <account name> --display-name "<account description>"
For example: gcloud iam service-accounts create veritas-netbackup-k8s-sa --display-name "Veritas NetBackup K8s Service Account"
- To list the service accounts and confirm that the account was created, run this command:
gcloud iam service-accounts list --filter <account name>@<project ID>.iam.gserviceaccount.com
For example: gcloud iam service-accounts list --filter veritas-netbackup-k8s-sa@projectID.iam.gserviceaccount.com
- To download the service account key, run this command:
gcloud iam service-accounts keys create <key json file name> --iam-account <e-mail address of the service account>
For example: gcloud iam service-accounts keys create veritas-netbackup-k8s-sa-key.json --iam-account <e-mail ID of the service account>
- To create a custom role for the service account from a role definition file, run this command (a sample role definition file is shown after this procedure):
gcloud iam roles create <role name> --project <project ID> --file ./<role name>.yaml
For example: gcloud iam roles create rolename --project projectID --file ./rolename.yaml
- To activate the service account, run this command:
gcloud auth activate-service-account --project=<project ID> --key-file=<key file name>
For example: gcloud auth activate-service-account --project=<YOUR PROJECT ID> --key-file=veritas-netbackup-k8s-sa-key.json
- To list all the clusters and find the cluster name, run the command:
gcloud container clusters list
The output looks like this:
NAME               LOCATION       MASTER_VERSION    MASTER_IP
csi-cluster        us-central1-c  1.17.14-gke.400   35.238.135.170
sailor             us-central1-c  1.16.15-gke.6000  35.224.28.128
surens-cluster     us-east1-b     1.17.14-gke.1600  35.231.17.183
bw-kube-cluster-1  us-east1-c     1.16.15-gke.6000  35.196.24.132
- To get credentials for the cluster and add them to the .kube/config file, run this command: gcloud container clusters get-credentials <cluster name>
For example: gcloud container clusters get-credentials bw-kube-cluster-1
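The role definition file that is passed to gcloud iam roles create uses the standard custom-role YAML format. The following is only a sketch: the title, description, and the listed permissions are placeholder values, not the exact permission set that NetBackup requires.

title: rolename
description: Custom role for the NetBackup Kubernetes service account
stage: GA
includedPermissions:
- container.clusters.get      # placeholder permissions; adjust for your environment
- container.clusters.list

After the custom role exists, it can be granted to the service account with the gcloud projects add-iam-policy-binding command.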
Before you begin, make sure that you have the required privileges in your Tanzu account to perform these operations. Make sure that the TKG client is installed.
To add an existing Tanzu management cluster to your local TKG instance:
- Copy the .kube-tkg/config file from the management cluster to the local user's home directory (~/).
- Run the command: chmod 775 ~/.kube-tkg/config
- Run the command: export KUBECONFIG=~/.kube-tkg/config
- To get a list of the management clusters and their contexts, run the command: tkg get mc. The output looks like this:
MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME               STATUS
tkg-mgmt *               tkg-mgmt-admin@tkg-mgmt    Success
tkg1-mgmt                tkg1-mgmt-admin@tkg1-mgmt  Success
tkg2-mgmt                tkg2-mgmt-admin@tkg2-mgmt  Success
- To switch to the TKG context, run the command: tkg set mc tkg1-mgmt
The current management cluster context is switched to tkg1-mgmt.
- To check the kubectl context, run the command: kubectl config get-contexts. The output looks like this:
CURRENT  NAME                       CLUSTER    AUTHINFO         NAMESPACE
         tkg1-mgmt-admin@tkg1-mgmt  tkg1-mgmt  tkg1-mgmt-admin
         tkg2-mgmt-admin@tkg2-mgmt  tkg2-mgmt  tkg2-mgmt-admin
- To check the management clusters in the local TKG instance, run the command: tkg get mc. The output looks like this:
MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME               STATUS
tkg-mgmt                 tkg-mgmt-admin@tkg-mgmt    Success
tkg1-mgmt *              tkg1-mgmt-admin@tkg1-mgmt  Success
tkg2-mgmt                tkg2-mgmt-admin@tkg2-mgmt  Success
- To get all the clusters in your current context, run the command: tkg get clusters. The output looks like this:
NAME           NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES
tkg1-cluster1  default    running  3/3           3/3      v1.19.3+vmware.1
tkg1-cluster2  default    running  3/3           3/3      v1.19.3+vmware.1
tkg1-cluster3  default    running  3/3           3/3      v1.19.3+vmware.1
- To add credentials to the kubectl configuration file, run the command: tkg get credentials tkg1-cluster1
This saves the credentials of the workload cluster tkg1-cluster1 in the configuration file.
- To switch to the kubectl context, run the command: kubectl config use-context tkg1-cluster1-admin@tkg1-cluster1
- To check the kubectl context, run the command: kubectl config get-contexts
The output looks like this:
CURRENT  NAME                               CLUSTER        AUTHINFO             NAMESPACE
         tkg1-cluster1-admin@tkg1-cluster1  tkg1-cluster1  tkg1-cluster1-admin
         tkg1-mgmt-admin@tkg1-mgmt          tkg1-mgmt      tkg1-mgmt-admin
         tkg2-mgmt-admin@tkg2-mgmt          tkg2-mgmt      tkg2-mgmt-admin
Now you can run any kubectl command against the tkg1-cluster1 cluster.
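For example, assuming that the tkg1-cluster1-admin@tkg1-cluster1 context is active, the following commands give a quick check that the workload cluster responds to kubectl:

kubectl get nodes
kubectl get namespaces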