NetBackup™ for Kubernetes Administrator's Guide
- Overview of NetBackup for Kubernetes
- Deploying and configuring the NetBackup Kubernetes operator
- Prerequisites for NetBackup Kubernetes Operator deployment
- Deploy service package on NetBackup Kubernetes operator
- Port requirements for Kubernetes operator deployment
- Upgrade the NetBackup Kubernetes operator
- Delete the NetBackup Kubernetes operator
- Configure NetBackup Kubernetes data mover
- Automated configuration of NetBackup protection for Kubernetes
- Configure settings for NetBackup snapshot operation
- Troubleshooting NetBackup servers with short names
- Data mover pod schedule mechanism support
- Validating accelerator storage class
- Deploying certificates on NetBackup Kubernetes operator
- Managing Kubernetes assets
- Managing Kubernetes intelligent groups
- Managing Kubernetes policies
- Protecting Kubernetes assets
- Managing image groups
- Protecting Rancher managed clusters in NetBackup
- Recovering Kubernetes assets
- About incremental backup and restore
- Enabling accelerator based backup
- Enabling FIPS mode in Kubernetes
- About OpenShift Virtualization support
- Troubleshooting Kubernetes issues
- Error during the primary server upgrade: NBCheck fails
- Error during an old image restore: Operation fails
- Error during persistent volume recovery API
- Error during restore: Final job status shows partial failure
- Error during restore on the same namespace
- Datamover pods exceed the Kubernetes resource limit
- Error during restore: Job fails on the highly loaded cluster
- Custom Kubernetes role created for specific clusters cannot view the jobs
- OpenShift creates blank non-selected PVCs while restoring applications installed from OperatorHub
- NetBackup Kubernetes operator becomes unresponsive if the PID limit is exceeded on the Kubernetes node
- Failure during edit cluster in NetBackup Kubernetes 10.1
- Backup or restore fails for large sized PVC
- Restore of namespace file mode PVCs to different file system partially fails
- Restore from backup copy fails with image inconsistency error
- Connectivity checks between NetBackup primary, media, and Kubernetes servers
- Error during accelerator backup when there is no space available for track log
- Error during accelerator backup due to track log PVC creation failure
- Error during accelerator backup due to invalid accelerator storage class
- Error occurred during track log pod start
- Failed to set up the data mover instance for track log PVC operation
- Error reading track log storage class from ConfigMap
Data mover pod schedule mechanism support
Specify the following fields in the backup server ConfigMap to control how data mover pods are scheduled on cluster nodes.
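All of the examples in this section extend the same backup server ConfigMap (backupserver.sample.domain.com in the netbackup namespace). As a minimal sketch, assuming the edited ConfigMap is saved to a file named backupserver-configmap.yaml (the file name is illustrative), it can be applied and verified with standard kubectl commands:

# Apply the updated backup server ConfigMap, then read it back to confirm the fields.
kubectl apply -f backupserver-configmap.yaml
kubectl get configmap backupserver.sample.domain.com -n netbackup -o yaml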
nodeSelector: nodeSelector is the simplest way to constrain pods to nodes with specific labels.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.nodeSelector: |
    kubernetes.io/hostname: test1-l94jm-worker-k49vj
    topology.rook.io/rack: rack1
  version: "1"
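For this nodeSelector to match, the target node must carry the referenced labels. kubernetes.io/hostname is set automatically by the kubelet; a custom label such as topology.rook.io/rack in the sample above can be applied with a standard kubectl command (node name and label taken from the example; substitute your own):

# Label the node so that the datamover.nodeSelector in the ConfigMap can match it.
kubectl label nodes test1-l94jm-worker-k49vj topology.rook.io/rack=rack1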
nodeName: nodeName is a more direct form of node selection than affinity or nodeSelector. It allows you to specify the node on which a pod is scheduled for backup, overriding the default scheduling mechanism.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.nodeName: test1-l94jm-worker-hbblk
  version: "1"
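The value of datamover.nodeName must be an exact node name known to the cluster. The valid names can be listed with a standard kubectl command:

# List cluster nodes; the NAME column gives the values accepted by datamover.nodeName.
kubectl get nodes -o wide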
Taint and Toleration: Tolerations allow the scheduler to place pods on nodes with matching taints. Taints and tolerations work together to ensure that pods are scheduled onto appropriate nodes: if one or more taints are applied to a node, that node does not accept any pods that do not tolerate the taints.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "experimental"
      effect: "NoSchedule"
  version: "1"
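The toleration above only has an effect on nodes that carry a matching taint. As a sketch using the key, value, and effect from the example (the node name is illustrative), such a taint is applied, and later removed with the trailing minus sign, as follows:

# Taint the node; only pods tolerating dedicated=experimental:NoSchedule are scheduled there.
kubectl taint nodes test1-l94jm-worker-k49vj dedicated=experimental:NoSchedule
# Remove the same taint when it is no longer needed.
kubectl taint nodes test1-l94jm-worker-k49vj dedicated=experimental:NoSchedule-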
Affinity and Anti-affinity: Node affinity functions like the nodeSelector field, but it is more expressive and allows you to specify soft (preferred) rules in addition to hard requirements. Inter-pod affinity and anti-affinity allow you to constrain pods based on the labels of pods already running on the nodes.
Examples:
Node Affinity:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.affinity: |
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - test1-l94jm-worker-hbblk
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: beta.kubernetes.io/arch
            operator: In
            values:
            - amd64
  version: "1"
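The affinity rules above reference node labels, so it is worth confirming that those labels exist before relying on them. These are standard kubectl queries; note that beta.kubernetes.io/arch is deprecated in recent Kubernetes releases in favor of kubernetes.io/arch:

# Confirm the node targeted by the required rule exists with that hostname label.
kubectl get nodes -l kubernetes.io/hostname=test1-l94jm-worker-hbblk
# Show the architecture label used by the preferred rule as a column.
kubectl get nodes -L kubernetes.io/arch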
Pod Affinity:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.affinity: |
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: component
            operator: In
            values:
            - netbackup
        topologyKey: kubernetes.io/hostname
  version: "1"
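This pod affinity rule co-locates the data mover with pods labeled component=netbackup on the same node (the kubernetes.io/hostname topology key). To verify which pods carry that label, a standard kubectl query can be used (adjust the namespace to your deployment):

# List pods matching the affinity label selector, with the node each one runs on.
kubectl get pods -n netbackup -l component=netbackup -o wide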
topologySpreadConstraints: Topology spread constraints control how pods are spread across your cluster among failure domains such as regions, zones, nodes, and other user-defined topology domains.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.topologySpreadConstraints: |
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
  version: "1"
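Topology spread constraints only consider nodes that expose the configured topologyKey label. To view the value of kubernetes.io/hostname on each node (the -L flag prints the label as an extra column):

# Show the topologyKey label used by the spread constraint on every node.
kubectl get nodes -L kubernetes.io/hostname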
Labels: Labels are key/value pairs attached to objects, such as pods. Labels are intended to identify attributes of an object that are significant and relevant to users. Labels can be used to organize and select subsets of objects. Labels can be attached to objects at creation time and subsequently added or modified at any time.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.labels: |
    env: test
    pod: datamover
  version: "1"
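Once applied, these labels can be used to select the resulting data mover pods. As a sketch, assuming the pods run in the netbackup namespace and carry the sample labels above:

# Select the data mover pods by the labels set in datamover.labels.
kubectl get pods -n netbackup -l env=test,pod=datamover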
Annotations: You can use either labels or annotations to attach metadata to Kubernetes objects. Unlike labels, annotations cannot be used to identify and select objects.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.annotations: |
    buildinfo: |-
      [{
        "name": "test",
        "build": "1"
      }]
    imageregistry: "https://reg.domain.com/"
  version: "1"
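Annotations set this way can be read back from a running data mover pod. As a sketch (the pod name placeholder is illustrative), the imageregistry annotation from the example can be retrieved with:

# Print a single annotation from the pod metadata using a JSONPath expression.
kubectl get pod <datamover-pod-name> -n netbackup -o jsonpath='{.metadata.annotations.imageregistry}'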