Important Update: Cohesity Products Documentation
All Cohesity product documentation is now managed via the Cohesity Docs Portal: https://docs.cohesity.com/HomePage/Content/home.htm. Some documentation available here may not reflect the latest information or may no longer be accessible.
NetBackup™ for Kubernetes Administrator's Guide
- Overview of NetBackup for Kubernetes
- Deploying and configuring the NetBackup Kubernetes operator
- Prerequisites for NetBackup Kubernetes Operator deployment
- Deploy service package on NetBackup Kubernetes operator
- Port requirements for Kubernetes workload
- Upgrade the NetBackup Kubernetes operator
- Delete the NetBackup Kubernetes operator
- Configure NetBackup Kubernetes data mover
- Automated configuration of NetBackup protection for Kubernetes
- Customize Kubernetes workload
- Troubleshooting NetBackup servers with short names
- Data mover pod schedule mechanism support
- Validating accelerator storage class
- Deploying certificates on NetBackup Kubernetes operator
- Managing Kubernetes assets
- Managing Kubernetes intelligent groups
- Managing Kubernetes policies
- Protecting Kubernetes assets
- Managing image groups
- Protecting Rancher managed clusters in NetBackup
- Recovering Kubernetes assets
- About incremental backup and restore
- Enabling accelerator based backup
- Enabling FIPS mode in Kubernetes
- About Openshift Virtualization support
- Troubleshooting Kubernetes issues
- Error during the primary server upgrade: NBCheck fails
- Error during an old image restore: Operation fails
- Error during persistent volume recovery API
- Error during restore: Final job status shows partial failure
- Error during restore on the same namespace
- Datamover pods exceed the Kubernetes resource limit
- Error during restore: Job fails on the highly loaded cluster
- Custom Kubernetes role created for specific clusters cannot view the jobs
- Openshift creates blank non-selected PVCs while restoring applications installed from OperatorHub
- NetBackup Kubernetes operator becomes unresponsive if the PID limit is exceeded on the Kubernetes node
- Failure during edit cluster in NetBackup Kubernetes 10.1
- Backup or restore fails for large sized PVC
- Restore of namespace file mode PVCs to different file system partially fails
- Restore from backup copy fails with image inconsistency error
- Connectivity checks between NetBackup primary, media, and Kubernetes servers
- Error during accelerator backup when there is no space available for track log
- Error during accelerator backup due to track log PVC creation failure
- Error during accelerator backup due to invalid accelerator storage class
- Error occurred during track log pod start
- Failed to setup the data mover instance for track log PVC operation
- Error to read track log storage class from configmap
- Restore operation fails when restoring single VM to a different cluster
- Auto-deployment of the Kubernetes workload fails during backupservercert creation
- Auto-deployment of the Kubernetes workload fails with time-out error
- Fail Fast restore strategy fails with error 2890: K8s restore data operation failed
Application consistent virtual machines backup
In KubeVirt, the virt-launcher pod is responsible for spawning the virtual machine (VM); a VM cannot run without it. To obtain an application-consistent backup, you must annotate the virt-launcher pod with the NetBackup pre and post hooks.
Commands to freeze and unfreeze the virtual machines:
/usr/bin/virt-freezer --freeze --name <vm-name> --namespace <namespace>
/usr/bin/virt-freezer --unfreeze --name <vm-name> --namespace <namespace>
# kubectl annotate pod -l vm.kubevirt.io/name=<vm-name> -n <vm-namespace> \
    netbackup-pre.hook.backup.velero.io/command='["/usr/bin/virt-freezer", "--freeze", "--name", "<vm-name>", "--namespace", "<vm-namespace>"]' \
    netbackup-pre.hook.backup.velero.io/container=compute \
    netbackup-post.hook.backup.velero.io/command='["/usr/bin/virt-freezer", "--unfreeze", "--name", "<vm-name>", "--namespace", "<vm-namespace>"]' \
    netbackup-post.hook.backup.velero.io/container=compute
In NetBackup, when performing a Restore Virtual Machine operation, Velero pre- and post-restore hooks are not executed for KubeVirt-based virtual machines. This limitation arises because KubeVirt dynamically generates launcher pods for virtual machines, and their creation process is decoupled from Velero's restore workflow. As a result, Velero is unable to associate or apply its hooks to these dynamically created pods.
To execute post-restore actions within a KubeVirt virtual machine, users can utilize the cloudInitNoCloud mechanism to inject and run scripts directly inside the guest operating system after the VM is restored.
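As an illustration of the cloudInitNoCloud approach, a post-restore script can be injected through the cloud-init user data in the VM specification. The VM name and the script path below are hypothetical; the exact disk and volume layout depends on your VM definition:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm              # hypothetical VM name
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: cloudinitdisk
              disk:
                bus: virtio
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              runcmd:
                # hypothetical script to run inside the guest after restore
                - /usr/local/bin/post-restore.sh
```

The runcmd entries execute inside the guest operating system when cloud-init runs on first boot of the restored VM, which makes this a practical substitute for the Velero post-restore hooks that cannot attach to dynamically created launcher pods.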
Warning:
Even though NetBackup supports Velero restore hooks, NetBackup has limitations when using them for KubeVirt-based VMs.
Note:
To achieve application consistency, the qemu-guest-agent must be installed on the virtual machines so that the KubeVirt-specific pre-exec and post-exec (freeze and unfreeze) rules can run.
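One way to confirm that the guest agent is running inside a VM (assuming a recent KubeVirt release) is to check the AgentConnected condition on the VirtualMachineInstance; the VM name and namespace below are placeholders:

```shell
# Prints "True" when qemu-guest-agent inside the guest is connected
kubectl get vmi <vm-name> -n <vm-namespace> \
  -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'
```

If the condition is absent or False, the freeze and unfreeze hooks cannot quiesce the guest file systems and the backup falls back to crash consistency.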
For more details about configuring the NetBackup pre and post hooks, see https://www.veritas.com/
Post-restore hook examples:
InitContainer Restore Hooks annotations:
init.hook.restore.velero.io/container-image
init.hook.restore.velero.io/container-name
init.hook.restore.velero.io/command
InitContainer Restore Hooks As Pod Annotation Example:
$ kubectl annotate pod -n <POD_NAMESPACE> <POD_NAME> \
init.hook.restore.velero.io/container-name=restore-hook \
init.hook.restore.velero.io/container-image=alpine:latest \
init.hook.restore.velero.io/command='["/bin/ash", "-c", "date"]'
Exec Restore Hooks annotations:
post.hook.restore.velero.io/container
post.hook.restore.velero.io/command
post.hook.restore.velero.io/on-error
post.hook.restore.velero.io/exec-timeout
post.hook.restore.velero.io/wait-timeout
Exec Restore Hooks As Pod Annotation Example:
$ kubectl annotate pod -n <POD_NAMESPACE> <POD_NAME> \
post.hook.restore.velero.io/container=postgres \
post.hook.restore.velero.io/command='["/bin/bash", "-c", "psql < /backup/backup.sql"]' \
post.hook.restore.velero.io/wait-timeout=5m \
post.hook.restore.velero.io/exec-timeout=45s \
post.hook.restore.velero.io/on-error=Continue