NetBackup™ for Kubernetes Administrator's Guide
- Overview of NetBackup for Kubernetes
- Deploying and configuring the NetBackup Kubernetes operator
- Prerequisites for NetBackup Kubernetes Operator deployment
- Deploy service package on NetBackup Kubernetes operator
- Port requirements for Kubernetes operator deployment
- Upgrade the NetBackup Kubernetes operator
- Delete the NetBackup Kubernetes operator
- Configure NetBackup Kubernetes data mover
- Automated configuration of NetBackup protection for Kubernetes
- Customize Kubernetes workload
- Troubleshooting NetBackup servers with short names
- Data mover pod schedule mechanism support
- Validating accelerator storage class
- Deploying certificates on NetBackup Kubernetes operator
- Managing Kubernetes assets
- Managing Kubernetes intelligent groups
- Managing Kubernetes policies
- Protecting Kubernetes assets
- Managing image groups
- Protecting Rancher managed clusters in NetBackup
- Recovering Kubernetes assets
- About incremental backup and restore
- Enabling accelerator based backup
- Enabling FIPS mode in Kubernetes
- About Openshift Virtualization support
- Troubleshooting Kubernetes issues
- Error during the primary server upgrade: NBCheck fails
- Error during an old image restore: Operation fails
- Error during persistent volume recovery API
- Error during restore: Final job status shows partial failure
- Error during restore on the same namespace
- Datamover pods exceed the Kubernetes resource limit
- Error during restore: Job fails on the highly loaded cluster
- Custom Kubernetes role created for specific clusters cannot view the jobs
- Openshift creates blank non-selected PVCs while restoring applications installed from OperatorHub
- NetBackup Kubernetes operator becomes unresponsive if the PID limit is exceeded on the Kubernetes node
- Failure during edit cluster in NetBackup Kubernetes 10.1
- Backup or restore fails for large sized PVC
- Restore of namespace file mode PVCs to different file system partially fails
- Restore from backup copy fails with image inconsistency error
- Connectivity checks between NetBackup primary, media, and Kubernetes servers
- Error during accelerator backup when there is no space available for track log
- Error during accelerator backup due to track log PVC creation failure
- Error during accelerator backup due to invalid accelerator storage class
- Error occurred during track log pod start
- Failed to set up the data mover instance for track log PVC operation
- Error to read track log storage class from configmap
Datamover pods exceed the Kubernetes resource limit
NetBackup controls the total number of in-progress backup jobs on the Kubernetes workload using two resource limit properties. In NetBackup version 10.0, data mover pods can exceed the resource limits set per Kubernetes cluster.
Scenario 1
The resource limit for Backup from Snapshot jobs per Kubernetes cluster is set to 1.
Job IDs 3020 and 3021 are the parent jobs for Backup from Snapshot. The creation of the data mover pod and its cleanup process are part of the backup job life cycle.
Job ID 3022 is the child job, where the data movement takes place from the cluster to the storage unit.
Based on the resource limit setting, while job ID 3022 is in the running state, job ID 3021 remains in the queued state. Once backup job ID 3022 completes, the parent job ID 3021 starts.
Notice that job ID 3020 is still in progress, because the data mover pod is being cleaned up to complete the life cycle of parent job ID 3020.
Scenario 2
At this stage, two data mover pods may be running simultaneously in the NetBackup Kubernetes operator deployment namespace, because the data mover pod created as part of job ID 3020 is not yet cleaned up while the data mover pod for job ID 3021 has already been created.
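To confirm how many data mover pods are active at a given time, you can list the pods in the operator deployment namespace. The commands below are a minimal sketch; the namespace name (netbackup-operator-system) and the "datamover" name prefix are assumptions and may differ in your deployment.

```
# List all pods in the NetBackup Kubernetes operator deployment namespace.
# Replace netbackup-operator-system with the namespace used in your deployment.
kubectl get pods -n netbackup-operator-system

# Count the pods whose names contain "datamover" (assumed naming convention)
# and compare the result with the resource limit configured for the cluster.
kubectl get pods -n netbackup-operator-system --no-headers | grep -c datamover
```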
In a busy environment, where multiple Backup from Snapshot jobs are triggered, a low resource limit value setting may lead to backup jobs spending most of the time in the queued state.
However, with a higher resource limit setting, the number of data mover pods may exceed the count specified in the resource limit. This can lead to resource starvation in the Kubernetes cluster.
While data movement jobs such as 3022 run in parallel, cleanup activities are handled sequentially. If the time taken to clean up the data mover resources is close to the time taken to back up the persistent volume or namespace data, the combined duration results in a longer delay in job completion.
Recommended action: Review your system resources and performance, and set the resource limit value accordingly. This helps you achieve the best performance for all backup jobs.
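As a starting point, the following commands show one way to review current resource usage on the cluster before adjusting the resource limit value. Note that kubectl top requires the Kubernetes metrics server to be installed, and the namespace name is an assumption.

```
# Show CPU and memory usage per node (requires the metrics server).
kubectl top nodes

# Show CPU and memory usage of pods in the operator deployment namespace
# (replace netbackup-operator-system with your namespace) to estimate how
# many data mover pods the cluster can run concurrently.
kubectl top pods -n netbackup-operator-system

# Review the allocatable capacity and current requests/limits on each node.
kubectl describe nodes | grep -A 5 "Allocated resources"
```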