NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolve an issue related to KMS database
- Resolve an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes a long time
- Local connection is getting treated as insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Troubleshooting AKS and EKS issues
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Data-Migration for AKS
This section describes the workflow, execution, and status details of data migration for AKS.
The data migration job starts whenever the storage class name changes for the primary server's catalog, log, or data volumes. For existing NetBackup deployments, the migration job transfers the primary server's file system data from Azure disks to Azure premium files. If you are deploying NetBackup for the first time, it is considered a fresh installation and you can use Azure premium files directly for the primary server's catalog volume. The primary server's log and data volumes support Azure disks only.
For existing NetBackup deployments, the migration job copies the primary server's old Azure disk catalog volume to the new Azure files volume, except for the NBDB data, which is copied to the new Azure disk-based data volume. Logs can be migrated to a new Azure disk log volume.
To invoke the migration job, provide the Azure premium files storage class for the catalog volume in the environment.yaml file. You can also provide a new Azure disks storage class for the log volume, and a new Azure disk-based data volume must be provided in environment.yaml. After successful data migration, the migration status is updated to Success in the primary server CRD.
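As a sketch, the storage class change that triggers the migration job might look like the following environment.yaml fragment. The storage class names (azurefile-premium, managed-csi-premium), capacities, and exact field layout are assumptions for illustration; use the storage classes defined in your cluster and the schema of your NetBackup operator version.

```yaml
# Hypothetical environment.yaml fragment: changing the catalog volume to an
# Azure premium files storage class triggers the data migration job.
primary:
  storage:
    catalog:
      storageClassName: azurefile-premium    # Azure premium files (new)
      capacity: 500Gi
    log:
      storageClassName: managed-csi-premium  # Azure disks only
      capacity: 100Gi
    data:
      storageClassName: managed-csi-premium  # Azure disk-based data volume
      capacity: 50Gi
```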
Note:
Migration takes longer depending on the catalog data size.
Data migration is carried out in the NetBackup Kubernetes cluster for the primary server CR only. There is one migration job per primary volume, and each job is part of the NetBackup environment namespace and creates a pod in the cluster.
An execution summary of the data migration can be retrieved from the migration pod logs using the following command:
kubectl logs <migration-pod-name> -n <netbackup-environment-namespace>
This summary can also be retrieved from the operator pod logs using the following command:
kubectl logs <netbackup-operator-pod-name> -n <netbackup-environment-namespace>
The status of the data migration can be retrieved from the primary server CR using the following command:
kubectl describe PrimaryServer <CR name> -n <netbackup-environment-namespace>
Following are the data migration statuses:
Success: Indicates that all necessary conditions for the migration of the primary server passed.
Failed: Indicates that some or all necessary conditions for the migration of the primary server failed.
Running: Indicates that the migration is in progress for the primary server.
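On a live cluster you would read the status from the kubectl describe output shown above. As a self-contained sketch, the snippet below extracts the status value from a sample of that output; the field name "Migration Status" and the sample layout are assumptions about the CRD's describe format, not verified output.

```shell
# Hypothetical sample of `kubectl describe PrimaryServer <CR name>` output.
# On a live cluster, replace sample_describe with the real kubectl command.
sample_describe() {
cat <<'EOF'
Name:    environment-sample-primary
Kind:    PrimaryServer
Status:
  Migration Status:  Success
EOF
}

# Extract the value after "Migration Status:" and trim surrounding spaces.
sample_describe | awk -F':' '/Migration Status/ {gsub(/^ +| +$/, "", $2); print $2}'
```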
If the data migration status is Failed, you can check the migration job logs using the following command:
kubectl logs <migration-pod-name> -n <netbackup-environment-namespace>
Review the error codes and error messages pertaining to the failure and update the primary server CR with the correct configuration details to resolve the errors.
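When triaging a failed migration, it can help to filter the pod logs down to the error lines before looking up the codes. The snippet below demonstrates this on a hypothetical log sample; the log line format and the error text shown are assumptions for illustration, not actual NetBackup output.

```shell
# Hypothetical migration pod log sample. On a live cluster use:
#   kubectl logs <migration-pod-name> -n <netbackup-environment-namespace>
sample_logs() {
cat <<'EOF'
INFO  starting catalog volume copy
ERROR rsync exited with status 23: partial transfer
INFO  retrying copy
EOF
}

# Keep only the error lines for review against the Status Codes Reference Guide.
sample_logs | grep -E '^ERROR'
```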
For more information about the error codes, refer to the NetBackup™ Status Codes Reference Guide.