NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Preparing the environment for NetBackup installation on Kubernetes cluster
- Prerequisites for Snapshot Manager (AKS/EKS)
- Prerequisites for Kubernetes cluster configuration
- Prerequisites for Cloud Scale configuration
- Prerequisites for deploying environment operators
- Prerequisites for using private registry
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Tuning touch files
- Setting maximum jobs per client
- Setting maximum jobs per media server
- Enabling intelligent catalog archiving
- Enabling security settings
- Configuring email server
- Reducing catalog storage management
- Configuring zone redundancy
- Enabling client-side deduplication capabilities
- Parameters for logging (fluentbit)
- Managing media server configurations in Web UI
- Prerequisites
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing logging
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where an external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for a long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving issues when media server PVs are deleted
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving an issue related to the KMS database
- Resolving an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes a long time
- Local connection is getting treated as an insecure connection
- Backing up data from the primary server's /mnt/nbdata/ directory fails with the primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Request router logs
- Issues with NBPEM/NBJM
- Issues with logging feature for Cloud Scale
- The flexsnap-listener pod is unable to communicate with RabbitMQ
- Job remains in the queue for a long time
- Extracting logs if the nbwsapp or log-viewer pods are down
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Troubleshooting issue for bootstrapper pod
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- About MSDP Scaleout
- Prerequisites for MSDP Scaleout (AKS/EKS)
- Limitations in MSDP Scaleout
- MSDP Scaleout configuration
- Installing the docker images and binaries for MSDP Scaleout (without environment operators or Helm charts)
- Deploying MSDP Scaleout
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Parameters for logging (fluentbit)
The logging feature is introduced in NetBackup 10.5 to consolidate the log files that become distributed while NetBackup runs in a scale-out environment. Before configuring fluentbit, ensure that the following system requirements and prerequisites are met:
- System requirements: Cloud Scale deployment environment only
- Application requirements: Cloud Scale fluentbit pods and containers:
  - Fluentbit collector pod and containers
  - Fluentbit DaemonSet sender pods and containers
  - Fluentbit sidecar sender containers
To view the current values, execute the command:
helm get values fluentbit -n <netbackup namespace>
To apply the new fluentbit values, execute the command:
helm upgrade --install fluentbit <fluentbit .tgz> -f new-values.yaml -n <netbackup namespace>
Table: Fluentbit collector configuration variables

| Configuration value | Description |
|---|---|
| fluentbit.volume.pvcStorage | Size of the PVC created for the fluentbit collector to store logs. |
| collectorNodeSelector | Sets the node selector for the fluentbit collector pod. |
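For example, a minimal new-values.yaml that grows the collector PVC and pins the collector pod to a dedicated node pool could look like the following sketch; the 30Gi size and the agentpool label are illustrative assumptions, not shipped defaults:

```yaml
# new-values.yaml - illustrative sketch, not shipped defaults
fluentbit:
  volume:
    pvcStorage: 30Gi     # assumed example size for the collector PVC
collectorNodeSelector:
  agentpool: nbupool     # assumed node label; use a label present on your nodes
```

Apply the file with the helm upgrade command shown earlier in this section.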
Table: Log Cleanup configuration variables

| Configuration value | Description |
|---|---|
| fluentbit.volume.pvcStorage | Size of the PVC created for the fluentbit collector to store logs. |
| fluentbit.cleanup.retentionDays | (number of days) Number of days to retain logs before cleaning them up. For example, a value of 1 retains logs for 1 day: logs created the previous day are kept and no cleanup occurs; the next day, when those logs are 2 days old, they are cleaned up. Default: retentionDays: 7 |
| fluentbit.cleanup.retentionCleanupTime | (hh:mm) Time of day at which logs are cleaned up, based on the local Kubernetes time zone. The retention cleanup occurs once daily. Default: retentionCleanupTime: 04:00 |
| fluentbit.cleanup.utilizationCleanupFrequency | (number of minutes) Number of minutes to wait between subsequent utilization cleanups, measured from the start time of the previous cleanup, not from when it finishes. Default: utilizationCleanupFrequency: 60 |
| fluentbit.cleanup.highWatermark | (percent) Storage utilization percentage at which the utilization cleanup starts deleting data, one day at a time. Cleanup continues until utilization falls below the low watermark or only the current day's logs remain; in the latter case, individual files from the current day are deleted based on last-updated time. Default: highWatermark: 90 |
| fluentbit.cleanup.lowWatermark | (percent) Storage utilization percentage below which the utilization cleanup stops deleting data. Default: lowWatermark: 60 |
| fluentbit.namespaces | List of namespaces from which stdout logs are collected. Use this to filter out cluster logs and other logs unrelated to NetBackup, to reduce network traffic and keep the collected logs relevant. Default: netbackup, netbackup-operator-system |
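As an illustration of how the cleanup settings fit together, the sketch below assumes a 20Gi collector PVC; every value shown is an example, not a recommendation:

```yaml
# Illustrative cleanup settings for a 20Gi collector PVC
fluentbit:
  cleanup:
    retentionDays: 3                 # remove logs once they are older than 3 days
    retentionCleanupTime: "02:30"    # run the daily retention cleanup at 02:30
    utilizationCleanupFrequency: 30  # check utilization every 30 minutes
    highWatermark: 85                # start freeing space at 85% utilization
    lowWatermark: 50                 # stop once utilization drops below 50%
  namespaces:
    - netbackup
    - netbackup-operator-system
```

With a 20Gi PVC, these values start the day-by-day cleanup when usage reaches about 17Gi (85%) and stop it once usage falls below 10Gi (50%).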
Table: Fluentbit DaemonSet configuration variables

| Configuration value | Description |
|---|---|
| tolerations | Sets the tolerations of the DaemonSet pods, which help determine the nodes on which they are scheduled. |
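Tolerations follow the standard Kubernetes schema. In the sketch below, the taint key and value are assumptions chosen for illustration; match them to the taints actually applied to your nodes:

```yaml
# Allow the DaemonSet sender pods onto nodes tainted for NetBackup workloads
tolerations:
  - key: "nbu-pool"      # assumed taint key
    operator: "Equal"
    value: "media"       # assumed taint value
    effect: "NoSchedule"
```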
Resizing log volumes: All of the non-collector log volumes must remain the same size, because they all use a common variable in the bp.conf file to determine when they are cleaned up and how much space they may use.
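To resize the collector log volume, one approach is to raise fluentbit.volume.pvcStorage in new-values.yaml and re-run the helm upgrade command shown earlier; this sketch assumes the PVC's storage class permits volume expansion:

```yaml
# Grow the collector PVC to 50Gi (illustrative value).
# The storage class backing the PVC must support volume expansion.
fluentbit:
  volume:
    pvcStorage: 50Gi
```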