NetBackup™ for Kubernetes Administrator's Guide
- Overview of NetBackup for Kubernetes
- Deploying and configuring the NetBackup Kubernetes operator
- Prerequisites for deploying the NetBackup Kubernetes operator
- Deploying the service package on the NetBackup Kubernetes operator
- Port specifications for Kubernetes operator deployment
- Upgrading the NetBackup Kubernetes operator
- Removing the NetBackup Kubernetes operator
- Configuring the NetBackup Kubernetes data mover
- Automated configuration of NetBackup protection for Kubernetes
- Customizing the Kubernetes workload
- Troubleshooting NetBackup servers with short names
- Data mover pod schedule mechanism support
- Accelerator storage class validation
- Deploying certificates on the NetBackup Kubernetes operator
- Managing Kubernetes assets
- Managing Kubernetes intelligent groups
- Managing Kubernetes policies
- Protecting Kubernetes assets
- Protecting an intelligent group
- Removing protection from an intelligent group
- Configuring a backup schedule
- Configuring backup options
- Configuring backups
- Configuring Auto Image Replication (AIR) and duplication
- Configuring storage units
- Volume mode support
- Configure application consistent backup
- Managing image groups
- Protecting Rancher-managed clusters in NetBackup
- Recovering Kubernetes assets
- About incremental backup and restore
- Enabling accelerator-based backup
- About NetBackup accelerator support for Kubernetes workloads
- Controlling the disk space reserved for track logs on the primary server
- Impact of storage class behavior on the accelerator
- About accelerator forced rescans
- Warnings and probable causes of accelerator backup failures
- Enabling FIPS mode in Kubernetes
- About OpenShift Virtualization support
- Troubleshooting Kubernetes issues
- Error during primary server upgrade: NBCheck fails
- Error during an old image restore: operation fails
- Persistent volume recovery API error
- Error during restore: final job status shows partial failure
- Error during restore to the same namespace
- Datamover pods exceeding the Kubernetes resource limit
- Error during restore: job fails on a highly loaded cluster
- Custom Kubernetes role created for specific clusters cannot view jobs
- OpenShift creates unselected empty PVCs when restoring applications installed from OperatorHub
- NetBackup Kubernetes operator becomes unresponsive if the PID limit is exceeded on the Kubernetes node
- Failure during the edit cluster operation in NetBackup Kubernetes 10.1
- Backup or restore fails for a large PVC
- Partial failure when restoring file-mode PVCs of a namespace to a different file system
- Restore from a backup copy fails with an image inconsistency error
- Connectivity checks between the NetBackup primary/media servers and Kubernetes servers
- Accelerator backup fails when the available space for the track log is insufficient
- Accelerator backup fails because the track log PVC creation failed
- Accelerator backup fails due to an invalid accelerator storage class
- An error occurs when the track log pod starts
- Data mover instance configuration fails for the track log PVC creation operation
- Error reading the track log storage class from the configmap file
Configure application consistent backup
Some pods run applications, such as databases, that require additional procedures to obtain application consistent backups.
Application consistent backups require a mechanism that understands the application metadata, its state in memory, and the persistent data that resides on the persistent storage. An application consistent backup across all of these Kubernetes resources helps the application restore to a healthy state and streamlines the recovery process. These procedures are not required if only a crash consistent backup is needed.
Each application vendor documents the steps to pause Input and Output (I/O) operations so that an application consistent snapshot can be taken. These steps vary from one application to another, so the procedures are custom by nature, and their content is the customer's responsibility.
For protecting Kubernetes workloads with NetBackup, the method to achieve application consistent snapshots is to apply application pod annotations that leverage backup hooks. Kubernetes annotations are simply metadata which can be applied to any Kubernetes resources. Hooks within Kubernetes are user-defined actions and can be any command or multiple commands. Within your Kubernetes infrastructure, apply these annotations and hooks to any application pod that requires a quiesce state.
Backup hooks are used for both pre (before the snapshot) and post (after the snapshot) processing. In the context of data protection, this usually means that a netbackup-pre-backup hook calls a quiesce procedure or command, and the netbackup-post-backup hook calls an un-quiesce procedure or command. Each set of hooks specifies the command, as well as the container where it is applied. Note that the commands are not executed within a shell on the containers; therefore, the examples use the full path to each command.
Identify the applications that require application consistent backups and apply the annotation with a set of backup hooks as part of the configuration for Kubernetes data protection.
To add an annotation to a pod, use the Kubernetes user interface (UI). Alternatively, use the kubectl annotate function on the Kubernetes cluster console for a specific pod or label. The methods to apply annotations may vary depending on the distribution, so the following examples focus on the kubectl command because of its wide availability in most distributions.
Additionally, annotations can be added to the base Kubernetes objects, such as the deployment or replica set resources to ensure the annotations are included in any newly deployed pods. The Kubernetes administrator can update annotations dynamically.
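As a sketch of that approach, the following assumes a hypothetical deployment named nginx-app in the web namespace; the patch adds the hook annotations to the pod template so that newly created pods inherit them. The payload can be validated locally before the kubectl patch command (shown commented out) is run against a cluster.

```shell
# Hook annotations added to spec.template.metadata.annotations of a deployment
# propagate to every pod the deployment creates. Deployment name "nginx-app"
# and namespace "web" are assumptions for illustration only.
PATCH='{"spec":{"template":{"metadata":{"annotations":{
  "netbackup-pre.hook.backup.velero.io/command":"[\"/sbin/fsfreeze\",\"--freeze\",\"/var/log/nginx\"]",
  "netbackup-post.hook.backup.velero.io/command":"[\"/sbin/fsfreeze\",\"--unfreeze\",\"/var/log/nginx\"]"}}}}}'
# Validate the patch payload locally before touching the cluster.
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"
# kubectl patch deployment nginx-app -n web --type merge -p "$PATCH"
```

Patching the pod template (rather than annotating a running pod) triggers a rollout, so new pods carry the hooks without any manual re-annotation.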
Labels are key-value pairs that are attached to Kubernetes objects, such as pods or services. Labels are used as attributes for objects that are meaningful and relevant to the user. Labels can be attached to objects at creation time and subsequently added and modified at any time. Kubernetes offers integrated support for using these labels to query objects and perform bulk operations on selected subsets. Each object can have a set of key-value labels defined, and each key must be unique for a given object.
As an example of formatting and syntax of the label metadata:
"metadata": {"labels": {"key1":"value1","key2":"value2"}}
Either specify the pod name directly, or a label that applies to the desired group of pods. If multiple annotation arguments are used, specify them in correct JSON format, such as a JSON array: '["item1","item2","itemn"]'
# kubectl annotate pod [ {pod_name} | -l {label=value}] -n {the-pods-namespace_name} [annotation syntax - see following]
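Because the hook value must be a valid JSON array of strings, it can be checked locally before kubectl annotate is run. A minimal sketch, assuming python3 is available on the workstation:

```shell
# Piping the hook command value through "python3 -m json.tool" catches
# quoting and escaping mistakes before kubectl is involved.
HOOK_CMD='["/bin/bash", "-c", "mongo --eval \"db.fsyncLock()\""]'
echo "$HOOK_CMD" | python3 -m json.tool > /dev/null && echo "valid JSON array"
```

If the value parses cleanly here, the same single-quoted string can be passed unchanged as the annotation value on the kubectl command line.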
This method can be combined with && to join multiple commands if an application requires more than one command to achieve the desired result. The commands specified are not provided by Cohesity; the user must customize them for the application pod. Replace {values} with the actual names used in your environment.
Note:
All kubectl commands must be defined on a single line. Be careful when you copy or paste the following examples.
After upgrading to NetBackup 10.2, update the annotations to these new netbackup-pre and netbackup-post backup hooks that now include the "netbackup" prefix:
netbackup-pre.hook.backup.velero.io/command
netbackup-pre.hook.backup.velero.io/container
netbackup-post.hook.backup.velero.io/command
netbackup-post.hook.backup.velero.io/container
Following are the commands to lock and unlock a MongoDB 4.2.23 database:
# mongo --eval "db.fsyncLock()"
# mongo --eval "db.fsyncUnlock()"
This translates into the following single command to set both the pre and post backup hooks for MongoDB. Note the special syntax to escape special characters, as well as the brackets ([]), single and double quotes, and commas (,) used as part of the JSON format:
# kubectl annotate pod {mongodb-pod-name} -n {mongodb namespace} netbackup-pre.hook.backup.velero.io/command='["/bin/bash", "-c", "mongo --eval \"db.fsyncLock()\""]' netbackup-pre.hook.backup.velero.io/container={mongodb-pod-name} netbackup-post.hook.backup.velero.io/command='["/bin/bash","-c","mongo --eval \"db.fsyncUnlock()\""]' netbackup-post.hook.backup.velero.io/container={mongodb-pod-name}
Following are the commands to quiesce and un-quiesce the MySQL database:
# mysql -uroot -ppassword -e "flush tables with read lock"
# mysql -uroot -ppassword -e "unlock tables"
This translates into the following single command to set both the pre and post backup hooks for MySQL. In this example, we used a label instead of a pod name, so the label can annotate multiple pods at once. Note the special syntax to escape special characters, as well as the brackets ([]), single and double quotes, and commas (,) used as part of the JSON format:
# kubectl annotate pod -l label=value -n {mysql namespace} netbackup-pre.hook.backup.velero.io/command='["/bin/bash", "-c", "mysql -uroot -ppassword -e \"flush tables with read lock\""]' netbackup-pre.hook.backup.velero.io/container={mysql container name} netbackup-post.hook.backup.velero.io/command='["/bin/bash", "-c", "mysql -uroot -ppassword -e \"unlock tables\""]' netbackup-post.hook.backup.velero.io/container={mysql container name}
Following are the commands to quiesce and un-quiesce the PostgreSQL database:
# psql -U postgres -c "SELECT pg_start_backup('tagvalue');"
# psql -U postgres -c "SELECT pg_stop_backup();"
This translates into the following single command to set both the pre and post backup hooks for Postgres. In this example, we used a label instead of a pod name, so the label can annotate multiple matching pods at once. Labels can be applied to any Kubernetes object, and in this case, we use them to modify a specific container and select only certain pods. Note the special syntax to escape special characters, as well as the brackets ([]), single and double quotes, and commas (,) used as part of the JSON format:
# kubectl annotate pod -l app=app-postgresql -n {postgres namespace} netbackup-pre.hook.backup.velero.io/command='["/bin/bash", "-c", "psql -U postgres -c \"SELECT pg_start_backup(quote_literal($EPOCHSECONDS));\""]' netbackup-pre.hook.backup.velero.io/container={postgres container name} netbackup-post.hook.backup.velero.io/command='["/bin/bash", "-c", "psql -U postgres -c \"SELECT pg_stop_backup();\""]' netbackup-post.hook.backup.velero.io/container={postgres container name}
Following are the commands to quiesce and un-quiesce the Nginx application:
# /sbin/fsfreeze --freeze /var/log/nginx
# /sbin/fsfreeze --unfreeze /var/log/nginx
This translates into the following single command to set both the pre and post backup hooks for NGINX. In this example, the container annotations are omitted, so the hooks apply to the first container in the pod by default. Note the special syntax to escape special characters, as well as the brackets ([]), single and double quotes, and commas (,) used as part of the JSON format:
# kubectl annotate pod {nginx-pod-name} -n {nginx namespace} netbackup-pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' netbackup-post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]'
Following are the commands to quiesce and un-quiesce the Cassandra database:
# nodetool flush
# nodetool verify
This translates into the following single command to set both the pre and post backup hooks for Cassandra. Note the special syntax to escape special characters, as well as the brackets ([]), single ('') and double ("") quotes, and commas (,) used as part of the JSON format:
# kubectl annotate pod {cassandra-pod} -n {Cassandra namespace} netbackup-pre.hook.backup.velero.io/command='["/bin/bash", "-c", "nodetool flush"]' netbackup-pre.hook.backup.velero.io/container={cassandra-pod} netbackup-post.hook.backup.velero.io/command='["/bin/bash", "-c", "nodetool verify"]' netbackup-post.hook.backup.velero.io/container={cassandra-pod}
Note:
The examples provided are only an initial guide; meeting the specific requirements of each workload calls for collaboration among the backup, workload, and Kubernetes administrators.
At present, Kubernetes does not support an on-error hook. If the user-specified command fails, the backup snapshot does not proceed.
The default timeout for the command to return an exit status is 30 seconds, but this value can be changed by adding the following hooks as annotations to the pods:
netbackup-pre.hook.backup.velero.io/timeout=#in-seconds#
netbackup-post.hook.backup.velero.io/timeout=#in-seconds#
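For example, with a hypothetical pod mysql-0 in a db namespace, the timeout could be raised to 120 seconds. This sketch only prints the kubectl command for review; remove the leading echo to apply it against a cluster.

```shell
# Build and preview the annotate command. The timeout value is in seconds,
# per the note above. Pod "mysql-0" and namespace "db" are illustrative names.
POD=mysql-0; NS=db
echo kubectl annotate pod "$POD" -n "$NS" \
  netbackup-pre.hook.backup.velero.io/timeout=120 \
  netbackup-post.hook.backup.velero.io/timeout=120
```

As with the hook commands, both annotations must be supplied on a single kubectl line, and a label selector (-l label=value) can replace the pod name to update several pods at once.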