NetBackup™ Deployment Guide for Azure Kubernetes Services (AKS) Cluster
- Introduction to NetBackup on AKS
- Deployment with environment operators
- Assessing cluster configuration before deployment
- Deploying NetBackup
- About primary server CR and media server CR
- Upgrading NetBackup
- Deploying Snapshot Manager
- Migration and upgrade of Snapshot Manager
- Deploying MSDP Scaleout
- Upgrading MSDP Scaleout
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager deployment
- Managing the Load Balancer service
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- About MSDP Scaleout maintenance
- Uninstalling MSDP Scaleout from AKS
- Uninstalling Snapshot Manager
- Troubleshooting
- Appendix A. CR template
Migration and upgrade of Snapshot Manager
Users can manually migrate a Snapshot Manager that is registered with NetBackup to the Azure Kubernetes Service (AKS) cluster environment by performing the following steps:
1. Disable Snapshot Manager from NetBackup.
2. Stop the services on the Snapshot Manager VM.
3. Create and attach a disk to the VM, to be used as the PV for MongoDB:
Note:
The size of the newly attached disk must not be less than the size of the /cloudpoint/mongodb folder. The disk must be in the same region as the Snapshot Manager VM. The disk must not be partitioned.
- Copy the contents of /cloudpoint/mongodb to the new disk.
- Note down the resource ID of this disk, which can be obtained from the Azure portal or the az CLI. Format of the resource ID:
/subscriptions/<subscription id>/resourceGroups/<disk RG name>/providers/Microsoft.Compute/disks/<disk name>
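The following is a minimal sketch of this step using the az CLI and standard Linux tools; the device path and mount point are illustrative and must be adapted to your environment:

# Create an unpartitioned managed disk in the same region as the Snapshot Manager VM
az disk create --resource-group <disk RG name> --name <disk name> --size-gb <size at least that of /cloudpoint/mongodb> --location <VM region>
# Attach the disk to the Snapshot Manager VM
az vm disk attach --resource-group <VM RG name> --vm-name <VM name> --name <disk name>
# On the VM: create a filesystem directly on the raw device (no partition), mount it, and copy the data
sudo mkfs.ext4 /dev/<new device>
sudo mkdir -p /mnt/newdisk
sudo mount /dev/<new device> /mnt/newdisk
sudo cp -a /cloudpoint/mongodb/. /mnt/newdisk/
# Retrieve the resource ID of the disk
az disk show --resource-group <disk RG name> --name <disk name> --query id --output tsv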
4. Copy the flexsnap.conf and bp.conf configuration files to the VM from where the cluster is accessible.
5. Detach the disk from the VM and move it to the cluster resource group (RG): MC_<clusterRG>_<cluster name>_<cluster_region>
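A hedged sketch of the detach and move, again using the az CLI; the target resource group is the node resource group that AKS creates, so verify its name before moving the disk:

# Detach the disk from the Snapshot Manager VM
az vm disk detach --resource-group <VM RG name> --vm-name <VM name> --name <disk name>
# Confirm the node resource group of the cluster (MC_<clusterRG>_<cluster name>_<cluster_region>)
az aks show --resource-group <clusterRG> --name <cluster name> --query nodeResourceGroup --output tsv
# Move the disk into that resource group
az resource move --destination-group <node resource group> --ids <Resource ID of the disk>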
6. From the VM, perform the following steps:
- Create configuration maps using the following commands:
kubectl create cm agentconf --from-file=<path to flexsnap.conf> -n <application namespace>
kubectl create cm nbuconf --from-file=<path to bp.conf> -n <application namespace>
- Create the Snapshot Manager secret using the following command:
kubectl create secret generic cp-creds --from-literal=username='<username>' --from-literal=password='<password>' -n <application namespace>
- Create the mongodb Persistent Volume and Persistent Volume Claim as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv name>
spec:
  capacity:
    storage: <Size of the disk>
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <Storage class name>
  claimRef:
    name: mongodb-pvc
    namespace: <environment namespace>
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: <Resource ID of the disk>
    volumeAttributes:
      fsType: <FS type>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <Disk size>
  volumeName: <PV-name>
  storageClassName: <storage class name>
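Assuming the two manifests above are saved in a single file (the file name mongodb-pv-pvc.yaml is illustrative), apply them into the application namespace:

kubectl apply -f mongodb-pv-pvc.yaml -n <application namespace>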
Note:
After applying the above YAMLs, ensure that the newly created PVC is bound to the PV:
kubectl get pvc -n <application namespace> | grep mongodb
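The PVC should report the Bound status; the output looks similar to the following (names and sizes are illustrative):

mongodb-pvc   Bound   <pv name>   <Disk size>   RWO   <storage class name>   1m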
7. Ensure that the flexsnap-operator is deployed and running before applying the Snapshot Manager section with the environment.yaml file.
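One way to verify this (assuming the operator runs in the netbackup-operator-system namespace; adjust to your deployment):

kubectl get pods -n netbackup-operator-system | grep flexsnap-operator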
8. Edit the environment.yaml file to upgrade NetBackup (primary/media/MSDP) and add a section for Snapshot Manager as follows:
cpServer:
  - name: cp-cluster-deployment
    containerRegistry: acr.azurecr.io
    credential:
      secretName: cp-creds
    networkLoadBalancer:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      ipAddr: 1.2.3.4
      fqdn: cpserver.example.com
    storage:
      log:
        capacity: 10Gi
        storageClassName: standard
      data:
        capacity: <Disk size mentioned in mongodb pv>
        storageClassName: <Storage class name mentioned in mongodb pv>
    nodeSelector:
      controlPlane:
        nodepool: cpcontrol1
        labelKey: cp-node-label
        labelValue: cpcontrol1
      dataPlane:
        nodepool: cpdata1
        labelKey: cp-node-label
        labelValue: cpdata1
9. Apply the environment.yaml file using the following command:
kubectl apply -f <path to environment.yaml>
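After the apply, you can watch the Snapshot Manager pods come up in the application namespace:

kubectl get pods -n <application namespace> -w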
10. Re-register the Snapshot Manager from the Web UI if the Snapshot Manager name (FQDN/IP) is the same as in the VM deployment.
Note:
Users can update a few parameters on the existing deployed Snapshot Manager by making changes in the cpServer section of the environment.yaml file and applying it. Only the data.capacity and log.capacity fields can be changed in the cpServer section of the CR. For the update operation to work, set the value of the allowVolumeExpansion parameter to true in the storage classes used.
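For reference, a minimal Azure disk CSI storage class with volume expansion enabled might look like the following (the SKU shown is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage class name>
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
allowVolumeExpansion: true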
Note:
If there is a change in the tag field, it is considered a Snapshot Manager upgrade. During an upgrade, only a few parameters related to Snapshot Manager can be modified.
The following table lists the parameters that can be modified during update/upgrade of Snapshot Manager:
| Parameters | Edit during update | Edit during upgrade |
|---|---|---|
| resourceNamePrefix | No | No |
| tag | No | Yes |
| containerRegistry | No | Yes |
| credential: secretName | No | No |
| networkLoadBalancer: annotations | No | Yes |
| networkLoadBalancer: fqdn | No | No |
| networkLoadBalancer: ipAddr | No | Yes |
| data.capacity | Yes | Yes |
| data.storageClassName | No | No |
| log.capacity | Yes | Yes |
| log.storageClassName | No | No |
| cpServer.nodeSelector.controlPlane | No | Yes |
| cpServer.nodeSelector.dataPlane | No | Yes |
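For example, a minimal upgrade edit would change only the tag (and, if required, the containerRegistry) in the cpServer section and re-apply the file. The placement of the tag field shown here is an assumption based on the parameter table above, and the values are illustrative:

cpServer:
  - name: cp-cluster-deployment
    containerRegistry: acr.azurecr.io    # editable during upgrade
    tag: <new Snapshot Manager version>  # changing the tag triggers the upgrade
    # ...remaining cpServer fields unchanged...

kubectl apply -f <path to environment.yaml>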