Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional Prerequisites for Azure RedHat OpenShift (ARO)
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing the Special Resource Operator
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on Kubernetes
- Installing InfoScale by using the plugin
- Undeploying and uninstalling InfoScale
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Installing InfoScale DR Manager by using YAML
This section describes how to install and configure Disaster Recovery for your InfoScale cluster by using YAML.
Note:
When you download, unzip, and untar YAML_8.0.200.tar.gz, all files required for installation are available.
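For example, the archive can be extracted on the bastion node with a standard command such as the following; the location of the downloaded archive is an assumption, so adjust the path as needed.
tar -xzf YAML_8.0.200.tar.gz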
Complete the following steps to install the InfoScale DR Manager on the source and the target DR cluster.
Creating application user role
- Run the following command on the bastion node.
oc apply -f YAML/DR/infoscale-dr-admin-role.yaml
- Copy the following content to YAML/DR/ClusterRoleBinding.yaml.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: infoscaledr-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: infoscaledr-admin-role
subjects:
- kind: User
  name: postgres-admin
  apiGroup: rbac.authorization.k8s.io
This is an example of a ClusterRoleBinding.
- Run the following command on the bastion node.
oc apply -f YAML/DR/ClusterRoleBinding.yaml
For DR controller installation and Global Cluster Membership (GCM) configuration, you must use the kubeadmin role only. To configure a DR plan and data replication, you can use this role or the kubeadmin role.
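Optionally, you can confirm that the role and the binding from the previous steps exist before you proceed. The following commands are illustrative; infoscaledr-admin-role is the ClusterRole name referenced in the roleRef of the example above, and infoscaledr-role-binding is the binding name from the same example.
oc get clusterrole infoscaledr-admin-role
oc get clusterrolebinding infoscaledr-role-binding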
- Run the following command on the bastion node of each cluster.
oc apply -f /YAML/DR/OpenShift/dro_deployment.yaml
- Wait until the command execution is complete.
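If you want to track the deployment progress, you can optionally use a command similar to the following. The deployment name infoscale-dr-manager is assumed to match the deployment created by dro_deployment.yaml; it is the same name that the expose command later in this procedure uses.
oc -n infoscale-vtas rollout status deployment/infoscale-dr-manager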
- Run the following command on the bastion node to verify if the deployment is successful.
oc -n infoscale-vtas get pods
Check the STATUS column in an output similar to the following.
NAME                        READY   STATUS    RESTARTS   AGE
infoscale-dr-manager-xxxx   1/1     Running   0          114m
The status must change from ContainerCreating to Running.
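If the pod remains in the ContainerCreating state for a long time, you can watch its progress or inspect its events with commands such as the following. Replace infoscale-dr-manager-xxxx with the actual pod name from the previous output.
oc -n infoscale-vtas get pods -w
oc -n infoscale-vtas describe pod infoscale-dr-manager-xxxx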
- Run the following commands to configure an ExternalIP for the DR pod.
Note:
Run these steps only if you want to use Metallb as the load balancer. If you choose any other load balancer, refer to its documentation for installation and configuration.
oc -n infoscale-vtas expose deployment infoscale-dr-manager --name my-lb-service --type LoadBalancer --protocol TCP --port 14155 --target-port 14155
Here, the DR controller uses port 14155 internally to communicate across peer clusters. After a successful installation and configuration, you can verify the service by running the following command.
- oc get svc my-lb-service
An output similar to the following indicates that the installation and configuration are successful.
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)           AGE
my-lb-service   LoadBalancer   <IP address>   <IP address>   14155:14155/TCP   13h
Run this command on both the clusters and verify that the installation and configuration are successful. Verify whether the EXTERNAL-IP is accessible from one cluster to the other cluster.
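One way to check reachability is to test the TCP connection to the DR controller port from a node of the peer cluster, for example with netcat. This is an illustrative command, not part of the product tooling; replace <IP address> with the EXTERNAL-IP reported by the service.
nc -zv <IP address> 14155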