Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional Prerequisites for Azure RedHat OpenShift (ARO)
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
- Installing Veritas InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Creating ephemeral volumes
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Migrating applications to InfoScale
- Troubleshooting
Additional requirements for replication on Cloud
If any of your sites is on the Cloud, you must configure a load balancer service. Ensure that you have specified the cloud service for cloudVendor or remoteCloudVendor while configuring data replication.
A load balancer can be used for both managed and non-managed clouds. lbEnabled and remoteLbEnabled are set in the datareplication yaml. The default value is false; set it to true only when network traffic goes over the load balancer.
In this configuration, the load balancer Virtual IP address must be provided as the HostAddress (local and/or remote) in the datareplication yaml. The prerequisite for this feature is that the load balancer Kubernetes service must have the selector cvmaster: "true" in its spec. A sample load balancer service is provided in the steps that follow.
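For orientation, the sketch below shows roughly how these parameters fit together in a data replication yaml. The apiVersion, kind, HostAddress field names, and example values are placeholders rather than the authoritative schema; use the DataReplication samples shipped with InfoScale DR Manager for the exact field names.
# Illustrative fragment only; apiVersion, kind, and the HostAddress field names
# are placeholders. cloudVendor, remoteCloudVendor, lbEnabled, and remoteLbEnabled
# are the parameters described above.
apiVersion: infoscale.veritas.com/v1                    # placeholder
kind: DataReplication                                   # placeholder
metadata:
  name: example-datarep
spec:
  cloudVendor: azure                                    # example value; cloud service of the local site
  remoteCloudVendor: azure                              # example value; cloud service of the remote site
  lbEnabled: true                                       # true only when local traffic goes over a load balancer
  remoteLbEnabled: true                                 # true only when remote traffic goes over a load balancer
  localHostAddress: <local load balancer Virtual IP>    # placeholder field name
  remoteHostAddress: <remote load balancer Virtual IP>  # placeholder field name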
Note:
The TCP/UDP ports that Veritas Volume Replicator (VVR) uses must be open on all worker nodes of the cluster to enable communication between the primary and secondary sites. See Choosing the network ports used by VVR on the Veritas support portal.
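As a quick sanity check, you can probe the TCP ports referenced in this section (4145, 8199, and 8989) from a worker node on one site toward the other site's address, as in the bash sketch below; <remote_host> is a placeholder, UDP reachability cannot be verified this way, and the complete port list is in the VVR ports article.
# Probe TCP reachability of the VVR ports used in this section.
# Replace <remote_host> with the remote HostAddress before running.
for p in 4145 8199 8989; do
  timeout 3 bash -c "</dev/tcp/<remote_host>/$p" && echo "TCP port $p reachable" || echo "TCP port $p not reachable"
done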
Note:
Perform the following steps only if you want to use a Load balancer for data replication.
- Copy the following content into a yaml file and save the file as /YAML/DR/vvr-lb-svc.yaml.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: vvr-lb-svc
  namespace: infoscale-vtas
spec:
  allocateLoadBalancerNodePorts: true
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcpportone
    port: 4145
    protocol: TCP
    targetPort: 4145
  - name: tcpporttwo
    port: 8199
    protocol: TCP
    targetPort: 8199
  - name: tcpportthree
    port: 8989
    protocol: TCP
    targetPort: 8989
  selector:
    cvmaster: "true"
  sessionAffinity: None
  type: LoadBalancer
- Run the following command on the master node.
kubectl apply -f /YAML/DR/vvr-lb-svc.yaml
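If the service is created successfully, kubectl typically responds with output similar to the following (exact wording may vary by kubectl version).
service/vvr-lb-svc created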
- Run the following command on the master node.
kubectl get svc vvr-lb-svc -n infoscale-vtas
The Load Balancer IP address is returned in the output, under the EXTERNAL-IP column.
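The output resembles the following; the IP addresses, node ports, and age shown here are illustrative.
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                                        AGE
vvr-lb-svc   LoadBalancer   10.0.121.45   10.1.2.7      4145:31502/TCP,8199:30078/TCP,8989:32411/TCP   2m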
- Update the Load Balancer IP address in the following content, copy it into a yaml file, and save the file as /YAML/DR/loadbalancer.yaml. Veritas Volume Replicator (VVR) requires both TCP and UDP ports. Hence, a load balancer service with mixed protocol support (TCP and UDP) is needed.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "worker"
  name: vvr-lb-svc
  namespace: infoscale-vtas
spec:
  loadBalancerIP: <Load Balancer IP address>
  allocateLoadBalancerNodePorts: true
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcpportone
    port: 4145
    protocol: UDP
    targetPort: 4145
  - name: tcpporttwo
    port: 8199
    protocol: UDP
    targetPort: 8199
  - name: tcpportthree
    port: 8989
    protocol: UDP
    targetPort: 8989
  selector:
    cvmaster: "true"
  sessionAffinity: None
  type: LoadBalancer
- Run the following command on the master node.
kubectl apply -f /YAML/DR/loadbalancer.yaml
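Optionally, rerun the service query to confirm that the updated definition is in effect, that is, the EXTERNAL-IP matches the loadBalancerIP you specified and the ports are now listed with the UDP protocol.
kubectl get svc vvr-lb-svc -n infoscale-vtas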