Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional Prerequisites for Azure RedHat OpenShift (ARO)
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing the Special Resource Operator
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on Kubernetes
- Installing InfoScale by using the plugin
- Undeploying and uninstalling InfoScale
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Additional requirements for replication on Cloud
If any of your sites is on the cloud, you must configure a load balancer service. Ensure that you have specified the cloud vendor for cloudVendor or remoteCloudVendor while configuring data replication.
A load balancer can be used for both managed and non-managed clouds. The lbEnabled and remoteLbEnabled parameters are set in the datareplication yaml. The default value for both is false; set a parameter to true only when network traffic to that site goes over a load balancer. For example, if the primary site is on premises and the secondary site is on AKS with a load balancer at the front end, lbEnabled must be set to false and remoteLbEnabled must be set to true. In this configuration, the load balancer virtual IP address must be provided as the HostAddress (local and/or remote) in the datareplication yaml. As a prerequisite for this feature, the load balancer Kubernetes network service must have the following selector in its spec: cvmaster: "true". The sample file below is an example of such a load balancer service.
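The fragment below is only an illustrative sketch of where these parameters fit for the on-premises primary / AKS secondary example above. The exact structure of the DataReplication yaml is documented in the disaster recovery chapters of this guide; localHostAddress and remoteHostAddress are placeholder spellings for the local and remote HostAddress fields, and the angle-bracket values must be replaced with values for your environment.

  # Illustrative sketch only - not the complete DataReplication yaml.
  # localHostAddress / remoteHostAddress are placeholder field names for the
  # local and remote HostAddress entries; replace the angle-bracket values.
  spec:
    cloudVendor: <local cloud vendor>          # local (primary) site
    remoteCloudVendor: <remote cloud vendor>   # remote (secondary) site
    lbEnabled: false                           # local site is on premises; no load balancer
    remoteLbEnabled: true                      # remote site (AKS) sits behind a load balancer
    localHostAddress: <local host address>
    remoteHostAddress: <load balancer virtual IP of the remote site>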
Note:
The TCP/UDP ports that Veritas Volume Replicator (VVR) uses must be open on all worker nodes of the cluster to enable communication between the primary and secondary sites. See Choosing the network ports used by VVR on the Veritas support portal.
- Update and copy the following content into a yaml file and save it as /YAML/DR/loadbalancer.yaml. Set protocol to TCP or UDP for each port. Veritas Volume Replicator (VVR) requires both TCP and UDP; hence, a load balancer service with mixed protocol support (TCP and UDP) is needed.

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "worker"
    name: vvr-lb-svc
    namespace: infoscale-vtas
  spec:
    loadBalancerIP: 172.20.2.9
    allocateLoadBalancerNodePorts: true
    externalTrafficPolicy: Cluster
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: tcpportone
      port: 4145
      protocol: <TCP or UDP>
      targetPort: 4145
    - name: tcpporttwo
      port: 8199
      protocol: <TCP or UDP>
      targetPort: 8199
    - name: tcpportthree
      port: 8989
      protocol: <TCP or UDP>
      targetPort: 8989
    selector:
      cvmaster: "true"
    sessionAffinity: None
    type: LoadBalancer

- Run the following command on the bastion node.
kubectl apply -f /YAML/DR/loadbalancer.yaml
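After the service is applied, you can verify that the cloud provider has provisioned the load balancer and note the assigned IP address. The service name and namespace below match the sample yaml above; adjust them if you changed the sample.

  kubectl get service vvr-lb-svc -n infoscale-vtas

The IP address reported for the service is the load balancer virtual IP to provide as the HostAddress in the datareplication yaml.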