Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
  - Introduction
  - Prerequisites
  - Additional Prerequisites for Azure Red Hat OpenShift (ARO)
  - Considerations for configuring a cluster or adding nodes to an existing cluster
  - Installing InfoScale on a system with Internet connectivity
  - Installing InfoScale in an air-gapped system
- Installing Veritas InfoScale on Kubernetes
  - Introduction
  - Prerequisites
  - Installing the Special Resource Operator
  - Tagging the InfoScale images on Kubernetes
  - Applying licenses
  - Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
  - Considerations for configuring a cluster or adding nodes to an existing cluster
  - Installing InfoScale on Kubernetes
  - Installing InfoScale by using the plugin
  - Undeploying and uninstalling InfoScale
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
  - CSI plugin deployment
  - Raw block volume support
  - Static provisioning
  - Dynamic provisioning
  - Resizing Persistent Volumes (CSI volume expansion)
  - Snapshot provisioning (Creating volume snapshots)
  - Managing InfoScale volume snapshots with Velero
  - Volume cloning
  - Using InfoScale with non-root containers
  - Using InfoScale in SELinux environments
  - CSI Drivers
  - Creating CSI Objects for OpenShift
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
I/O fencing
InfoScale uses majority-based and disk-based I/O fencing to guarantee data protection and provide persistent storage in the Container environment. Fencing ensures that data protection gets the highest priority: when a split-brain condition is encountered, fencing stops the affected systems so that they cannot start services, and data remains protected. InfoScale periodically checks connectivity with each peer node, while OpenShift or Kubernetes checks connectivity between the master and worker nodes.
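For example, you can check InfoScale heartbeat connectivity with the peer nodes by querying the LLT module; a generic sketch follows (in a containerized deployment, run the command from within an InfoScale pod):

    # lltstat -nvv

Links reported in the UP state indicate healthy heartbeat connectivity with the corresponding peer node.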
The OpenShift or Kubernetes cluster fails over or restarts applications for nodes that have reached the NotReady state. If an application is configured as a StatefulSet pod, the container orchestrator does not fail over such application pods until the node becomes active again. In such scenarios, InfoScale uses the fencing module to ensure that application pods running on unreachable nodes cannot access the persistent storage, so that OpenShift or Kubernetes can restart these pods on the active cluster without the risk of data corruption.
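You can observe this behavior with kubectl; the following sketch assumes an unreachable worker node with a StatefulSet pod scheduled on it (object names are illustrative):

    # kubectl get nodes
    # kubectl get pods -o wide

A node that has lost connectivity is reported as NotReady, and the StatefulSet pods that were running on it are not rescheduled until the node recovers or the fencing module confirms that the node is isolated from the persistent storage.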
When an InfoScale cluster is deployed on OpenShift or Kubernetes, InfoScale uses a custom fencing controller to provide the fencing infrastructure. The custom controller interacts with the InfoScale fencing driver and enables failover in OpenShift or Kubernetes in case of a network split. An agent running on the controller ensures that InfoScale fences out the persistent storage and performs the pod failover for the fenced-out node. It also ensures that the fencing decisions of InfoScale I/O fencing do not conflict with those of the fencing controller.
For deployment in containerized environments, when you install InfoScale by using the product installer, the fencing module is automatically installed and configured in majority mode by default. Default majority-based fencing can be used with the default Flexible Shared Storage (FSS) disk group configuration. In case of a network split, the I/O fencing module takes fencing decisions based on the number of nodes in each sub-cluster: the sub-cluster that retains the majority of nodes survives. If a majority of the nodes are down, all the nodes in the cluster are brought down.
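To confirm the fencing mode on a running cluster, you can query the fencing driver with the vxfenadm utility; a minimal sketch, assuming an InfoScale pod name and namespace (both illustrative):

    # kubectl exec -it <infoscale-pod> -n <infoscale-namespace> -- vxfenadm -d

The Fencing Mode field in the output indicates whether majority-based or disk-based (SCSI-3) fencing is active, along with the membership state of each node.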
If you are using fully shared storage, where all disks are connected to all the nodes, Veritas recommends disk-based fencing. With disk-based fencing, the cluster stays up even with a single functional node.
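In a classic (non-containerized) InfoScale cluster, disk-based fencing is backed by a coordinator disk group containing an odd number of SCSI-3 PR-compliant disks; the following is a minimal sketch with placeholder disk names, and the exact steps for a containerized deployment may differ:

    # vxdisksetup -i disk1
    # vxdisksetup -i disk2
    # vxdisksetup -i disk3
    # vxdg -o coordinator=on init vxfencoorddg disk1 disk2 disk3

An odd number of coordinator disks (typically three) ensures that exactly one sub-cluster can win a majority of the coordination points during a network split.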