Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional Prerequisites for Azure RedHat OpenShift (ARO)
- Considerations for configuring a cluster or adding nodes to an existing cluster
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing the Special Resource Operator
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
- Considerations for configuring a cluster or adding nodes to an existing cluster
- Installing InfoScale on Kubernetes
- Installing InfoScale by using the plugin
- Undeploying and uninstalling InfoScale
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Considerations for configuring a cluster or adding nodes to an existing cluster
You can specify up to 16 worker nodes in cr.yaml. Although cluster configuration is allowed even with one Network Interface Card, Veritas recommends a minimum of two physical links for performance and High Availability (HA). The number of network links must be the same on all nodes. Optionally, you can enter node-level IP addresses, as shown in the sketch below; if IP addresses are not provided, the IP addresses of the Kubernetes cluster nodes are used.
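The following is a minimal sketch of how the worker node entries in cr.yaml might look. The kind, apiVersion, and field names (nodes, nodeName, ip) are illustrative assumptions rather than the exact InfoScale schema; refer to the sample cr.yaml shipped with the product for the authoritative key names.

    apiVersion: infoscale.veritas.com/v1    # assumption: illustrative group/version
    kind: InfoScaleCluster                  # assumption: illustrative kind
    metadata:
      name: infoscale-sample
    spec:
      # Up to 16 worker nodes can be listed.
      nodes:
        - nodeName: worker-node-1           # hypothetical field names
          ip: 10.20.30.41                   # optional node-level IP; defaults to the node's Kubernetes IP
        - nodeName: worker-node-2
          ip: 10.20.30.42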
By default, a Flexible Storage Sharing (FSS) disk group is created. If shared storage is the only storage available across nodes, you can set isSharedStorage to true so that a fully shared non-FSS disk group is created while the cluster is configured. Mirroring is performed across enclosures, thus ensuring redundancy.

While using shared storage, Veritas recommends changing the default majority-based fencing to disk-based fencing. With shared storage and majority-based fencing, storage becomes inaccessible when a majority of the nodes go down, even though all disks are connected to all the nodes. With disk-based fencing, storage remains available to applications even when only one node is up. To configure disk-based fencing, specify the hardware paths of the fencing disks for at least one node, as in the sketch below. Veritas recommends using at least three disks for fencing purposes, and the number of disks must be odd.
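A minimal sketch of the shared-storage and fencing settings in cr.yaml follows. The isSharedStorage parameter is named in this section; the fencingDisks key and the disk paths are illustrative assumptions, so check the sample cr.yaml for the exact keys.

    spec:
      isSharedStorage: true                 # create a fully shared non-FSS disk group
      # Disk-based fencing: at least three disks, odd in number (hypothetical key name).
      fencingDisks:
        - /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0
        - /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:2:0
        - /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:3:0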
You can enable encryption at the disk group level or for specific Volumes within the disk group. Encryption is disabled by default; set encryption to true to enable it. To use the same encryption key for all Volumes, set sameEnckey to true; for a different encryption key per Volume, set sameEnckey to false. See the sketch that follows the note below.
Note:
For Disaster Recovery (DR) configuration, only Volume level encryption is supported. Disk group level encryption is not supported.
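A minimal sketch of the encryption settings follows. The encryption and sameEnckey parameters are named above; their placement in the spec is an illustrative assumption.

    spec:
      encryption: true      # disabled by default
      sameEnckey: true      # one key for all Volumes; set to false for a key per Volume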