Veritas InfoScale™ for Kubernetes Environments 8.0.100 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift - Introduction
- Prerequisites
- Additional Prerequisites for Azure RedHat OpenShift (ARO)
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
 
- Installing Veritas InfoScale on Kubernetes
- Tech Preview: Configuring KMS-based Encryption on an OpenShift cluster
- Tech Preview: Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment - CSI plugin deployment
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
 
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Troubleshooting
Prerequisites
- Be ready with the following information:
  - Names of all the nodes.
    Note: Run kubectl get nodes -o wide on the master node to obtain the names and IP addresses of the nodes. Use NAME and INTERNAL-IP from output similar to the following:
    NAME                 STATUS ROLES  AGE VERSION         INTERNAL-IP
    k8s-cp-1.lab.k8s.lan Ready  master 75d v1.20.0+558d959 192.168.22.201
    k8s-cp-2.lab.k8s.lan Ready  master 75d v1.20.0+558d959 192.168.22.202
    k8s-cp-3.lab.k8s.lan Ready  master 75d v1.20.0+558d959 192.168.22.203
    k8s-w-1.lab.k8s.lan  Ready  worker 75d v1.20.0+558d959 192.168.22.211
  - Operating system device paths of any disks managed by other storage vendors that must be excluded from the InfoScale disk group.
  - Optionally, device paths of the boot disks if you want to exclude them.
    Note: Veritas recommends excluding boot disks.
  - Address of the custom registry to which the InfoScale images are pushed.
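The node names and internal IP addresses can also be collected in a single step; a minimal sketch using a kubectl jsonpath template (assumes a kubeconfig that points at the target cluster):

```shell
# Print NAME and INTERNAL-IP for every node, one node per line.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```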
 
- Ensure that all nodes are synchronized with the NTP server. 
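Time synchronization can be verified on each node before installation; a sketch assuming systemd hosts running chronyd (use the equivalent ntpd commands otherwise):

```shell
# "yes" indicates the node's clock is NTP-synchronized.
timedatectl show --property=NTPSynchronized --value

# "Leap status : Normal" indicates chronyd is tracking a source.
chronyc tracking | grep 'Leap status'
```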
- Reserve network ports for the exclusive use of InfoScale as follows:

  | Component | Port |
  |---|---|
  | LLT over UDP | 50000 onwards, serially (as many ports as configured LLT links) |
  | VVR (needed only if you want to configure DR) | 4145 (UDP), 8199 (TCP), 8989 (TCP) |
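The reserved ports must also be reachable between the nodes. On distributions that use firewalld, the rules could be opened as sketched below (a hypothetical four-link LLT configuration, so UDP ports 50000-50003; adjust the range to your link count):

```shell
# LLT over UDP: one port per configured LLT link, starting at 50000.
firewall-cmd --permanent --add-port=50000-50003/udp

# VVR ports: needed only if you plan to configure DR.
firewall-cmd --permanent --add-port=4145/udp
firewall-cmd --permanent --add-port=8199/tcp
firewall-cmd --permanent --add-port=8989/tcp

firewall-cmd --reload
```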
- Add local or shared storage to all the worker nodes before you proceed with the deployment. 
- Ensure that stale InfoScale kernel modules (vxio, vxdmp, veki, vxspec, vxfs, odm, glm, gms) from a previous installation do not exist on any of the worker nodes.
  Note: You can reboot a worker node to unload all stale InfoScale kernel modules. 
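A quick way to check a worker node for the stale modules listed above is to filter the loaded-module list; no output means the node is clean:

```shell
# Print any loaded InfoScale kernel modules left over from a previous install.
lsmod | awk '$1 ~ /^(vxio|vxdmp|veki|vxspec|vxfs|odm|glm|gms)$/ {print $1}'
```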
- Hostnames of the InfoScale nodes must exactly match the fully qualified domain names (FQDNs) of the Kubernetes nodes. 
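To confirm the match, compare each node's local FQDN with the name registered in Kubernetes; a sketch (run kubectl wherever a kubeconfig is available):

```shell
# Local fully qualified hostname of this node.
hostname -f

# Node names as registered in the Kubernetes cluster.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
```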
- Ensure that SELinux is disabled on all the nodes. 
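The SELinux state can be checked per node; a sketch assuming a RHEL-family distribution (a persistent change requires editing /etc/selinux/config and rebooting):

```shell
# Current runtime state: expected value is "Disabled".
getenforce

# Persistent setting read from the config file: expected value is "disabled".
sed -n 's/^SELINUX=//p' /etc/selinux/config
```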
- The Kubernetes cluster must be configured by using IPv4 only.