Important Update: Cohesity Products Documentation
All Cohesity product documentation is now managed via the Cohesity Docs Portal: https://docs.cohesity.com/HomePage/Content/home.htm. Some documentation available here may not reflect the latest information or may no longer be accessible.
InfoScale™ for Kubernetes 9.1.0 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale on OpenShift
- Introduction
- Prerequisites
- Considerations for configuring a cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- InfoScale for Kubernetes with Red Hat OpenShift virtualization platform
- Installing InfoScale on a system with Internet connectivity
- Using InfoScale storage with OpenShift virtualization
- InfoScale for Kubernetes support for Two-Node Arbiter (TNA) clusters
- Installing Arctera InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
- Downloading Installer
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Considerations for configuring a cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on Kubernetes
- Undeploying and uninstalling InfoScale
- Configuring KMS-based encryption on an OpenShift cluster
- Configuring KMS-based encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Creating ephemeral volumes
- Creating node affine volumes
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Troubleshooting
Migration and failback considerations
To keep VM migration, failover, and failback smooth, the DR workflow leaves persistent storage on the source cluster even if VMs are removed during a DR event. This protects application data and allows fast, consistent recovery.
Configure each VM's PersistentVolume (PV) with spec.persistentVolumeReclaimPolicy: Retain.
This setting ensures that the PV is not deleted when its PVC or VM is removed during migration or failover.
Keeping the PV lets the VM reattach to its original data during restore or failback. It reduces recovery time and prevents data loss.
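As a minimal sketch of this configuration, a PV with the Retain policy might look like the following. The metadata name, capacity, and CSI driver details are illustrative placeholders, not fixed values:

```yaml
# Example PV excerpt; names and sizes are placeholders for illustration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vm-data-pv            # placeholder name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  # Retain keeps the volume (and its data) after the PVC or VM is deleted,
  # so the VM can reattach to the same data during restore or failback.
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: example.csi.driver   # illustrative; use your cluster's CSI driver name
    volumeHandle: vm-data-vol    # placeholder volume handle
```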
If the DR workflow sees that Retain is not set, it:
- Finds the VM's DataVolume (DV).
- Locates the related PVs and updates their reclaim policy to Retain.
This keeps failover and failback consistent and safe, even if the original PV was misconfigured.
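The remediation the workflow performs automatically can be sketched manually with kubectl. The placeholders <namespace>, <vm-dv>, and <pv-name> are assumptions for illustration, not fixed names:

```shell
# Find the VM's DataVolume; CDI creates a PVC with the same name as the DV.
kubectl get datavolume <vm-dv> -n <namespace>

# Identify the PV bound to that PVC.
kubectl get pvc <vm-dv> -n <namespace> -o jsonpath='{.spec.volumeName}'

# Update the PV's reclaim policy to Retain, as the DR workflow would.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```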
For best results, create and link DVs and PVs to VirtualMachines using standard OpenShift Virtualization workflows.
Manually created DVs or PVs may not be updated automatically by the DR workflow, so they can miss reclaim-policy fixes. Using integrated provisioning simplifies recovery and avoids storage inconsistencies during DR operations.
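To spot manually created PVs that the workflow might not fix, you can audit reclaim policies directly. This is a sketch of one way to check, not part of the DR tooling itself:

```shell
# List every PV with its reclaim policy; any PV backing a protected VM
# that does not show Retain should be patched before a DR event.
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy
```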