Arctera InfoScale™ for Kubernetes 8.0.400 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional prerequisites for Azure Red Hat OpenShift (ARO)
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Removing and adding back nodes to an Azure Red Hat OpenShift (ARO) cluster
- Installing Arctera InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
- Downloading Installer
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on Kubernetes
- Undeploying and uninstalling InfoScale
- Installing Arctera InfoScale on RKE2
- Introduction
- Prerequisites
- Installing Node Feature Discovery (NFD) Operator and Cert-Manager on RKE2 cluster
- Downloading Installer
- Tagging the InfoScale images on RKE2
- Applying licenses
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on RKE2
- Undeploying and uninstalling InfoScale
- Configuring KMS-based encryption on an OpenShift cluster
- Configuring KMS-based encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Creating ephemeral volumes
- Creating node affine volumes
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Troubleshooting
PV rebuild
The PV Rebuild feature enables users to restore Persistent Volumes (PVs) and Volume Snapshot Contents (VSCs) when reinstalling Kubernetes (K8s) or OpenShift (OCP) instead of performing an upgrade. This feature ensures data retention during an InfoScale cluster rebuild and is also useful in cases where the K8s/OCP setup is accidentally destroyed.
By default, the pv-rebuild feature is enabled.
Check the PV rebuild status
To verify whether PV rebuild is enabled, run the following command:
# oc describe infoscalecluster -n infoscale-vtas <infoscale_cluster_name> | grep "infoscale.veritas.com/pv-rebuild"
Disable the PV rebuild feature
To disable the PV rebuild feature, use the following command:
# oc annotate infoscalecluster -n infoscale-vtas <infoscale_cluster_name> "infoscale.veritas.com/pv-rebuild"="disabled" --overwrite
Note:
Ensure that the cluster ID and cluster name remain the same during the InfoScale cluster rebuild.
- Restore PVs and PVCs
After the InfoScale cluster rebuild, PVs are in the Available state.
To bind the PVs to Persistent Volume Claims (PVCs), reapply the same YAML files that were used to create the PVCs.
Once applied, the PVCs and PVs transition to the Bound state.
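A reapplied PVC manifest might look like the following sketch. The claim name, namespace, storage class, and size are placeholders; they must match the values used when the PVC was originally created so that the claim binds to the retained PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # must match the original PVC name
  namespace: default        # must match the original namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: infoscale-sc   # hypothetical; use the original StorageClass
  resources:
    requests:
      storage: 5Gi          # must match the original request
```

If the retained PV still carries a claimRef from before the rebuild, the reapplied PVC binds to it once the name, namespace, and capacity match.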
- Restore VSCs to Volume Snapshots (VS)
Retrieve the names of all Volume Snapshots (VS) and Volume Snapshot Contents (VSCs) using the following command:
# oc get vs,vsc -A
Modify the existing Volume Snapshot (VS) YAML file so that its source references the retained Volume Snapshot Content:
volumeSnapshotContentName: <vsc_name>
Ensure that the VS and VSC names match correctly.
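A modified VS manifest might look like the following sketch, assuming the standard external-snapshotter `snapshot.storage.k8s.io/v1` CRDs; the snapshot name, namespace, and class are placeholders to adjust for your environment:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap       # must match the VS name recorded in the VSC
  namespace: default
spec:
  source:
    # Bind this VS to the pre-existing, retained snapshot content
    volumeSnapshotContentName: <vsc_name>
```

Note that `volumeSnapshotContentName` sits under `spec.source`; a VS whose source names a pre-existing VSC binds to that content instead of triggering a new snapshot.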