Important Update: Cohesity Products Documentation
All Cohesity product documentation is now managed via the Cohesity Docs Portal: https://docs.cohesity.com/HomePage/Content/home.htm. Some documentation available here may not reflect the latest information or may no longer be accessible.
Arctera InfoScale™ for Kubernetes 8.0.410 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional prerequisites for Azure Red Hat OpenShift (ARO)
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- InfoScale for Kubernetes with Red Hat OpenShift virtualization platform
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Removing and adding back nodes to an Azure Red Hat OpenShift (ARO) cluster
- Installing Arctera InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
- Downloading Installer
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on Kubernetes
- Undeploying and uninstalling InfoScale
- Configuring KMS-based encryption on an OpenShift cluster
- Configuring KMS-based encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Creating ephemeral volumes
- Creating node affine volumes
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Troubleshooting
Removing nodes from an existing cluster
The remove node operation is supported only for shared storage.
It is not supported in the following cases:
When other nodes (excluding those targeted for removal) in the cluster are in the 'out of cluster' state.
During an upgrade.
In the FSS environment.
Note:
When performing this operation on OpenShift, replace kubectl with oc in the following procedure and run the commands on the bastion node instead of the master node.
Complete the following steps to remove nodes from an existing InfoScale cluster:
- Ensure that all nodes (excluding the nodes that are being removed) are in the Ready state. Run the following command on the master node to check whether the nodes are ready.
kubectl get nodes
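On clusters with many nodes, the Ready check can be scripted; a minimal sketch, using hypothetical sample output here in place of a live `kubectl get nodes --no-headers` pipe:

```shell
# Hypothetical sample of `kubectl get nodes --no-headers`; in a real cluster,
# replace the variable with the live command's output.
sample='worker-node-1   Ready   worker   10d   v1.27.3
worker-node-2   Ready   worker   10d   v1.27.3
worker-node-3   Ready   worker   10d   v1.27.3'

# Print any node whose STATUS column is not Ready; exit non-zero if one is found.
echo "$sample" | awk '$2 != "Ready" {print $1 " is " $2; bad=1} END {exit bad}' \
  && echo "all nodes Ready"
```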
- Ensure that the nodes that are not being removed are in the Joined state. Run the following command on the master node and review the output.
kubectl describe infoscalecluster <Name of the InfoScale cluster> -n <Namespace>
For example, if worker-node-3 is planned for removal, verify that worker-node-1 and worker-node-2 are in the Joined state by checking their Role status:
Cluster Name: <Name of the InfoScale cluster>
Cluster Nodes:
  Node Name: worker-node-1
  Role: Joined,Master
  Node Name: worker-node-2
  Role: Joined,Slave
  Node Name: worker-node-3
  Role: Joined,Slave
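The Joined check can also be scripted by filtering the Role lines from the describe output; a minimal sketch, using hypothetical sample text here in place of the live command's output:

```shell
# Hypothetical excerpt of the describe output; in a real cluster, pipe the
# output of the kubectl describe command shown above instead.
sample='Cluster Nodes:
  Node Name: worker-node-1
  Role: Joined,Master
  Node Name: worker-node-2
  Role: Joined,Slave
  Node Name: worker-node-3
  Role: Joined,Slave'

# Print any Role line that does not contain Joined; exit non-zero if one is found.
echo "$sample" | awk '/Role:/ && $2 !~ /Joined/ {print; bad=1} END {exit bad}' \
  && echo "all listed nodes Joined"
```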
- The InfoScale cluster must be in a running state. Run the following command on the master node to verify.
kubectl get infoscalecluster -n <Namespace>
See the State in the output similar to the following:
NAME                              VERSION   CLUSTERID      STATE     DISKGROUPS   STATUS     AGE
<Name of the InfoScale cluster>   8.0.410   <Cluster ID>   Running   <DG Name>    <Status>   3d13h
- Edit the clusterInfo section of the sample /infoscale-yamls-v8.0.410/kubernetes/cr.yaml to remove information about the nodes that are planned for removal.
- Run the following command on the master node to initiate the remove node operation.
kubectl apply -f /infoscale-yamls-v8.0.410/kubernetes/cr.yaml
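The clusterInfo edit in the step above can be sketched as follows. This is a hypothetical fragment, not a verbatim sample; the node names are illustrative, and your existing cr.yaml is the reference for the exact field layout:

```yaml
# cr.yaml, clusterInfo section (hypothetical sketch).
# To remove worker-node-3, delete its entry from the list before applying;
# the entries for the remaining nodes stay as they are.
clusterInfo:
  - nodeName: worker-node-1
  - nodeName: worker-node-2
  # - nodeName: worker-node-3    <- entry deleted for the node being removed
```

Any node that is no longer listed in clusterInfo is targeted for removal when the edited cr.yaml is applied.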
- You can run the following command on the master node to check the InfoScale cluster state.
kubectl get infoscalecluster -n <Namespace>
See the state in the output as displayed below. The ProcessingRemoveNode state indicates that the node is being removed.
NAME                              VERSION   CLUSTERID      STATE                  DISKGROUPS   STATUS    AGE
<Name of the InfoScale cluster>   8.0.410   <Cluster ID>   ProcessingRemoveNode   <DG Name>    Healthy   3d13h
- Run the following command to verify that the pods are being removed. It may take some time for the pods to be deleted.
kubectl get pods -n <Namespace> -o wide
- Run the following command on the master node to verify that the cluster state is Running and its status is Healthy.
kubectl get infoscalecluster -n <Namespace>
Verify the output:
NAME                              VERSION   CLUSTERID      STATE     DISKGROUPS   STATUS    AGE
<Name of the InfoScale cluster>   8.0.410   <Cluster ID>   Running   <DG Name>    Healthy   3d13h
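To script this final verification, the STATE and STATUS columns can be extracted from the command's output; a minimal sketch, using a hypothetical sample line here in place of a live `kubectl get infoscalecluster -n <Namespace> --no-headers` pipe:

```shell
# Hypothetical one-line sample of `kubectl get infoscalecluster --no-headers`;
# in a real cluster, replace the variable with the live command's output.
sample='demo-cluster   8.0.410   7217   Running   demo-dg   Healthy   3d13h'

# Columns: NAME VERSION CLUSTERID STATE DISKGROUPS STATUS AGE
echo "$sample" | awk '$4 == "Running" && $6 == "Healthy" {ok=1} END {exit !ok}' \
  && echo "cluster is Running and Healthy"
```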