Important Update: Cohesity Products Documentation
All Cohesity product documentation is now managed via the Cohesity Docs Portal: https://docs.cohesity.com/HomePage/Content/home.htm. Some documentation available here may not reflect the latest information or may no longer be accessible.
InfoScale™ for Kubernetes 9.1.0 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale on OpenShift
- Introduction
- Prerequisites
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- InfoScale for Kubernetes with Red Hat OpenShift virtualization platform
- Installing InfoScale on a system with Internet connectivity
- Using InfoScale storage with OpenShift virtualization
- InfoScale for Kubernetes support for Two-Node Arbiter (TNA) clusters
- Installing Arctera InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
- Downloading Installer
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Considerations for configuring cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on Kubernetes
- Undeploying and uninstalling InfoScale
- Configuring KMS-based encryption on an OpenShift cluster
- Configuring KMS-based encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Creating ephemeral volumes
- Creating node affine volumes
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Troubleshooting
Installing from OperatorHub by using Command Line Interface (CLI)
Complete the following steps.
Downloading infoscale-yamls-v9.1.0.tar.gz
- Download infoscale-yamls-v9.1.0.tar.gz from the Arctera Download Center.
- Unzip and untar the file. A folder /infoscale-yamls-v9.1.0/openshift/olm/ is created, and all the files that are required for installation are available in that folder.

Note:
An OpenShift cluster already has a namespace openshift-operators. You can choose to install InfoScale in openshift-operators. cert-manager (Red Hat-certified) must already be installed for a successful installation of InfoScale.

- If you have installed cert-manager in a namespace other than cert-manager, openshift-cert-manager, or openshift-operators, edit the subscription yaml for the lico, iso, and dr operators and add:

name: CERT_MANAGER_NS
value: <namespace where cert manager is installed>
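In an OLM subscription, such an env entry sits under spec.config. A minimal sketch of the edit, assuming the iso operator's subscription (the metadata values here are illustrative, not the shipped file):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: infoscale-sds-operator   # illustrative; keep the name from the shipped yaml
  namespace: <Namespace>
spec:
  config:
    env:
      # OLM injects these variables into the operator deployment
      - name: CERT_MANAGER_NS
        value: <namespace where cert manager is installed>
```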
Optionally, you can configure a new user, infoscale-admin, that is associated with a Role-Based Access Control (RBAC) clusterrole defined in infoscale-admin-role.yaml, to deploy InfoScale and its dependent components. When configured, infoscale-admin has cluster-wide access only to those resources that are needed to deploy InfoScale and its dependent components, such as NFD and cert-manager, in the desired namespaces.
To provide a secure and isolated environment for the InfoScale deployment and its associated resources, protect the namespace associated with these resources from access by all other users (except the super user of the cluster) by implementing appropriate RBAC.
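A clusterrole of this kind typically grants only the API groups that the deployment needs. The following is a hypothetical sketch for orientation only; the actual rules are defined in the infoscale-admin-role.yaml shipped with the installer bundle:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: infoscale-admin-role
rules:
  # Hypothetical rules for illustration; the shipped file is authoritative
  - apiGroups: ["infoscale.veritas.com", "vlic.veritas.com"]
    resources: ["*"]
    verbs: ["*"]
  - apiGroups: ["operators.coreos.com"]
    resources: ["subscriptions", "operatorgroups", "installplans", "clusterserviceversions"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
```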
Run the following commands on the bastion node to create a new user, infoscale-admin, and a new project, and to assign a role or clusterrole to infoscale-admin. You must be logged in as a super user.
Configuring a new user
- oc new-project <New Project name>
A new project is created for InfoScale deployment.
- oc adm policy add-role-to-user admin infoscale-admin
The following output indicates that administrator privileges are assigned to the new user infoscale-admin within the new project.
clusterrole.rbac.authorization.k8s.io/admin added: "infoscale-admin"
- oc apply -f /infoscale-yamls-v9.1.0/openshift/infoscale-admin-role.yaml
The following output indicates that a clusterrole is created.
clusterrole.rbac.authorization.k8s.io/infoscale-admin-role created
- oc adm policy add-cluster-role-to-user infoscale-admin-role infoscale-admin
The following output indicates that the clusterrole is associated with infoscale-admin.
clusterrole.rbac.authorization.k8s.io/infoscale-admin-role added: "infoscale-admin"
After creating this user, you can log in as infoscale-admin to perform all operations that are involved in installing InfoScale, configuring the cluster, and adding nodes.
Installing Operators
- Run the following command on the bastion node.
Note:
Ignore this step if you want to install in openshift-operators.

oc create namespace <Namespace>
Review output similar to the following to verify that the namespace is created successfully.
namespace/<Namespace> created
- Optionally, if you want to change the default kubelet path, edit /infoscale-yamls-v9.1.0/openshift/olm/infoscale-sub.yaml as follows:

env:
  - name: KUBELET_PATH
    value: <enter the new path>

The default path is /var/lib/kubelet.

Note:
Do not change the kubelet path after clusters are configured.
- Run the following command on the bastion node to create subscription.
Note:
If you want to install InfoScale in openshift-operators, edit /infoscale-yamls-v9.1.0/openshift/olm/infoscale-sub.yaml and change the namespace from <Namespace> to openshift-operators. To install the latest bundle, modify startingCSV to infoscale-sds-operator.v9.1.0.

oc create -f /infoscale-yamls-v9.1.0/openshift/olm/infoscale-sub.yaml
The following output indicates a successful command run.
subscription.operators.coreos.com/infoscale-sds-operator created
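For reference, infoscale-sub.yaml is an OLM Subscription. A plausible sketch, inferred from the package, catalog, and channel names that appear later in this procedure (the field values are assumptions, not the shipped file):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: infoscale-sds-operator
  namespace: <Namespace>
spec:
  channel: fast
  name: infoscale-sds-operator
  source: infoscale-sds-operator-catalog
  sourceNamespace: openshift-marketplace   # assumption
  installPlanApproval: Manual              # the install plan is approved manually later
  startingCSV: infoscale-sds-operator.v9.1.0
```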
- Run the following command on the bastion node to deploy InfoScale licensing operator subscription.
oc create -f /infoscale-yamls-v9.1.0/openshift/olm/licensing-sub.yaml
The following output indicates a successful command run.
subscription.operators.coreos.com/infoscale-licensing-operator-sub created
- Run the following command on the bastion node to create an operator group.
Note:
Ignore this step if you want to install in openshift-operators.

oc create -f /infoscale-yamls-v9.1.0/openshift/olm/infoscale-og.yaml
The following output indicates a successful command run.
operatorgroup.operators.coreos.com/infoscale-opgroup created
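For reference, infoscale-og.yaml is an OLM OperatorGroup that scopes the operators to the chosen namespace. A minimal sketch (the shipped file's exact contents may differ):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: infoscale-opgroup
  namespace: <Namespace>
spec:
  # Operators in this group watch only the listed namespace(s)
  targetNamespaces:
    - <Namespace>
```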
- Run the following command on the bastion node.
oc get sub,og -n <Namespace>
The following output indicates a successful command run.

NAME                                                       PACKAGE                  SOURCE                           CHANNEL
subscription.operators.coreos.com/infoscale-sds-operator   infoscale-sds-operator   infoscale-sds-operator-catalog   fast

NAME                                                       AGE
operatorgroup.operators.coreos.com/infoscale-sds-opgroup   117s
- Run the following command on the bastion node.
oc get installplan -A
Note the installation-name (NAME column) of the InfoScale operator from output similar to the following.

NAMESPACE     NAME            CSV                             APPROVAL    APPROVED
<Namespace>   install-k7hjl   cert-manager-operator.v1.18.0   Automatic   true
<Namespace>   install-9v2q5   infoscale-sds-operator.v9.1.0   Manual      false
<Namespace>   install-xhbqx   nfd.4.19.0-202510211212         Automatic   true
- Run the following command on the bastion node.
Note:
Do not include the angle brackets (< >) in the command.
oc patch installplan <installation-name> --namespace <Namespace> --type merge --patch '{"spec":{"approved":true}}'
The following output indicates a successful command run.
installplan.operators.coreos.com/<installation-name> patched
- Run the following command on the bastion node.
oc get ip -A
Review output similar to the following. Check if APPROVED is true.

NAMESPACE     NAME            CSV                             APPROVAL    APPROVED
<Namespace>   install-k7hjl   cert-manager-operator.v1.18.0   Automatic   true
<Namespace>   install-9v2q5   infoscale-sds-operator.v9.1.0   Manual      true
<Namespace>   install-xhbqx   nfd.4.19.0-202510211212         Automatic   true
- Run the following command on the bastion node to check the status of the CSVs (ClusterServiceVersions).
oc get csv
Components that are being installed or are pending installation are listed as follows:

NAME                                  DISPLAY                                       VERSION   REPLACES                          PHASE
cert-manager-operator.v1.18.0         cert-manager Operator for Red Hat OpenShift   1.18.0    cert-manager-operator.v1.17.0     Succeeded
infoscale-licensing-operator.v9.0.1   InfoScale™ Licensing Operator                 9.0.1                                       Installing
infoscale-sds-operator.v9.1.0         InfoScale™ SDS Operator                       9.1.0     infoscale-sds-operator.v8.0.410   Installing
- Run the following command on the bastion node again until all operators are installed successfully.
oc get csv
Review output similar to the following.

NAME                                  DISPLAY                                       VERSION   REPLACES                          PHASE
cert-manager-operator.v1.18.0         cert-manager Operator for Red Hat OpenShift   1.18.0    cert-manager-operator.v1.17.0     Succeeded
infoscale-licensing-operator.v9.0.1   InfoScale™ Licensing Operator                 9.0.1                                       Succeeded
infoscale-sds-operator.v9.1.0         InfoScale™ SDS Operator                       9.1.0     infoscale-sds-operator.v8.0.410   Succeeded
- Access your web console. Follow steps 11 to 15 in Installing from OperatorHub by using web console to install NodeFeatureDiscovery.
- Run the following command to check the status of all operator pods in <Namespace>.
Note:
If you have installed in openshift-operators, run oc get pods -n openshift-operators.

oc get pods -n cert-manager; oc get pods -n openshift-nfd

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-858d87f86b-p2drg              1/1     Running   0          41h
cert-manager-cainjector-7dbf76d5c8-fkcmz   1/1     Running   0          41h
cert-manager-webhook-7894b5b9b4-xzfv6      1/1     Running   0          41h

NAME                                      READY   STATUS    RESTARTS   AGE
nfd-controller-manager-7765565bf5-9wxqx   1/1     Running   0          38h
nfd-gc-7f8559ff94-dnfvn                   1/1     Running   0          38h
nfd-master-756cc854b7-g8jrt               1/1     Running   0          38h
nfd-worker-bb6wr                          1/1     Running   0          38h
nfd-worker-fb6t9                          1/1     Running   0          38h
nfd-worker-fvgqq                          1/1     Running   0          38h
nfd-worker-sffjw                          1/1     Running   0          38h
nfd-worker-vwc8r                          1/1     Running   0          38h
Applying Licenses
- Edit /infoscale-yamls-v9.1.0/openshift/vlic_v1_license.yaml for the license edition. Optionally, you can change the license name. The default license edition is Developer. You can change the licenseEdition. If you want to configure Disaster Recovery (DR), you must have Enterprise as the license edition. To configure multiple InfoScale clusters, you must have the Storage or Enterprise edition. See Licensing.

apiVersion: vlic.veritas.com/v1
kind: License
metadata:
  name: license-dev
spec:
  # valid licenseEdition values are Developer, Storage or Enterprise
  licenseEdition: "Developer"
  licenseServer: <Optional - IP address of the VIOM server on your system>

- Run oc create -f /infoscale-yamls-v9.1.0/openshift/vlic_v1_license.yaml on the bastion node.
- Run oc get licenses on the bastion node to verify whether the licenses have been applied successfully.
An output similar to the following indicates that the license is applied successfully.

NAME      NAMESPACE   LICENSE-EDITION   AGE
license               DEVELOPER         27s
Deploying InfoScale Cluster
- Edit /infoscale-yamls-v9.1.0/openshift/cr.yaml.
---
apiVersion: infoscale.veritas.com/v1
kind: InfoScaleCluster
metadata:
  # Please change cluster name if required
  name: infoscalecluster-dev
  namespace: infoscale-vtas
spec:
  # This denotes version of the InfoScale release
  version: 9.1.0
  # (optional) This denotes the user-provisioned ID for InfoScale cluster
  # The value can range from 1 to 65535
  # clusterID: <Cluster_ID>
  clusterInfo:
    # Please change worker node names according to your cluster configuration
    # Mention additional worker node names and corresponding node-specific
    # parameters, as applicable.
    - nodeName: "<Hostname_of_worker_node>"
      # (optional) Specifies node IP address(es) to be used for InfoScale cluster.
      # If omitted, InfoScale chooses available IP address(es) for cluster config.
      # Please change worker IP address(es) according to your cluster configuration
      # ip:
      #   - "<IP_Address_1>"
      #   - "<IP_Address_2>"
      # (optional) Specifies a node-specific list of devices for InfoScale configuration.
      #
      # Only one of the following fields can be used at a time:
      # - `includeDevices`: List of devices to be explicitly included in InfoScale configuration.
      # - `excludeDevice`: List of devices to be excluded from InfoScale configuration.
      #
      # Please update the device names according to your cluster setup.
      # includeDevices:
      #   - "<Hardware_Path_to_device_to_be_included>"
      # excludeDevice:
      #   - "<Hardware_Path_to_device_to_be_excluded>"
      # (optional) Specifies node-specific list of devices to be used for fencing purposes.
      # It is sufficient to provide fencing device information from one node.
      # Please change device names according to your cluster configuration
      # fencingDevice:
      #   - "<Hardware_Path_to_device_to_be_used_for_fencing>"
    - nodeName: "<Hostname_of_worker_node>"
      # (optional) Specifies node IP address(es) to be used for InfoScale cluster.
      # If omitted, InfoScale chooses available IP address(es) for cluster config.
      # Please change worker IP address(es) according to your cluster configuration
      # ip:
      #   - "<IP_Address_1>"
      #   - "<IP_Address_2>"
      # (optional) Specifies node-specific list of devices to be excluded from
      # InfoScale configuration.
      # Please change device names according to your cluster configuration
      # excludeDevice:
      #   - "<Hardware_Path_to_device_to_be_excluded>"
  # Please change the customImageRegistry below according to your environment
  # This is mandatory for Kubernetes and air-gapped system deployments
  # This is optional for OCP deployments
  # customImageRegistry: "<Custom_Registry_Address>"
  # (optional) Specifies whether SCSI-3 Persistent Reservation should be enabled.
  # If omitted, SCSI3-PR reservation is disabled by default.
  # Valid values are true or false.
  # enableScsi3pr: <true|false>
  # (optional) Specifies whether Disk Group Level Encryption should be enabled.
  # If omitted, Disk Group Level Encryption is disabled by default.
  # Valid values are true or false.
  # encrypted: <true|false>
  # (optional) Specifies whether the Disk Group Level Encryption key should be the same
  # for all volumes in the DG.
  # If omitted, a different key is created for each volume by default.
  # Valid values are true or false.
  # sameEncKey: <true|false>
  # (optional) Specifies whether to create a shared non-FSS disk group.
  # With this option, only the fully shared disks are included in the disk group.
  # If omitted, an FSS disk group is created by default.
  # Valid values are true or false.
  # isSharedStorage: <true|false>

You can add up to 16 nodes. To add includeDevices, see Using IncludeDevices for selective storage management.
Note:
Do not enclose parameter values in angle brackets (< >). For example, if Primarynode is the name of the first node, then for nodeName:, enter nodeName: Primarynode. InfoScale on Kubernetes is a keyless deployment.
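As an illustration of the note above, a minimally filled-in custom resource might look like the following; the hostnames are hypothetical placeholders:

```yaml
apiVersion: infoscale.veritas.com/v1
kind: InfoScaleCluster
metadata:
  name: infoscalecluster-dev
  namespace: infoscale-vtas
spec:
  version: 9.1.0
  clusterInfo:
    - nodeName: "worker-1.example.com"   # hypothetical hostname; no angle brackets
    - nodeName: "worker-2.example.com"
```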
- You can choose to rename cr.yaml. If you rename the file, ensure that you use that name in the next step.
Note:
Arctera recommends renaming cr.yaml and maintaining a custom resource for each cluster. The renamed cr.yaml is used to add more nodes to that InfoScale cluster.
- Run the following command on the master node.
oc create -f /infoscale-yamls-v9.1.0/openshift/cr.yaml
- Run the following command on the master node to find the name and namespace of the cluster.
oc get infoscalecluster -A
- Use the namespace from the output similar to the following:
NAMESPACE        NAME                   VERSION   CLUSTERID   STATE     DISKGROUPS          STATUS    AGE
infoscale-vtas   infoscalecluster-dev   9.1.0     1230        Running   vrts_kube_dg-1230   Healthy   72m
- Run the following command on the master node to verify whether the pods are created successfully.
oc get pods -n infoscale-vtas
- An output similar to the following indicates that the pods are created successfully.

NAME                                            READY   STATUS    RESTARTS   AGE
infoscale-csi-controller-8c5bfcdbd-g9jzp        5/5     Running   0          3m22s
infoscale-csi-node-6pt78                        2/2     Running   0          3m22s
infoscale-csi-node-bczgc                        2/2     Running   0          3m22s
infoscale-csi-node-cbxkf                        2/2     Running   0          3m22s
infoscale-csi-node-k7l7w                        2/2     Running   0          3m22s
infoscale-csi-node-mk48n                        2/2     Running   0          3m22s
infoscale-fencing-controller-566dc674fb-ghqbg   1/1     Running   0          3m23s
infoscale-fencing-enabler-6688s                 1/1     Running   0          3m23s
infoscale-fencing-enabler-hrqgr                 1/1     Running   0          3m23s
infoscale-fencing-enabler-n728h                 1/1     Running   0          3m23s
infoscale-fencing-enabler-qsbdf                 1/1     Running   0          3m23s
infoscale-fencing-enabler-tm2zk                 1/1     Running   0          3m23s
infoscale-licensing-operator-786df478b-5snwz    1/1     Running   0          135m
infoscale-sds-1230-f836b69ee261cd15-c4nsq       1/1     Running   0          3m23s
infoscale-sds-1230-f836b69ee261cd15-fnj6k       1/1     Running   0          3m23s
infoscale-sds-1230-f836b69ee261cd15-j58rj       1/1     Running   0          3m23s
infoscale-sds-1230-f836b69ee261cd15-m5597       1/1     Running   0          3m23s
infoscale-sds-1230-f836b69ee261cd15-nhhm9       1/1     Running   0          3m23s
infoscale-sds-operator-5696f66584-vclc4         1/1     Running   0          135m
infoscale-toolset-1230-d596bb7bf-4ldqx          1/1     Running   0          3m23s
infoscale-toolset-1230-d596bb7bf-9n4zq          1/1     Running   0          3m23s
infoscale-toolset-1230-d596bb7bf-jqnx6          1/1     Running   0          3m23s
infoscale-toolset-1230-d596bb7bf-r25j5          1/1     Running   0          3m23s
infoscale-toolset-1230-d596bb7bf-zkwjr          1/1     Running   0          3m23s
After a successful InfoScale deployment, a disk group is automatically created.
- You can now create Persistent Volumes/Persistent Volume Claims (PV/PVC) by using the corresponding storage class.
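As a sketch of this final step, a PVC that requests storage from an InfoScale CSI storage class might look like the following. The storage class name csi-infoscale-sc is an assumption; substitute the class created for your deployment:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                 # hypothetical claim name
  namespace: infoscale-vtas
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-infoscale-sc   # assumption; use your InfoScale storage class
```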