Important Update: Cohesity Products Documentation


All Cohesity product documentation is now managed via the Cohesity Docs Portal: https://docs.cohesity.com/HomePage/Content/home.htm. Some documentation available here may not reflect the latest information or may no longer be accessible.

InfoScale™ for Kubernetes 9.1.0 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.1)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
    2. Guidelines for setting the media speed for LLT interconnects
    3. Guidelines for setting the maximum transmission unit (MTU) for LLT
    4. Synchronizing time settings on cluster nodes
    5. Securing your InfoScale deployment
    6. Configuring kdump
  4. Installing Arctera InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
      1. Enabling kubelet inhibitor with systemd
      2. Preflight checks before the product deployment
    3. Considerations for configuring cluster or adding nodes to an existing cluster
    4. Creating multiple InfoScale clusters
    5. InfoScale for Kubernetes with Red Hat OpenShift virtualization platform
    6. Installing InfoScale on a system with Internet connectivity
      1. Using IncludeDevices for selective storage management
      2. Installing from OperatorHub by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
      4. Installing by using YAML
    7. Using InfoScale storage with OpenShift virtualization
    8. InfoScale for Kubernetes support for Two-Node Arbiter (TNA) clusters
      1. Prerequisites and compatibility requirements
      2. Configuration
      3. Installation
  5. Installing Arctera InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    4. Downloading Installer
    5. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    6. Applying licenses
    7. Considerations for configuring cluster or adding nodes to an existing cluster
    8. Creating multiple InfoScale clusters
    9. Installing InfoScale on Kubernetes
      1. Configuring cluster
    10. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Renewing with an external CA certificate
  7. Configuring KMS-based encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Renewing with an external CA certificate
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
    13. Creating ephemeral volumes
    14. Creating node affine volumes
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Configuring DNS
      4. Configuring Disaster Recovery Plan
    7. Tech Preview: Disaster recovery in OpenShift virtualization environment
      1. Capturing and replicating VM resources
      2. Graceful workload shutdown
      3. Prerequisites and compatibility requirements
        1. Required operators on the DR site
      4. Deployment behavior
      5. Migration and failback considerations
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding nodes to an existing cluster
    2. Removing nodes from an existing cluster
    3. Adding storage to an InfoScale cluster
    4. Managing licenses
    5. Monitoring InfoScale
    6. Configuring Alerts for monitoring InfoScale
    7. Draining InfoScale nodes
    8. Using InfoScale toolset
    9. Changing the peerinact value in a cluster
    10. PV rebuild
  14. Troubleshooting
    1. Adding a sort data collector utility
    2. Collecting logs by using SORT Data Collector
    3. Approving certificate signing requests (csr) for OpenShift
    4. Cert Renewal related
    5. PVC deletions after PV rebuilds
    6. Troubleshooting when adding storage to InfoScale cluster
    7. Known Issues
    8. Limitations

Installing from OperatorHub by using Command Line Interface (CLI)

Complete the following steps.

Downloading infoscale-yamls-v9.1.0.tar.gz

  1. Download infoscale-yamls-v9.1.0.tar.gz from the Arctera Download Center.
  2. Unzip and untar the file. A folder /infoscale-yamls-v9.1.0/openshift/olm/ is created and all the files that are required for installation are available in the folder.

    Note:

    An OpenShift cluster already has a namespace openshift-operators. You can choose to install InfoScale in openshift-operators. cert-manager (Red Hat-certified) must already be installed for a successful installation of InfoScale.

  3. If you have installed cert-manager in a namespace other than cert-manager, openshift-cert-manager, or openshift-operators, edit the subscription YAML files for the lico, iso, and dr operators and add the following (a sketch of an edited subscription follows these steps):
    name: CERT_MANAGER_NS
    value: <namespace where cert manager is installed>
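
As a point of reference, the following minimal sketch shows where such an environment variable sits in an operator subscription, assuming the shipped subscription files use the standard OLM spec.config.env mechanism. The package name, channel, and source values shown here are illustrative; keep the values from the shipped YAML and add only the env entry.

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: infoscale-licensing-operator-sub      # subscription name from the shipped licensing-sub.yaml
      namespace: <Namespace>
    spec:
      channel: fast                               # illustrative; keep the shipped value
      name: infoscale-licensing-operator          # illustrative; keep the shipped value
      source: <catalog source from the shipped file>
      sourceNamespace: <source namespace from the shipped file>
      config:
        env:
          - name: CERT_MANAGER_NS
            value: <namespace where cert manager is installed>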

Optionally, you can configure a new user, infoscale-admin, to deploy InfoScale and its dependent components. This user is associated with a Role-based Access Control (RBAC) clusterrole that is defined in infoscale-admin-role.yaml (an illustrative example of such a clusterrole follows the procedure below). When configured, infoscale-admin has cluster-wide access to only those resources that are needed to deploy InfoScale and its dependent components, such as NFD and cert-manager, in the desired namespaces.

To provide a secure and isolated environment for the InfoScale deployment and its associated resources, the namespace associated with these resources must be protected from access by all other users (except the cluster super user), with appropriate RBAC implemented.

Run the following commands on the bastion node to create a new user (infoscale-admin) and a new project, and to assign a role or clusterrole to infoscale-admin. You must be logged in as a super user.

Configuring a new user

  1. oc new-project <New Project name>

    A new project is created for InfoScale deployment.

  2. oc adm policy add-role-to-user admin infoscale-admin

    Following output indicates that administrator privileges are assigned to the new user - infoscale-admin within the new project.

    clusterrole.rbac.authorization.k8s.io/admin added: "infoscale-admin"
  3. oc apply -f /infoscale-yamls-v9.1.0/openshift/infoscale-admin-role.yaml

    Following output indicates that a clusterrole is created.

    clusterrole.rbac.authorization.k8s.io/infoscale-admin-role created
  4. oc adm policy add-cluster-role-to-user infoscale-admin-role infoscale-admin

    The following output indicates that a clusterrole is created and is associated with infoscale-admin.

    clusterrole.rbac.authorization.k8s.io/infoscale-admin-role added:
              "infoscale-admin"
    

After creating this user, you can log in as infoscale-admin to perform all operations that are involved in installing InfoScale, configuring the cluster, and adding nodes.
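
For reference, the clusterrole created from infoscale-admin-role.yaml has the following general shape. The rules below are only an illustration of how such a role scopes access; the shipped infoscale-admin-role.yaml defines the actual resources and verbs.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: infoscale-admin-role
    rules:
      # Illustrative rules only; the shipped file defines the real permissions.
      - apiGroups: ["operators.coreos.com"]
        resources: ["subscriptions", "operatorgroups", "installplans", "clusterserviceversions"]
        verbs: ["get", "list", "watch", "create", "patch"]
      - apiGroups: ["infoscale.veritas.com"]
        resources: ["infoscaleclusters"]
        verbs: ["get", "list", "watch", "create", "update", "delete"]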

Installing Operators

  1. Run the following command on the bastion node.

    Note:

    Ignore this step if you want to install in openshift-operators.

    oc create namespace <Namespace>

    Review output similar to the following to verify whether the namespace is created successfully.

    namespace/<Namespace> created
  2. Optionally, if you want to change the default kubelet path, edit /infoscale-yamls-v9.1.0/openshift/olm/infoscale-sub.yaml as follows (an annotated sketch of an edited infoscale-sub.yaml follows this procedure).

    env:
       - name: KUBELET_PATH
         value: <enter the new path>

    The default path is /var/lib/kubelet.

    Note:

    Do not change the kubelet path after clusters are configured.

  3. Run the following command on the bastion node to create subscription.

    Note:

    If you want to install InfoScale in openshift-operators, edit /infoscale-yamls-v9.1.0/openshift/olm/infoscale-sub.yaml. Change namespace from <Namespace> to openshift-operators. To install the latest bundle, modify startingCSV to infoscale-sds-operator.v9.1.0.

    oc create -f /infoscale-yamls-v9.1.0/openshift/olm/infoscale-sub.yaml

    Following output indicates a successful command run.

    subscription.operators.coreos.com/infoscale-sds-operator created
  4. Run the following command on the bastion node to deploy InfoScale licensing operator subscription.

    oc create -f /infoscale-yamls-v9.1.0/openshift/olm/licensing-sub.yaml

    Following output indicates a successful command run.

    subscription.operators.coreos.com
      /infoscale-licensing-operator-sub created
  5. Run the following command on the bastion node to create an operator group.

    Note:

    Ignore this step if you want to install in openshift-operators.

    oc create -f /infoscale-yamls-v9.1.0/openshift/olm/infoscale-og.yaml

    Following output indicates a successful command run.

    operatorgroup.operators.coreos.com/infoscale-opgroup created
  6. Run the following command on the bastion node.

    oc get sub,og -n <Namespace>

    Following output indicates a successful command run.

     
    NAME                                                        PACKAGE                  SOURCE                           CHANNEL
    subscription.operators.coreos.com/infoscale-sds-operator    infoscale-sds-operator   infoscale-sds-operator-catalog   fast

    NAME                                                        AGE
    operatorgroup.operators.coreos.com/infoscale-sds-opgroup    117s
    
  7. Run the following command on the bastion node.

    oc get installplan -A

    Note the installation-name (for example, install-9v2q5 for the InfoScale operator) from output similar to the following; you need it in the next step.

    NAMESPACE    NAME            CSV                             APPROVAL   APPROVED
    <Namespace>  install-k7hjl   cert-manager-operator.v1.18.0   Automatic  true
    <Namespace>  install-9v2q5   infoscale-sds-operator.v9.1.0   Manual     false
    <Namespace>  install-xhbqx   nfd.4.19.0-202510211212         Automatic  true
  8. Run the following command on the bastion node.

    Note:

    Do not include the angle brackets (< >) in the command.

    oc patch installplan <installation-name> --namespace <Namespace> --type merge --patch '{"spec":{"approved":true}}'

    Following output indicates a successful command run.

    installplan.operators.coreos.com/<installation-name> patched
  9. Run the following command on the bastion node.

    oc get ip -A

    Review output similar to the following. Check that APPROVED is true.

    NAMESPACE     NAME            CSV                             APPROVAL   APPROVED
    <Namespace>   install-k7hjl   cert-manager-operator.v1.18.0   Automatic  true
    <Namespace>   install-9v2q5   infoscale-sds-operator.v9.1.0   Manual     true
    <Namespace>   install-xhbqx   nfd.4.19.0-202510211212         Automatic  true
  10. Run the following command on the bastion node to check the status of csv.

    oc get csv

    Components that are being installed or are pending installation are listed as follows:

    NAME                                  DISPLAY                                       VERSION   REPLACES                          PHASE
    cert-manager-operator.v1.18.0         cert-manager Operator for Red Hat OpenShift   1.18.0    cert-manager-operator.v1.17.0     Succeeded
    infoscale-licensing-operator.v9.0.1   InfoScale™ Licensing Operator                 9.0.1                                       Installing
    infoscale-sds-operator.v9.1.0         InfoScale™ SDS Operator                       9.1.0     infoscale-sds-operator.v8.0.410   Installing
  11. Run the following command on the bastion node again until all operators are installed successfully.

    oc get csv

    Review output similar to the following.

    NAME                                  DISPLAY                                       VERSION   REPLACES                          PHASE
    cert-manager-operator.v1.18.0         cert-manager Operator for Red Hat OpenShift   1.18.0    cert-manager-operator.v1.17.0     Succeeded
    infoscale-licensing-operator.v9.0.1   InfoScale™ Licensing Operator                 9.0.1                                       Succeeded
    infoscale-sds-operator.v9.1.0         InfoScale™ SDS Operator                       9.1.0     infoscale-sds-operator.v8.0.410   Succeeded
    
  12. Access your web console. Follow steps 11 to 15 in Installing from OperatorHub by using web console to install NodeFeatureDiscovery.
  13. Run the following command to check the status of all operator pods in <Namespace>.

    Note:

    If you have installed in openshift-operators, run oc get pods -n openshift-operators.

    oc get pods -n cert-manager; oc get pods -n openshift-nfd

    
    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-858d87f86b-p2drg              1/1     Running   0          41h
    cert-manager-cainjector-7dbf76d5c8-fkcmz   1/1     Running   0          41h
    cert-manager-webhook-7894b5b9b4-xzfv6      1/1     Running   0          41h

    NAME                                      READY   STATUS    RESTARTS   AGE
    nfd-controller-manager-7765565bf5-9wxqx   1/1     Running   0          38h
    nfd-gc-7f8559ff94-dnfvn                   1/1     Running   0          38h
    nfd-master-756cc854b7-g8jrt               1/1     Running   0          38h
    nfd-worker-bb6wr                          1/1     Running   0          38h
    nfd-worker-fb6t9                          1/1     Running   0          38h
    nfd-worker-fvgqq                          1/1     Running   0          38h
    nfd-worker-sffjw                          1/1     Running   0          38h
    nfd-worker-vwc8r                          1/1     Running   0          38h
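
As a point of reference for steps 2, 3, 7, and 8 above, an edited infoscale-sub.yaml generally has the following shape. This is a sketch, not the exact shipped file: keep the channel, source, and sourceNamespace values from the file you downloaded, and add or change only the fields described in the steps. installPlanApproval: Manual is what makes the explicit install-plan approval in steps 7 and 8 necessary, and spec.config.env is the standard OLM mechanism that carries variables such as KUBELET_PATH or CERT_MANAGER_NS.

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: infoscale-sds-operator
      namespace: <Namespace>                       # or openshift-operators
    spec:
      channel: fast                                # keep the shipped value
      name: infoscale-sds-operator
      source: infoscale-sds-operator-catalog       # keep the shipped value
      sourceNamespace: <source namespace from the shipped file>
      startingCSV: infoscale-sds-operator.v9.1.0
      installPlanApproval: Manual                  # requires the install plan approval in steps 7 and 8
      config:
        env:
          - name: KUBELET_PATH                     # only if you change the default kubelet path
            value: <enter the new path>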

Applying Licenses

  1. Edit /infoscale-yamls-v9.1.0/openshift/vlic_v1_license.yaml to set the license edition. Optionally, you can change the license name. The default licenseEdition is Developer. If you want to configure Disaster Recovery (DR), you must use the Enterprise license edition. To configure multiple InfoScale clusters, you must use the Storage or Enterprise edition.

    See Licensing.

    apiVersion: vlic.veritas.com/v1
    kind: License
    metadata:
      name: license-dev
    spec:
      # valid licenseEdition values are Developer, Storage, or Enterprise
      licenseEdition: "Developer"
      licenseServer: <Optional - IP address of the VIOM server on your system>
    
  2. Run oc create -f /infoscale-yamls-v9.1.0/openshift/vlic_v1_license.yaml on the bastion node.
  3. Run oc get licenses on the bastion node to verify whether licenses have been successfully applied.

    An output similar to the following indicates that the license is successfully applied.

    NAME          NAMESPACE   LICENSE-EDITION   AGE
    license                   DEVELOPER         27s
    
    

Deploying InfoScale Cluster

  1. Edit /infoscale-yamls-v9.1.0/openshift/cr.yaml.
    ---
    apiVersion: infoscale.veritas.com/v1
    kind: InfoScaleCluster
    metadata:
      # Please change cluster name if required
      name: infoscalecluster-dev
      namespace: infoscale-vtas

    spec:

      # This denotes version of the InfoScale release
      version: 9.1.0

      # (optional) This denotes the user-provisioned ID for InfoScale cluster
      # The value can range from 1 to 65535

      # clusterID: <Cluster_ID>

      clusterInfo:

      # Please change worker node names according to your cluster configuration
      # Mention additional worker node names and corresponding node specific
      # parameters, as applicable.

      - nodeName: "<Hostname_of_worker_node>"

        # (optional) Specifies node IP address(es) to be used for InfoScale cluster.
        # If omitted, InfoScale chooses available IP address(es) for cluster config.
        # Please change worker IP address(es) according to your cluster configuration

        # ip:
        # - "<IP_Address_1>"
        # - "<IP_Address_2>"

        # (optional) Specifies a node-specific list of devices for InfoScale configuration.
        #
        # Only one of the following fields can be used at a time:
        # - `includeDevices`: List of devices to be explicitly included in InfoScale configuration.
        # - `excludeDevice`: List of devices to be excluded from InfoScale configuration.
        #
        # Please update the device names according to your cluster setup.

        # includeDevices:
        # - "<Hardware_Path_to_device_to_be_included>"

        # excludeDevice:
        # - "<Hardware_Path_to_device_to_be_excluded>"

        # (optional) Specifies node specific list of devices to be used for fencing purpose
        # It is sufficient to provide fencing device information from one node
        # Please change device names according to your cluster configuration

        # fencingDevice:
        # - "<Hardware_Path_to_device_to_be_used_for_fencing>"

      - nodeName: "<Hostname_of_worker_node>"

        # (optional) Specifies node IP address(es) to be used for InfoScale cluster.
        # If omitted, InfoScale chooses available IP address(es) for cluster config.
        # Please change worker IP address(es) according to your cluster configuration

        # ip:
        # - "<IP_Address_1>"
        # - "<IP_Address_2>"

        # (optional) Specifies node specific list of devices to be excluded from
        # InfoScale configuration.
        # Please change device names according to your cluster configuration

        # excludeDevice:
        # - "<Hardware_Path_to_device_to_be_excluded>"

      # Please change below customImageRegistry according to your environment
      # This is mandatory for Kubernetes and Air gapped system deployments
      # This is optional for OCP deployments

      # customImageRegistry: "<Custom_Registry_Address>"

      # (optional) Specifies whether SCSI-3 Persistent Reservation should be enabled
      # If omitted, SCSI3-PR reservation will be disabled by default.
      # Valid values are true or false.

      # enableScsi3pr: <true/false>

      # (optional) Specifies whether Disk Group Level Encryption should be enabled
      # If omitted, Disk Group Level Encryption will be disabled by default.
      # Valid values are true or false.

      # encrypted: <true|false>

      # (optional) Specifies whether Disk Group Level Encryption Key should be same
      # for all volumes in the DG
      # If omitted, different key will be created for each volume by default.
      # Valid values are true or false.

      # sameEncKey: <true|false>

      # (optional) Specifies whether to create Shared NonFSS Disk Group.
      # With this option only the full shared disks will be included in Disk Group
      # If omitted, FSS Disk Group will be created by default.
      # Valid values are true or false.

      # isSharedStorage: <true|false>
    

    You can add up to 16 nodes. To add IncludeDevices, see Using IncludeDevices for selective storage management.

    Note:

    Do not enclose parameter values in angle brackets (<>). For example, if Primarynode is the name of the first node, enter nodeName: Primarynode. InfoScale on Kubernetes is a keyless deployment.

  2. You can choose to rename cr.yaml. If you rename the file, ensure that you use that name in the next step.

    Note:

    Arctera recommends renaming cr.yaml and maintaining a custom resource for each cluster. The renamed cr.yaml is used to add more nodes to that InfoScale cluster.

  3. Run the following command on the master node.

    oc create -f /infoscale-yamls-v9.1.0/openshift/cr.yaml

  4. Run the following command on the master node to know the name and namespace of the cluster.

    oc get infoscalecluster -A

  5. Use the namespace from the output similar to the following:
    NAMESPACE       NAME                  VERSION  CLUSTERID  STATE    DISKGROUPS         STATUS   AGE
    infoscale-vtas  infoscalecluster-dev  9.1.0    1230       Running  vrts_kube_dg-1230  Healthy  72m
    
  6. Run the following command on the master node to verify whether the pods are created successfully.

    oc get pods -n infoscale-vtas

  7. An output similar to the following indicates a successful creation of nodes.
     
    
    NAME                                            READY   STATUS    RESTARTS   AGE
    infoscale-csi-controller-8c5bfcdbd-g9jzp        5/5     Running   0          3m22s
    infoscale-csi-node-6pt78                        2/2     Running   0          3m22s
    infoscale-csi-node-bczgc                        2/2     Running   0          3m22s
    infoscale-csi-node-cbxkf                        2/2     Running   0          3m22s
    infoscale-csi-node-k7l7w                        2/2     Running   0          3m22s
    infoscale-csi-node-mk48n                        2/2     Running   0          3m22s
    infoscale-fencing-controller-566dc674fb-ghqbg   1/1     Running   0          3m23s
    infoscale-fencing-enabler-6688s                 1/1     Running   0          3m23s
    infoscale-fencing-enabler-hrqgr                 1/1     Running   0          3m23s
    infoscale-fencing-enabler-n728h                 1/1     Running   0          3m23s
    infoscale-fencing-enabler-qsbdf                 1/1     Running   0          3m23s
    infoscale-fencing-enabler-tm2zk                 1/1     Running   0          3m23s
    infoscale-licensing-operator-786df478b-5snwz    1/1     Running   0          135m
    infoscale-sds-1230-f836b69ee261cd15-c4nsq       1/1     Running   0          3m23s
    infoscale-sds-1230-f836b69ee261cd15-fnj6k       1/1     Running   0          3m23s
    infoscale-sds-1230-f836b69ee261cd15-j58rj       1/1     Running   0          3m23s
    infoscale-sds-1230-f836b69ee261cd15-m5597       1/1     Running   0          3m23s
    infoscale-sds-1230-f836b69ee261cd15-nhhm9       1/1     Running   0          3m23s
    infoscale-sds-operator-5696f66584-vclc4         1/1     Running   0          135m
    infoscale-toolset-1230-d596bb7bf-4ldqx          1/1     Running   0          3m23s
    infoscale-toolset-1230-d596bb7bf-9n4zq          1/1     Running   0          3m23s
    infoscale-toolset-1230-d596bb7bf-jqnx6          1/1     Running   0          3m23s
    infoscale-toolset-1230-d596bb7bf-r25j5          1/1     Running   0          3m23s
    infoscale-toolset-1230-d596bb7bf-zkwjr          1/1     Running   0          3m23s

    After a successful InfoScale deployment, a disk group is automatically created.

  8. You can now create Persistent Volumes/Persistent Volume Claims (PV/PVC) by using the corresponding storage class, as in the sketch after this procedure.

    See Adding nodes to an existing cluster.

    See Removing nodes from an existing cluster.
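
For example, a PVC that requests storage from an InfoScale CSI storage class has the following general shape. This is a minimal sketch: the claim name is illustrative, and the storageClassName must match a storage class that you created for the InfoScale CSI driver (see Creating CSI Objects for OpenShift).

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc                                     # illustrative name
      namespace: infoscale-vtas
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: <InfoScale_CSI_storage_class>    # use your InfoScale CSI storage class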