Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional Prerequisites for Azure Red Hat OpenShift (ARO)
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing the Special Resource Operator
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on Kubernetes
- Installing InfoScale by using the plugin
- Undeploying and uninstalling InfoScale
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Undeploying and uninstalling InfoScale by using CLI
For a custom namespace, complete the following steps to undeploy and uninstall InfoScale.
Note:
If you want to retain the disk group and reuse the volumes you created, note the ClusterID before undeploying and uninstalling InfoScale. You must use this ClusterID while reinstalling InfoScale. Otherwise, clean the disks before uninstalling.
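For example, assuming the ClusterID is recorded in the cr.yaml that was used for deployment (a minimal sketch; the field name and file location may differ in your setup), you can note it down from the bastion node as follows:
# Print the ClusterID line from the deployment cr.yaml (assumes the field is named clusterID).
grep -i clusterid /YAML/OpenShift/cr.yaml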
- Run the following command on the bastion node to undeploy.
oc delete -f /YAML/OpenShift/cr.yaml
Note:
cr.yaml must be the same file that was used for deployment.
- Delete the NodeFeatureDiscovery custom resource by logging in to the web console.
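Alternatively, the NodeFeatureDiscovery custom resource can be deleted from the CLI. The following is only a sketch; it assumes the instance was created in the openshift-nfd namespace, so adjust the name and namespace to match your deployment.
# List NodeFeatureDiscovery instances across all namespaces, then delete the one created for InfoScale.
oc get nodefeaturediscovery -A
oc delete nodefeaturediscovery <nfd_instance_name> -n openshift-nfd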
- Run the following command on the bastion node to delete the license.
oc delete -f /YAML/OpenShift/license_cr.yaml
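To confirm that the license custom resource has been removed, you can query the same file; oc reports a NotFound error once the resource is gone (sketch):
oc get -f /YAML/OpenShift/license_cr.yaml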
- Run the following command on the bastion node to delete the operator group.
oc delete og -n infoscale-vtas infoscale-opgroup
If InfoScale is installed in openshift-operators, run the following command instead.
oc delete og -n openshift-operators infoscale-opgroup
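If you are unsure which namespace contains the operator group, or want to verify that it has been removed, you can list the operator groups first (sketch):
# Lists all operator groups and filters for InfoScale-related entries.
oc get og -A | grep -i infoscale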
- Run the following command on the bastion node to delete subscription for the InfoScale operator.
Note:
Ignore this step if you have installed InfoScale in openshift-operators.
oc delete sub -n infoscale-vtas --all
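To review which subscriptions exist in the namespace before (or after) deleting them, you can list them (sketch):
oc get sub -n infoscale-vtas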
- Run the following commands on the bastion node to delete the ClusterServiceVersion.
oc get csv | egrep "license|Node Feature|Infoscale|Special Resource" | awk '{print $1}'
Use the csv_name values returned by this command in the following commands.
oc delete csv <csv_name> -n infoscale-vtas
Note:
Ignore this step if you have installed InfoScale in openshift-operators.
oc delete clusterserviceversion <csv_name>
Note:
While entering the commands, ensure that you do not enclose the csv_name and crd_name in angle brackets.
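The listing and deletion of the ClusterServiceVersions can also be combined in a single loop. This is only a sketch for an installation in the infoscale-vtas namespace; follow the notes above if you have installed in openshift-operators.
# Delete every matching ClusterServiceVersion in the infoscale-vtas namespace.
for csv_name in $(oc get csv -n infoscale-vtas | egrep "license|Node Feature|Infoscale|Special Resource" | awk '{print $1}'); do
  oc delete csv "$csv_name" -n infoscale-vtas
done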
- Run the following commands on the bastion node to delete the CRDs (Custom Resource Definitions).
oc get crd | egrep 'cert-manager|special|info|nfd'
All the matching CRDs are listed. Use the names of the listed CRDs in the following command to delete them one by one.
oc delete crd <crd_name>
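If you prefer to script the deletion, the following sketch deletes all the matching CRDs in one pass. Verify the list produced by the previous command before running it, because the patterns are broad.
# Delete each CRD whose name matches the InfoScale-related patterns.
for crd_name in $(oc get crd | egrep 'cert-manager|special|info|nfd' | awk '{print $1}'); do
  oc delete crd "$crd_name"
done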
- Run the following command on the bastion node to delete the namespace. Ignore this step if you have installed InfoScale in openshift-operators.
oc delete ns infoscale-vtas
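Namespace deletion can take a few minutes while resources are finalized. You can check the progress until the namespace is no longer listed (sketch):
oc get ns infoscale-vtas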
Note:
After uninstallation, ensure that stale InfoScale kernel modules (vxio/vxdmp/veki/vxspec/vxfs/odm/glm/gms) do not remain loaded on any of the worker nodes. Rebooting a worker node removes all such modules. When fencing is configured, certain keys are set; these must also be deleted after uninstallation. Run ./clearkeys.sh <Path to the first disk>, <Path to the second disk>,... to remove stale keys that might have remained.
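A quick way to check a worker node for stale InfoScale kernel modules is to filter the loaded-module list (sketch; run it on each worker node):
# Any output means InfoScale modules are still loaded; reboot the node to remove them.
lsmod | egrep 'vxio|vxdmp|veki|vxspec|vxfs|odm|glm|gms'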