Veritas InfoScale™ for Kubernetes Environments 8.0.230 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Undeploying and uninstalling InfoScale
Run the following command to undeploy InfoScale from your OpenShift cluster. Additionally, see Deleting Operators from a cluster to ensure a clean undeployment.
Note:
If you want to retain the disk group and reuse the volumes you created, note the ClusterID before undeploying and uninstalling InfoScale. You must use the same ClusterID when re-installing InfoScale. Otherwise, clean the disks before uninstalling.
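To record the ClusterID before undeploying, you can read it from the InfoScale cluster custom resource. A minimal sketch; the resource kind (`infoscalecluster`), the `infoscale-vtas` namespace, and the `.spec.clusterID` field path are assumptions — check your cr.yaml for the actual names:

```shell
#!/bin/sh
# Record the ClusterID before undeploying InfoScale.
# Falls back to a dry run (echo) when oc is unavailable.
OC="${OC:-oc}"
command -v "$OC" >/dev/null 2>&1 || OC=echo

# Kind, namespace, and JSONPath are assumptions; adjust to your cr.yaml.
CLUSTER_ID="$($OC get infoscalecluster -n infoscale-vtas \
    -o jsonpath='{.items[0].spec.clusterID}')"

# Keep the value somewhere safe for reuse at re-install time.
printf '%s\n' "$CLUSTER_ID" > saved-clusterid.txt
echo "Saved ClusterID to saved-clusterid.txt"
```

Supply the saved value as the ClusterID when you re-install InfoScale.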
Run the following command on the bastion node:
oc delete -f /YAML/OpenShift/cr.yaml
Run the following commands to clean up the InfoScale components, such as the Operator, Special Resource (SR), and Special Resource Operator (SRO):
Note:
Run these commands only after all InfoScale pods are terminated.
oc delete -f /YAML/OpenShift/iso.yaml
oc delete -f /YAML/OpenShift/license_cr.yaml
oc delete -f /YAML/OpenShift/lico.yaml
oc delete -f /YAML/OpenShift/sr.yaml
oc delete -f /YAML/OpenShift/sro.yaml
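The undeploy and cleanup commands above can be collected into a single script that also waits for the InfoScale pods to terminate before removing the components. A minimal sketch, assuming the YAML files live under /YAML/OpenShift and the pods run in the infoscale-vtas namespace (the namespace is an assumption); set OC=echo for a dry run:

```shell
#!/bin/sh
# Undeploy InfoScale, wait for its pods to terminate, then clean up.
cleanup() {
    OC="${OC:-oc}"
    # Fall back to a dry run (echo) when oc is unavailable.
    command -v "$OC" >/dev/null 2>&1 || OC=echo
    YAML_DIR="/YAML/OpenShift"

    # Undeploy the InfoScale cluster resource.
    $OC delete -f "$YAML_DIR/cr.yaml"

    # Wait (up to ~10 minutes) until all InfoScale pods have terminated.
    # The namespace name is an assumption; adjust if yours differs.
    i=0
    while [ "$i" -lt 60 ] && [ "$OC" != echo ]; do
        remaining="$($OC get pods -n infoscale-vtas --no-headers 2>/dev/null | wc -l)"
        [ "$remaining" -eq 0 ] && break
        sleep 10
        i=$((i + 1))
    done

    # Remove the remaining components (Operator, SR, SRO, licensing).
    for f in iso.yaml license_cr.yaml lico.yaml sr.yaml sro.yaml; do
        $OC delete -f "$YAML_DIR/$f"
    done
}
```

Call `cleanup` on the bastion node to run the full sequence.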
Note:
After uninstallation, ensure that stale InfoScale kernel modules (vxio/vxdmp/veki/vxspec/vxfs/odm/glm/gms) do not remain loaded on any of the worker nodes. Rebooting a worker node removes all such modules. When fencing is configured, certain keys are set on the disks; these must also be deleted after uninstallation. Run ./clearkeys.sh <Path to the first disk>, <Path to the second disk>,... to remove any stale keys that remain.
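To check a worker node for the stale kernel modules listed above, you can scan the `lsmod` output for those module names. A minimal sketch; the helper function name is illustrative:

```shell
#!/bin/sh
# Report any stale InfoScale kernel modules still loaded on this node.
STALE_MODULES="vxio vxdmp veki vxspec vxfs odm glm gms"

find_stale_modules() {
    # $1: lsmod-style output (module name in the first column).
    for m in $STALE_MODULES; do
        printf '%s\n' "$1" | awk -v mod="$m" '$1 == mod { print mod }'
    done
}

# On a worker node, run:
#   find_stale_modules "$(lsmod)"
# An empty result means no stale modules remain; otherwise, reboot the node.
```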
Reconfiguring InfoScale cluster for upgrading OCP to 4.12.31
Complete the following steps to migrate InfoScale 8.0.210/8.0.220 on OCP 4.10.x or 4.11.x to OCP 4.12.31:
- Scale down the replica count of the applications bound to InfoScale PVCs to 0.
- Note the active replica count and the ClusterID.
- Delete the InfoScale cluster resource.
- If the SDS pods are in a 'TERMINATING' state, run oc delete po <SDS pod name> -n infoscale-vtas --force --grace-period=0
- Delete the license resource, the licensing operator, the InfoScale SDS operator, the special resource, and the special resource operator.
- Delete the NFD operator from infoscale-vtas, if it is present.
Note:
Refer to the earlier sections for instructions on deleting operators.
- If applications or PVCs are active in infoscale-vtas, delete:
  - The relevant resources of the NFD operator
  - The operator group for the InfoScale-sds installation
  - The old InfoScale install plans
- Delete infoscale-vtas, if PVCs are not active.
- Delete the CRDs for InfoScale, special resource, and NFD.
- Upgrade OpenShift to 4.12.31.
- Refer to the earlier sections and re-install InfoScale. Ensure that you use the same ClusterID.
- After the InfoScale cluster is running and the PVCs are in the Bound state, increase the replica count of the respective applications to the previously deployed replicas.
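The first and last steps above can be scripted with `oc scale`. A minimal sketch; the deployment names and the APP_NS namespace are assumptions for illustration, and the original replica counts must be recorded before scaling down:

```shell
#!/bin/sh
# Scale applications bound to InfoScale PVCs down for the migration,
# then back up once InfoScale is re-installed and the PVCs are Bound.
OC="${OC:-oc}"

scale_apps() {
    # $1: target replica count; remaining args: deployment names.
    replicas="$1"; shift
    for app in "$@"; do
        # APP_NS is an assumed namespace; set it to your application namespace.
        $OC scale deployment "$app" -n "${APP_NS:-default}" --replicas="$replicas"
    done
}

# Before the migration (after recording the original counts):
#   scale_apps 0 app1 app2
# After the InfoScale cluster is running and the PVCs are Bound:
#   scale_apps 3 app1 app2
```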