Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional Prerequisites for Azure RedHat OpenShift (ARO)
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing the Special Resource Operator
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
- Considerations for configuring cluster or adding nodes to an existing cluster
- Installing InfoScale on Kubernetes
- Installing InfoScale by using the plugin
- Undeploying and uninstalling InfoScale
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Enabling user access and other pod-related logs in Container environment
OpenShift and Kubernetes clusters have a built-in logging mechanism. You can configure /etc/kubernetes/manifests/kube-apiserver.yaml to capture the following types of logs:
- Software upgrades and configuration file changes
- System (virtual or physical) boot and halt
- Process launches
- Abnormal process exits
- SELinux policy violations
- System login attempts
- Service starts and stops
- Container starts and exits
You can thus log all events related to InfoScale pods, secrets, and config maps.
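The audit policies shown later in this section take effect only if the kube-apiserver static pod is pointed at the policy file. The following is a minimal sketch of the relevant flags; the file paths and retention values are illustrative, and the policy file and log directory must also be mounted into the pod:

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (paths illustrative)
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    - --audit-log-maxage=30     # days to retain old audit log files
    - --audit-log-maxbackup=10  # number of rotated audit log files to keep
```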
Note:
You must first enable EO_COMPLIANCE for your InfoScale deployment. As a prerequisite, ensure that DNS is correctly configured, or that /etc/hosts defines the IP address, fully qualified domain name (FQDN), and host name for each cluster node. Edit the licensing operator deployment and the sds-operator deployment by running oc/kubectl edit deployment -n infoscale-vtas <deployment_name> and oc/kubectl edit deployment -n infoscale-vtas infoscale-licensing-operator. In each deployment, ensure that you set EO_COMPLIANCE to enabled as follows:
- name: EO_COMPLIANCE
  value: enabled
After you edit and save the sds-operator deployment, the InfoScale sds operator restarts automatically. You must restart the other pods manually, one at a time. After a restarted pod is in the 'Ready' state, restart the next pod.
If DR is configured, run oc/kubectl edit deployment -n infoscale-vtas infoscale-dr-manager and enable EO_COMPLIANCE.
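For reference, EO_COMPLIANCE sits under the container's env list in each deployment manifest. A minimal sketch follows; the container name is illustrative, and the rest of the manifest is unchanged:

```yaml
spec:
  template:
    spec:
      containers:
      - name: dr-manager        # container name is illustrative
        env:
        - name: EO_COMPLIANCE
          value: enabled
```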
If you want to enable EO_COMPLIANCE on an OpenShift cluster by using OLM, run oc edit subscription infoscale-sds-operator -n infoscale-vtas and add the following to spec:
config:
  env:
  - name: EO_COMPLIANCE
    value: enabled
Also, run oc edit subscription infoscale-dr-manager -n infoscale-vtas and add the following to spec:
config:
  env:
  - name: EO_COMPLIANCE
    value: enabled
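Assembled, the edited subscription would look similar to the following. This is a sketch: only the config section is prescribed by the steps above; the metadata values follow the commands above, and the remaining spec fields of your existing subscription are unchanged:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: infoscale-sds-operator
  namespace: infoscale-vtas
spec:
  # Existing subscription fields (channel, source, and so on) remain unchanged.
  config:
    env:
    - name: EO_COMPLIANCE
      value: enabled
```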
The Bourne shell is deprecated. You can invoke Bash in sh mode, but command logging is not supported in that mode.
Note:
After you configure these files, the API server must be restarted, so the server experiences downtime. Ensure that you inform the user community about the downtime. If the files are not configured correctly, the API server might not restart. The configuration must be performed by a competent Storage Administrator.
Add the following code to /etc/kubernetes/manifests/kube-apiserver.yaml to log all user login attempts to the InfoScale pods. The user name, the time, and whether the attempt succeeded are logged.
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
rules:
# Log pod/exec requests at RequestResponse level
- level: RequestResponse
  namespaces: ["infoscale-vtas"]
  resources:
  - group: ""
    resources: ["pods/exec"]
# Log everything else at Metadata level
- level: Metadata
  omitStages:
  - "RequestReceived"
Similarly, add the following code to log pod creation and deletion, config map changes, and secrets changes.
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
# Log the Metadata of pod changes in the given namespace.
# Use the namespace where the InfoScale pods are deployed
# in the namespaces tag, for example: namespaces: ["infoscale-vtas"]
- level: Metadata
  resources:
  - group: ""
    resources: ["pods"]
  verbs: ["create", "patch", "update", "delete"]
  namespaces: [""] # Fill namespace
# Log the Request body of configmap changes in the given namespace
- level: Request
  resources:
  - group: ""
    resources: ["configmaps"]
  namespaces: [""] # Fill namespace
# Log the Request body of secrets changes in the given namespace
- level: Request
  resources:
  - group: ""
    resources: ["secrets"]
  namespaces: [""] # Fill namespace
Login attempts to an OpenShift cluster are recorded in the oauth-openshift- pod logs. The log level must be 'debug'. You can run oc edit authentications.operator.openshift.io to change the log level to 'debug'.
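The log level is a field in the spec of the resource that the command above opens; a hedged sketch of the fragment to change (field placement per the OpenShift operator convention):

```yaml
# Fragment of the resource opened by
# 'oc edit authentications.operator.openshift.io cluster'
spec:
  logLevel: Debug   # accepted values include Normal, Debug, Trace
```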
On an OpenShift cluster, pod creation and deletion is logged in the journal. Run journalctl --no-pager on all the worker nodes for information about pod creation and deletion.