Arctera InfoScale™ for Kubernetes 8.0.400 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale on OpenShift
- Introduction
- Prerequisites
- Additional prerequisites for Azure Red Hat OpenShift (ARO)
- Considerations for configuring a cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Removing and adding back nodes to an Azure Red Hat OpenShift (ARO) cluster
- Installing Arctera InfoScale on Kubernetes
- Introduction
- Prerequisites
- Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
- Downloading Installer
- Tagging the InfoScale images on Kubernetes
- Applying licenses
- Considerations for configuring a cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on Kubernetes
- Undeploying and uninstalling InfoScale
- Installing Arctera InfoScale on RKE2
- Introduction
- Prerequisites
- Installing Node Feature Discovery (NFD) Operator and Cert-Manager on RKE2 cluster
- Downloading Installer
- Tagging the InfoScale images on RKE2
- Applying licenses
- Considerations for configuring a cluster or adding nodes to an existing cluster
- Creating multiple InfoScale clusters
- Installing InfoScale on RKE2
- Undeploying and uninstalling InfoScale
- Configuring KMS-based encryption on an OpenShift cluster
- Configuring KMS-based encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- CSI plugin deployment
- Raw block volume support
- Static provisioning
- Dynamic provisioning
- Resizing Persistent Volumes (CSI volume expansion)
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Using InfoScale with non-root containers
- Using InfoScale in SELinux environments
- CSI Drivers
- Creating CSI Objects for OpenShift
- Creating ephemeral volumes
- Creating node affine volumes
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Troubleshooting
Configuring Arctera Oracle Data Manager (VRTSodm)
Arctera Oracle Data Manager (VRTSodm) is offered as part of the InfoScale suite. With VRTSodm, Oracle applications bypass file system caching and locking, enabling faster database I/O.
VRTSodm is enabled by linking libodm.so with the Oracle application. I/O calls from the Oracle application are then routed through the ODM kernel module.
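Once the database is configured and running (see the steps below), you can confirm that Oracle picked up the Arctera library rather than regular file system I/O. A minimal sketch, assuming the deployment name oracle-odm used later in this section and the Oracle container image's default ORACLE_BASE of /opt/oracle; adjust names and paths for your environment:

# Verify that the ODM pseudo device is mounted inside the container
kubectl exec deploy/oracle-odm -- mount | grep /dev/odm

# The Oracle alert log records a "running with ODM" line when the
# Veritas/Arctera ODM library is in use
kubectl exec deploy/oracle-odm -- sh -c 'grep -i "running with ODM" /opt/oracle/diag/rdbms/*/*/trace/alert_*.log'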
The following changes are needed in the Oracle database .yaml file to enable it to run with Arctera ODM:

1. Update the VxFS data volume (<vxfs pvc>) in the following code and add it to the .yaml.

   Note: The Oracle container image requires the data volume to be mounted at /opt/oracle/oradata, and the volume must be writable by the 'oracle' (uid: 54321) user inside the container. The VxFS data volume must be mounted at this path by using a PVC (a sample claim is sketched after these steps). To handle the permissions issue, the following initContainer can be used.

   initContainers:
     - name: fix-volume-permission
       image: ubuntu
       command:
         - sh
         - -c
         - mkdir -p /opt/oracle/oradata && chown -R 54321:54321 /opt/oracle/oradata && chmod 0700 /opt/oracle/oradata
       volumeMounts:
         - name: <vxfs pvc>
           mountPath: /opt/oracle/oradata
           readOnly: false

2. Add the following to your .yaml to disable DNFS.

   args:
     - sh
     - -c
     - cd /opt/oracle/product/19c/dbhome_1/rdbms/lib/ && make -f ins_rdbms.mk dnfs_off && cd $WORKDIR && $ORACLE_BASE/$RUN_FILE

3. Create a hostPath volume devodm in the .yaml, and mount it at /dev/odm.

   Note: On SELinux-enabled systems (including OpenShift), the Oracle database container must be run as privileged.

4. Use the libodm.so that Arctera provides. Run the following commands on the bastion/master node.

   oc/kubectl cp <infoscalepod>:/opt/VRTSodm/lib64/libodm.so .
   oc/kubectl create configmap libodm --from-file libodm.so

5. Mount libodm.so inside the Oracle container as follows:

   volumeMounts:
     - name: libodm-cmapvol
       mountPath: /opt/oracle/product/19c/dbhome_1/rdbms/lib/odm/libodm.so
       subPath: libodm.so
   volumes:
     - name: libodm-cmapvol
       configMap:
         name: libodm
         items:
           - key: libodm.so
             path: libodm.so
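The steps above assume a PVC bound to VxFS-backed storage. A minimal sketch of such a claim, matching the claimName oracle-data-pvc used in the full manifest below; the StorageClass name csi-infoscale-sc is an assumption and must be replaced with a class created during InfoScale CSI provisioning:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  # Assumption: replace with your InfoScale CSI StorageClass name
  storageClassName: csi-infoscale-sc
  resources:
    requests:
      storage: 100Gi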
Run your .yaml on the bastion node of the OpenShift cluster or the master node of the Kubernetes cluster.
Alternatively, copy the following content and create a new file oracle-odm.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracle-odm
  labels:
    app: oracledb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracledb
  template:
    metadata:
      labels:
        app: oracledb
    spec:
      initContainers:
        - name: fix-volume-permission
          image: ubuntu
          command:
            - sh
            - -c
            - mkdir -p /opt/oracle/oradata && chown -R 54321:54321 /opt/oracle/oradata && chmod 0700 /opt/oracle/oradata
          volumeMounts:
            - name: oracle-datavol
              mountPath: /opt/oracle/oradata
              readOnly: false
      containers:
        - name: oracle-app
          securityContext:
            privileged: true
          image: <patched oracle image>  # replace with the link for the patched Oracle container image
          imagePullPolicy: IfNotPresent
          # Modification to the args to disable DNFS before starting the database
          args:
            - sh
            - -c
            - cd /opt/oracle/product/19c/dbhome_1/rdbms/lib/ && make -f ins_rdbms.mk dnfs_off && cd $WORKDIR && $ORACLE_BASE/$RUN_FILE
          resources:
            requests:
              memory: 8Gi
          env:
            - name: ORACLE_SID
              value: "orainst1"
            - name: ORACLE_PDB
              value: orapdb1
            - name: ORACLE_PWD
              value: oracle
          ports:
            - name: listener
              containerPort: 1521
              hostPort: 1521
          volumeMounts:
            - name: oracle-datavol
              mountPath: /opt/oracle/oradata
              readOnly: false
            - name: devodm
              mountPath: /dev/odm
            - name: libodm-cmapvol
              mountPath: /opt/oracle/product/19c/dbhome_1/rdbms/lib/odm/libodm.so
              subPath: libodm.so
      volumes:
        - name: oracle-datavol
          persistentVolumeClaim:
            claimName: oracle-data-pvc
        - name: devodm
          hostPath:
            path: /dev/odm
            type: Directory
        - name: libodm-cmapvol
          configMap:
            name: libodm
            items:
              - key: libodm.so
                path: libodm.so
---
apiVersion: v1
kind: Service
metadata:
  name: ora-listener
  namespace: default
  labels:
    app: oracledb
spec:
  selector:
    app: oracledb
  type: NodePort
  ports:
    - name: ora-listener
      protocol: TCP
      port: 1521
      targetPort: 1521
Save the file, then apply it on the bastion node of the OpenShift cluster or the master node of the Kubernetes cluster to run the Oracle database with ODM enabled.
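The following sequence is a typical way to apply and verify the deployment; use oc in place of kubectl on OpenShift. The label app=oracledb and the service name ora-listener come from the manifest above.

# Apply the manifest
kubectl apply -f oracle-odm.yaml

# Wait for the database pod to become ready
kubectl get pods -l app=oracledb -w

# Find the NodePort assigned to the listener
kubectl get service ora-listener

Clients outside the cluster can then reach the Oracle listener through any node IP on the NodePort that the service reports.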