InfoScale™ 9.0 Virtualization Guide - Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- About InfoScale support for Linux virtualization environments
- About KVM technology
- About InfoScale deployments in OpenShift Virtualization environments
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- InfoScale solutions configuration options for the kernel-based virtual machines environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Section V. Reference
- Appendix A. Troubleshooting
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Configuring iSCSI for OpenShift VMs
For InfoScale deployments in OpenShift environments, iSCSI is the only supported storage protocol for accessing external storage. This section provides instructions for configuring iSCSI initiators directly within the virtual machines (VMs) running on OpenShift Virtualization.
Before configuring the iSCSI initiator within your VMs, make sure that the following requirements are met:
Static IP addresses are configured for all VMs that will use iSCSI connections
The underlying network infrastructure supports the iSCSI traffic requirements
The iSCSI target servers are accessible from the VM network
Network Attachment Definitions (NADs) have been created for dedicated iSCSI traffic
Node Network Configuration Policy (NNCP) has been properly configured for the underlying node network interfaces
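A Network Attachment Definition for dedicated iSCSI traffic might look like the following sketch. The attachment name (`iscsi-net`), namespace (`infoscale-vms`), and bridge name (`br-iscsi`) are assumptions for illustration; adjust them to match the interfaces configured by your NNCP. The empty `ipam` section reflects the prerequisite above that static IP addresses are configured inside the VMs themselves.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: iscsi-net            # hypothetical attachment name
  namespace: infoscale-vms   # hypothetical namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "iscsi-net",
      "type": "bridge",
      "bridge": "br-iscsi",
      "ipam": {}
    }
```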
Perform the following tasks on each VM that will connect to the iSCSI storage.
To install the required iSCSI initiator utilities
- For RHEL-based or CentOS-based VMs:
# yum install iscsi-initiator-utils device-mapper-multipath
- For SLES-based VMs:
# zypper install open-iscsi multipath-tools
- Verify the installation:
# rpm -qa | grep -i iscsi
To set the initiator name
- Check the current initiator name:
# cat /etc/iscsi/initiatorname.iscsi
- If required, modify the initiator name to ensure uniqueness across all VMs:
# echo "InitiatorName=iqn.$(date +%Y-%m).com.<example>.infoscale:$(hostname)" > /etc/iscsi/<initiator_name>.iscsi
- Enable and start the iSCSI services:
# systemctl enable iscsid
# systemctl enable iscsi
# systemctl start iscsid
# systemctl start iscsi
(Optional) To configure iSCSI CHAP authentication
- If your iSCSI target requires CHAP authentication, edit the iSCSI configuration file:
# vi /etc/iscsi/iscsid.conf
- Configure CHAP settings by uncommenting and modifying these lines:
node.session.auth.authmethod = CHAP
node.session.auth.username = username
node.session.auth.password = password
- Restart the iSCSI service:
# systemctl restart iscsid
To discover and connect to iSCSI targets
- Discover available targets on the iSCSI storage server:
# iscsiadm -m discovery -t sendtargets -p <target_IP_address>:3260
- Log in to the discovered target:
# iscsiadm -m node -T <target_IQN> -p <target_IP_address>:3260 -l
- Verify the connection:
# iscsiadm -m session -P 3
- Verify the newly available storage devices:
# ls -la /dev/disk/by-path/ip-*
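The device links under /dev/disk/by-path encode the portal address, target IQN, and LUN in their names, which is useful for confirming that a device maps to the expected target. As an illustration, the following sketch parses a hypothetical by-path name with shell parameter expansion; the portal address and IQN shown are placeholder values, not values from your environment.

```shell
#!/bin/sh
# Hypothetical by-path link name, for illustration only
path="ip-192.168.1.50:3260-iscsi-iqn.2024-01.com.example:storage.target01-lun-0"

# udev names these links ip-<portal>-iscsi-<target_IQN>-lun-<LUN>,
# so parameter expansion can pull out the individual fields:
iqn=${path#*-iscsi-}   # strip everything through "-iscsi-"
iqn=${iqn%-lun-*}      # strip the trailing "-lun-<n>"
lun=${path##*-lun-}    # keep only the LUN number
echo "IQN=$iqn LUN=$lun"
```

Running this against a real link name (for example, the last path component of an entry listed by the command above) prints the target IQN and LUN that the device belongs to.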
Finally, make sure that the iSCSI connections persist across VM reboots.
To configure persistent iSCSI connections
- Enable automatic login for the discovered targets:
# iscsiadm -m node -T <target_IQN> -p <target_IP_address>:3260 --op update -n node.startup -v automatic
- Verify the configuration:
# iscsiadm -m node -T <target_IQN> -p <target_IP_address>:3260 --op show"
For details on configuring an iSCSI initiator, refer to the Red Hat documentation.
For details on configuring access to mass storage over IP networks, refer to the SUSE Linux Enterprise Server (SLES) documentation.