InfoScale™ 9.0 Virtualization Guide - Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the InfoScale Virtualization Guide
- About InfoScale support for Linux virtualization environments
- About KVM technology
- About InfoScale deployments in OpenShift Virtualization environments
- About InfoScale deployments in OpenStack environments
- Virtualization use cases addressed by InfoScale
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with InfoScale solutions
- InfoScale configuration options for a KVM environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- DMP in the KVM host
- SF in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- SFCFSHA in the KVM host
- DMP in the KVM host and guest virtual machine
- DMP in the KVM host and SFHA in the KVM guest virtual machine
- VCS in the KVM host
- VCS in the guest
- VCS in a cluster across virtual machine guests and physical machines
- Installing InfoScale in a KVM environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server in a KVM environment architecture summary
- Virtual-to-virtual clustering and failover
- I/O fencing support for virtual-to-virtual clustering
- Virtual-to-physical clustering and failover
- Recommendations for improved resiliency of InfoScale clusters in virtualized environments
- Virtual machine availability
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- InfoScale logs for CFS configurations in OpenStack environments
- Troubleshooting virtual machine live migration
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Configuring iSCSI for OpenShift VMs
For InfoScale deployments in OpenShift environments, iSCSI is the only supported storage protocol for accessing external storage. This section provides instructions for configuring iSCSI initiators directly within the virtual machines (VMs) running on OpenShift Virtualization.
Before configuring the iSCSI initiator within your VMs, make sure that the following requirements are met:
- Static IP addresses are configured for all VMs that will use iSCSI connections
- The underlying network infrastructure supports the iSCSI traffic requirements
- The iSCSI target servers are accessible from the VM network
- Network Attachment Definitions (NADs) have been created for dedicated iSCSI traffic
- Node Network Configuration Policy (NNCP) has been properly configured for the underlying node network interfaces (a sketch of both objects follows this list)
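The exact NNCP and NAD objects depend on your cluster's network layout, so the following is only a minimal sketch of the general shape of these two resources for a dedicated iSCSI bridge. It assumes a Linux bridge (br-iscsi) on a dedicated node NIC and no IPAM, because the VMs use static IP addresses. All names, the CNI type (cnv-bridge), and the <node_NIC> and <namespace> placeholders are illustrative; follow the OpenShift Virtualization documentation for the supported procedure.
# cat <<'EOF' | oc apply -f -
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-iscsi-policy
spec:
  desiredState:
    interfaces:
      # Linux bridge on each node, dedicated to iSCSI traffic
      - name: br-iscsi
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: <node_NIC>
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: iscsi-net
  namespace: <namespace>
spec:
  # Empty "ipam" because IP addresses are configured statically inside the VMs
  config: '{
      "cniVersion": "0.3.1",
      "name": "iscsi-net",
      "type": "cnv-bridge",
      "bridge": "br-iscsi",
      "ipam": {}
    }'
EOF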
Perform the following tasks on each VM that will connect to the iSCSI storage.
To install the required iSCSI initiator utilities
- For RHEL-based or CentOS-based VMs:
# yum install iscsi-initiator-utils device-mapper-multipath
- For SLES-based VMs:
# zypper install open-iscsi multipath-tools
- Verify the installation:
# rpm -qa | grep -i iscsi
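The pattern above matches the iSCSI initiator packages. If you also want to confirm that the multipath packages from the previous step are present, a slightly broader pattern can be used (an optional convenience, not a required step):
# rpm -qa | grep -iE 'iscsi|multipath'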
To set the initiator name
- Check the current initiator name:
# cat /etc/iscsi/initiatorname.iscsi
- If required, modify the initiator name to ensure uniqueness across all VMs:
# echo "InitiatorName=iqn.$(date +%Y-%m).com.<example>.infoscale:$(hostname)" > /etc/iscsi/<initiator_name>.iscsi
- Enable and start the iSCSI services:
# systemctl enable iscsid
# systemctl enable iscsi
# systemctl start iscsid
# systemctl start iscsi
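As an optional sanity check before moving on, you can confirm that the initiator name was written as expected and that the iscsid daemon is active; the output of both commands is environment-specific:
# cat /etc/iscsi/initiatorname.iscsi
# systemctl is-active iscsid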
(Optional) To configure iSCSI CHAP authentication
- If your iSCSI target requires CHAP authentication, edit the iSCSI configuration file:
# vi /etc/iscsi/iscsid.conf
- Configure CHAP settings by uncommenting and modifying these lines:
node.session.auth.authmethod = CHAP
node.session.auth.username = username
node.session.auth.password = password
- Restart the iSCSI service:
# systemctl restart iscsid
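If the target also enforces CHAP during discovery (not only during session login), the corresponding discovery settings in /etc/iscsi/iscsid.conf can be uncommented and modified in the same way. This is a sketch; the user name and password shown are placeholders and must match the credentials configured on the target:
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = username
discovery.sendtargets.auth.password = password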
To discover and connect to iSCSI targets
- Discover available targets on the iSCSI storage server:
# iscsiadm -m discovery -t sendtargets -p <target_IP_address>:3260
- Log in to the discovered target:
# iscsiadm -m node -T <target_IQN> -p <target_IP_address>:3260 -l
- Verify the connection:
# iscsiadm -m session -P 3
- Verify the newly available storage devices:
# ls -la /dev/disk/by-path/ip-*
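If the storage presents multiple paths to the same LUN, you can also enable the multipath packages that were installed earlier so that the paths are coalesced into a single device. The following is a minimal sketch; consult the Red Hat or SUSE multipathing documentation before using it in production.
- On RHEL-based or CentOS-based VMs, generate a default /etc/multipath.conf and start multipathd:
# mpathconf --enable --with_multipathd y
- On SLES-based VMs, enable and start the multipathd service:
# systemctl enable --now multipathd
- Verify that the iSCSI devices are grouped into multipath devices:
# multipath -ll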
Finally, make sure that the iSCSI connections persist across VM reboots.
To configure persistent iSCSI connections
- Enable automatic login for the discovered targets:
# iscsiadm -m node -T <target_IQN> -p <target_IP_address>:3260 --op update -n node.startup -v automatic
- Verify the configuration:
# iscsiadm -m node -T <target_IQN> -p <target_IP_address>:3260 --op show
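One way to confirm that the startup mode was updated is to filter the node record for the node.startup attribute, which should now report automatic:
# iscsiadm -m node -T <target_IQN> -p <target_IP_address>:3260 --op show | grep node.startup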
For details on configuring an iSCSI initiator, refer to the Red Hat documentation.
For details on configuring access to mass storage over IP networks, refer to the SUSE Linux Enterprise Server (SLES) documentation.