InfoScale™ 9.0 Virtualization Guide - Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the InfoScale Virtualization Guide
- About InfoScale support for Linux virtualization environments
- About KVM technology
- About InfoScale deployments in OpenShift Virtualization environments
- About InfoScale deployments in OpenStack environments
- Virtualization use cases addressed by InfoScale
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with InfoScale solutions
- InfoScale configuration options for a KVM environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- DMP in the KVM host
- SF in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- SFCFSHA in the KVM host
- DMP in the KVM host and guest virtual machine
- DMP in the KVM host and SFHA in the KVM guest virtual machine
- VCS in the KVM host
- VCS in the guest
- VCS in a cluster across virtual machine guests and physical machines
- Installing InfoScale in a KVM environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server in a KVM environment architecture summary
- Virtual-to-virtual clustering and failover
- I/O fencing support for virtual-to-virtual clustering
- Virtual-to-physical clustering and failover
- Recommendations for improved resiliency of InfoScale clusters in virtualized environments
- Virtual machine availability
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- InfoScale logs for CFS configurations in OpenStack environments
- Troubleshooting virtual machine live migration
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Appendix B. Sample configurations
- Appendix C. Where to find more information
About InfoScale deployments in OpenShift Virtualization environments
InfoScale supports deployment within kernel-based virtual machine (KVM) environments. Because OpenShift Virtualization runs its virtual machines on KVM, that support extends to OpenShift Virtualization as well.
iSCSI requirement: In OpenShift Virtualization environments, iSCSI is the only storage protocol that InfoScale supports for accessing external storage.
Static IP mandate: For iSCSI connections to function reliably, the virtual machines (VMs) must be configured with static IP addresses. Dynamic IP addresses can change during an operation, which disrupts iSCSI connections and can cause data corruption or service outages.
The InfoScale Cluster Server (VCS) component relies on Low Latency Transport (LLT) for inter-node heartbeating and for communication within the cluster.
LLT network requirement: The stability of the VCS cluster is critically dependent on reliable, low-latency network links between the cluster nodes (VMs).
Static IP mandate: Network interfaces within the VMs dedicated to LLT traffic must be configured with static IP addresses. Using dynamic IPs for LLT links is not supported because it compromises cluster integrity.
Only jumbo frames are supported for LLT communication when InfoScale is deployed in OpenShift Virtualization environments.
When running within OpenShift Virtualization VMs, the following special considerations apply:
OVS overhead: Within OpenShift Virtualization VMs, approximately 10 bytes are used by the Open vSwitch (OVS) infrastructure for each packet. Thus, when the underlying network is configured with the standard 1500-byte MTU, the effective MTU inside VMs is reduced to 1490 bytes.
MTU considerations: Due to the OVS overhead, enabling jumbo frames on the underlying physical network is essential for optimal LLT performance within VMs.
Jumbo frame configuration: If you implement jumbo frames, you must enable them at the following levels:
- Physical switch infrastructure
- Node network interfaces
- OVS bridges
- VM network interfaces
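For example, the node-level part of this configuration can be expressed through a Node Network Configuration Policy (NNCP, one of the mechanisms described below). The following is only a minimal sketch: the policy name, the interface name ens5, the bridge name br-llt1, and the address are hypothetical placeholders, and your environment may use an OVS bridge instead of a Linux bridge.

```yaml
# Hypothetical sketch: prepare a node NIC and bridge for dedicated LLT traffic
# with jumbo frames enabled. All names and addresses are placeholders.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: llt1-bridge-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: ens5                 # node NIC reserved for LLT traffic (placeholder)
        type: ethernet
        state: up
        mtu: 9000                  # jumbo frames on the node interface
      - name: br-llt1              # bridge that the VMs attach to (placeholder)
        type: linux-bridge
        state: up
        mtu: 9000                  # jumbo frames on the bridge
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.51.10    # optional static node-level address (placeholder)
              prefix-length: 24
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens5
```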
OpenShift Virtualization uses the following mechanisms to provide additional network interfaces to VMs and to facilitate static IP configurations:
Node Network Configuration Policy (NNCP): For both iSCSI and LLT networks, NNCP must be used to configure the underlying node network interfaces with appropriate settings, including static IP assignments at the node level.
Network Attachment Definition (NAD): Secondary network interfaces for VMs, such as those for dedicated iSCSI or LLT traffic, must be provisioned by creating NADs that reference the configurations established by NNCP. A sample definition is sketched below.
VM static IP configuration: After the NADs are attached to VM definitions, the static IP addresses must be configured within the VMs. For both iSCSI connections and LLT communications, these IPs must remain fixed throughout the VM lifecycle to maintain storage connectivity and cluster integrity. An abbreviated example appears at the end of this topic.
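A NAD that exposes the bridge from the NNCP sketch above might look as follows. This is a hedged sketch rather than a prescribed configuration: the name llt1-net, the namespace, and the cnv-bridge CNI type are assumptions that depend on how the secondary network is provided in your cluster, and the MTU must match the jumbo-frame settings on the nodes.

```yaml
# Hypothetical sketch: NAD for a dedicated LLT network backed by the br-llt1 bridge.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: llt1-net                   # placeholder name referenced by the VM definition
  namespace: infoscale-vms         # placeholder namespace where the VMs run
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "llt1-net",
      "type": "cnv-bridge",
      "bridge": "br-llt1",
      "mtu": 9000
    }
```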
When implementing this configuration, remember that both iSCSI and LLT networks require careful planning to ensure that IP addresses remain consistent. Any changes to these addresses can disrupt storage access or cluster communications, potentially leading to data unavailability or cluster split-brain scenarios.
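One way to keep the addresses fixed for the life of a VM is to set them in the VM definition itself, for example through cloud-init network data. The abbreviated sketch below shows only the network-related fields; the VM name, the guest interface name eth1, and the address 192.168.51.11/24 are hypothetical, and the same pattern applies to the interface that carries iSCSI traffic.

```yaml
# Hypothetical, abbreviated VirtualMachine fragment: attaches the llt1-net NAD and
# assigns a static guest IP through cloud-init network data (netplan version 2 format).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: infoscale-node1            # placeholder VM name
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
            - name: llt1           # secondary interface dedicated to LLT traffic
              bridge: {}
      networks:
        - name: default
          pod: {}
        - name: llt1
          multus:
            networkName: llt1-net  # the NAD sketched earlier
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1:                     # guest interface backed by llt1 (placeholder)
                  mtu: 9000
                  addresses:
                    - 192.168.51.11/24    # static address; must not change during the VM lifecycle
```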