InfoScale™ 9.0 Virtualization Guide - Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the InfoScale Virtualization Guide
- About InfoScale support for Linux virtualization environments
- About KVM technology
- About InfoScale deployments in OpenShift Virtualization environments
- About InfoScale deployments in OpenStack environments
- Virtualization use cases addressed by InfoScale
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with InfoScale solutions
- InfoScale configuration options for a KVM environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- DMP in the KVM host
- SF in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- SFCFSHA in the KVM host
- DMP in the KVM host and guest virtual machine
- DMP in the KVM host and SFHA in the KVM guest virtual machine
- VCS in the KVM host
- VCS in the guest
- VCS in a cluster across virtual machine guests and physical machines
- Installing InfoScale in a KVM environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server in a KVM environment architecture summary
- Virtual-to-virtual clustering and failover
- I/O fencing support for virtual-to-virtual clustering
- Virtual-to-physical clustering and failover
- Recommendations for improved resiliency of InfoScale clusters in virtualized environments
- Virtual machine availability
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- InfoScale logs for CFS configurations in OpenStack environments
- Troubleshooting virtual machine live migration
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Host network configuration
In RHEL 7 and earlier, the libvirtd service creates a default bridge, virbr0, which provides a NAT'ed private network.
In RHEL 8 and later, this bridge is still created by libvirt, but it is managed through the modular libvirt daemons (virtnetworkd handles the virtual networks, while virtqemud manages the QEMU/KVM guests) instead of the monolithic libvirtd process.
Private IP addresses from the 192.168.122.0/24 network are allocated to the guests that use virbr0 for networking. If the guests are required to communicate on the public network of the host machines, then a bridge must be configured on the host.
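Before you change the configuration, you can confirm that the default NAT'ed network and its virbr0 bridge are present by querying libvirt. The following commands are a quick check and assume the network uses the standard name default that libvirt creates:
# virsh net-list --all
# virsh net-info default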
Perform the following steps to create the bridge:
Create a new interface file with the name ifcfg-br0 in the /etc/sysconfig/network-scripts/ directory, where all the other interface configuration files are present. Its contents are as follows:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
Add the physical interface to the bridge using the following command.
# brctl addif br0 eth0
This adds the physical interface eth0, which the guests share, to the br0 bridge created in the previous step.
Verify that your eth0 was added to the br0 bridge using the brctl show command.
# brctl show
The output must look similar to the following:
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
br0             8000.0019b97ec863       yes             eth0
The eth0 network configuration must be changed. The ifcfg-eth0 script is already present.
Edit the file and add a line BRIDGE=br0, so that the contents of the configuration file look like the following example:
DEVICE=eth0
BRIDGE=br0
BOOTPROTO=none
HWADDR=00:19:b9:7e:c8:63
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes
NM_CONTROLLED=no
Restart the network services to bring all the network configuration changes into effect.
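For example, on a RHEL 7 host where the legacy network service manages the interfaces (as the NM_CONTROLLED=no setting above implies), the following command applies the changes. On hosts where NetworkManager controls the interfaces, reload and reactivate the affected connections with nmcli instead; the exact commands depend on your release and on which service manages networking.
# systemctl restart network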