InfoScale™ 9.0 Virtualization Guide - Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the InfoScale Virtualization Guide
- About InfoScale support for Linux virtualization environments
- About KVM technology
- About InfoScale deployments in OpenShift Virtualization environments
- About InfoScale deployments in OpenStack environments
- Virtualization use cases addressed by InfoScale
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with InfoScale solutions
- InfoScale configuration options for a KVM environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- DMP in the KVM host
- SF in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- SFCFSHA in the KVM host
- DMP in the KVM host and guest virtual machine
- DMP in the KVM host and SFHA in the KVM guest virtual machine
- VCS in the KVM host
- VCS in the guest
- VCS in a cluster across virtual machine guests and physical machines
- Installing InfoScale in a KVM environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server in a KVM environment architecture summary
- Virtual-to-virtual clustering and failover
- I/O fencing support for virtual-to-virtual clustering
- Virtual-to-physical clustering and failover
- Recommendations for improved resiliency of InfoScale clusters in virtualized environments
- Virtual machine availability
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- InfoScale logs for CFS configurations in OpenStack environments
- Troubleshooting virtual machine live migration
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Appendix B. Sample configurations
- Appendix C. Where to find more information
- Appendix A. Troubleshooting
Troubleshooting virtual machine live migration
A VCS cluster is formed between virtual machines (VMs), and one of the VMs is migrated from one host to another. If the migrating VM takes more than 16 seconds to come up on the target node, one of the VMs panics; 16 seconds is the default value of the LLT peerinact parameter. You can increase the peerinact value to allow sufficient time for the VM to migrate, and you can adjust this time based on the environment in which you initiate the VM migration.
To avoid false failovers during virtual machine migration, you can change the peerinact value using either of the following methods:
- Set the peerinact value dynamically using the lltconfig command:
# lltconfig -T peerinact:value
- Set the peerinact value in the /etc/llttab file to make the value persistent across reboots.
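As a quick check before you change anything, you can query the timer values currently in effect on a node; this is the same command that is used in the verification step of the procedure below:
# lltconfig -T query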
To set the peerinact value dynamically using the lltconfig command
1. Determine how long the migrating node is unresponsive in your environment.
2. If that time is less than the default LLT peer inactive timeout of 16 seconds, VCS operates normally. If not, increase the peer inactive timeout to an appropriate value on all the nodes in the cluster before beginning the migration.
For example, to set the LLT peerinact timeout to 20 seconds, use the following command:
# lltconfig -T peerinact:2000
The peerinact value is specified in units of 0.01 seconds; 2000 therefore corresponds to 20 seconds.
3. Verify that peerinact has been set to 20 seconds:
# lltconfig -T query
Current LLT timer values (.01 sec units):
 heartbeat   = 50
 heartbeatlo = 100
 peertrouble = 200
 peerinact   = 2000
 oos         = 10
 retrans     = 10
 service     = 100
 arp         = 30000
 arpreq      = 3000
Current LLT flow control values (in packets):
 lowwater    = 40
4. Repeat steps 2 to 3 on the other cluster nodes.
5. Reset the value back to the default peerinact value using the lltconfig command after the migration is complete.
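If the cluster has more than a few nodes, the same change can be pushed from a single system with a small shell loop. The following is a minimal sketch only: it assumes passwordless root SSH between the nodes and uses sys1 and sys2 as example node names; 1600 (16 seconds) is the default value to restore after the migration completes.
# for node in sys1 sys2; do ssh $node "lltconfig -T peerinact:2000"; done
# for node in sys1 sys2; do ssh $node "lltconfig -T query | grep peerinact"; done
After the migration completes, run the first loop again with peerinact:1600 to restore the default.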
To make the LLT peerinact value persistent across reboots:
- Append the following line at the end of the /etc/llttab file to set the LLT peerinact value to 20 seconds:
set-timer peerinact:2000
After appending the above line, /etc/llttab should appear similar to the following:
# cat /etc/llttab
set-node sys1
set-cluster 1234
link eth2 eth-00:15:17:48:b5:80 - ether - -
link eth3 eth-00:15:17:48:b5:81 - ether - -
set-timer peerinact:2000
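If you prefer to make the change from the shell rather than with an editor, a minimal sketch (run as root on each node; the 20-second value is only an example) is:
# echo "set-timer peerinact:2000" >> /etc/llttab
# grep set-timer /etc/llttab
set-timer peerinact:2000
The persistent value takes effect the next time LLT is started on that node, for example at reboot.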
For more information on VCS commands, see the Cluster Server Administrator's Guide.
For attributes related to migration, see the Cluster Server Bundled Agents Reference Guide.