InfoScale™ 9.0 Virtualization Guide - Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the InfoScale Virtualization Guide
- About InfoScale support for Linux virtualization environments
- About KVM technology
- About InfoScale deployments in OpenShift Virtualization environments
- About InfoScale deployments in OpenStack environments
- Virtualization use cases addressed by InfoScale
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with InfoScale solutions
- InfoScale configuration options for a KVM environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- DMP in the KVM host
- SF in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- SFCFSHA in the KVM host
- DMP in the KVM host and guest virtual machine
- DMP in the KVM host and SFHA in the KVM guest virtual machine
- VCS in the KVM host
- VCS in the guest
- VCS in a cluster across virtual machine guests and physical machines
- Installing InfoScale in a KVM environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server in a KVM environment architecture summary
- Virtual-to-virtual clustering and failover
- I/O fencing support for virtual-to-virtual clustering
- Virtual-to-physical clustering and failover
- Recommendations for improved resiliency of InfoScale clusters in virtualized environments
- Virtual machine availability
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- InfoScale logs for CFS configurations in OpenStack environments
- Troubleshooting virtual machine live migration
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Mapping devices using the virtio-scsi interface
Devices can be mapped to the guest through the virtio-scsi interface, replacing the virtio-blk device and providing the following improvements:
- The ability to connect to multiple storage devices
- A standard command set
- Standard device naming to simplify migrations
- Device pass-through
Note: Mapping using paths is also supported with the virtio-scsi interface.
To enable SCSI passthrough and use the exported disks as bare-metal SCSI devices inside the guest, the <disk> element's device attribute must be set to "lun" instead of "disk". The sgio='unfiltered' attribute additionally allows the guest to issue SCSI commands to the device without kernel filtering. The following disk XML file provides an example of these attribute values for virtio-scsi:
<disk type='block' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/pci-0000:07:00.1-fc-0x5001438011393dee-lun-1'/>
  <target dev='sdd' bus='scsi'/>
  <address type='drive' controller='4' bus='0' target='0' unit='0'/>
</disk>
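A quick way to confirm that the guest sees a pass-through LUN rather than an emulated disk is to run a SCSI inquiry from inside the guest. This check is not part of the original example; it assumes the guest device name /dev/sdd from the XML above and that the sg3_utils package is installed in the guest:

# sg_inq /dev/sdd

With device='lun', the inquiry returns the vendor and product identifiers of the backing physical LUN; with device='disk', SCSI commands are filtered and the guest sees only the emulated device.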
To map one or more devices using virtio-scsi
- Create one XML file for each SCSI controller, and enter the following content into the XML files:
  <controller type='scsi' model='virtio-scsi' index='1'/>
  The XML file in this example is named ctlr.xml.
- Attach the SCSI controllers to the guest:
  # virsh attach-device guest1 ctlr.xml --config
- Create XML files for the disks, and enter the following content into the XML files:
  <disk type='block' device='lun' sgio='unfiltered'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/disk/by-path/pci-0000:07:00.1-fc-0x5001438011393dee-lun-1'/>
    <target dev='sdd' bus='scsi'/>
    <address type='drive' controller='1' bus='0' target='0' unit='0'/>
  </disk>
  The XML file in this example is named disk.xml.
- Attach the disk to the existing guest:
  # virsh attach-device guest1 disk.xml --config
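To confirm that the devices were added to the guest's persistent configuration, you can inspect the domain XML on the host. This is a supplementary check, not part of the documented procedure; it assumes the guest name guest1 from the steps above:

# virsh dumpxml --inactive guest1 | grep -E "virtio-scsi|device='lun'"

Because the attach commands above use the --config option, the devices are added only to the persistent configuration and appear in a running guest after its next restart; add the --live option as well to hot-plug them into a running guest.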