Veritas InfoScale™ 7.4.2 Virtualization Guide - Linux
- Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the Veritas InfoScale Products Virtualization Guide
- About Veritas InfoScale Solutions support for Linux virtualization environments
- About Kernel-based Virtual Machine (KVM) technology
- About the RHEV environment
- Virtualization use cases addressed by Veritas InfoScale products
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with Veritas InfoScale Solutions
- Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- Dynamic Multi-Pathing in the KVM host
- Storage Foundation in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- Storage Foundation Cluster File System High Availability in the KVM host
- Dynamic Multi-Pathing in the KVM host and guest virtual machine
- Dynamic Multi-Pathing in the KVM host and Storage Foundation HA in the KVM guest virtual machine
- Cluster Server in the KVM host
- Cluster Server in the guest
- Cluster Server in a cluster across virtual machine guests and physical machines
- Installing Veritas InfoScale Solutions in the kernel-based virtual machine environment
- Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing a Red Hat Enterprise Virtualization environment
- Getting started with Red Hat Enterprise Virtualization (RHEV)
- Creating and launching a RHEV host
- Setting up a virtual machine in the Red Hat Enterprise Virtualization (RHEV) environment
- Veritas InfoScale Solutions configuration options for the RHEV environment
- Dynamic Multi-Pathing in a RHEV guest virtual machine
- Dynamic Multi-Pathing in the RHEV host
- Storage Foundation in the RHEV guest virtual machine
- Storage Foundation Cluster File System High Availability in the RHEV host
- Dynamic Multi-Pathing in the RHEV host and guest virtual machine
- Dynamic Multi-Pathing in the RHEV host and Storage Foundation HA in the RHEV guest virtual machine
- Cluster Server for the RHEV environment
- About setting up RHEV with Veritas InfoScale Solutions
- Installing Veritas InfoScale Solutions in the RHEV environment
- Configuring VCS to manage virtual machines
- Configuring Storage Foundation as backend storage for virtual machines
- About configuring virtual machines to attach Storage Foundation as backend storage in an RHEV environment
- Use cases for virtual machines using Storage Foundation storage
- Workflow to configure storage for virtual machines in an RHEV environment
- Prerequisites in an RHEV environment
- Installing the SF administration utility for RHEV
- Installing and configuring SFCFSHA or SFHA cluster on RHEL-H nodes
- Configuring Storage Foundation as backend storage for virtual machines
- Usage examples from the RHEV administration utility
- Mapping DMP meta-devices
- Resizing devices
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- About storage to application visibility using Veritas InfoScale Operations Manager
- About Kernel-based Virtual Machine (KVM) virtualization discovery in Veritas InfoScale Operations Manager
- About Red Hat Enterprise Virtualization (RHEV) virtualization discovery in Veritas InfoScale Operations Manager
- About Microsoft Hyper-V virtualization discovery
- Virtual machine discovery in Microsoft Hyper-V
- Storage mapping discovery in Microsoft Hyper-V
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server In a KVM Environment Architecture Summary
- VCS in host to provide the Virtual Machine high availability and ApplicationHA in guest to provide application high availability
- Virtual to Virtual clustering and failover
- I/O fencing support for Virtual to Virtual clustering
- Virtual to Physical clustering and failover
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Red Hat Enterprise Virtualization environment
- Virtual to virtual clustering in a Microsoft Hyper-V environment
- Virtual to virtual clustering in an Oracle Virtual Machine (OVM) environment
- Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
- About disaster recovery for Red Hat Enterprise Virtualization virtual machines
- DR requirements in an RHEV environment
- Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
- Configure Storage Foundation components as backend storage
- Configure VVR and VFR in VCS GCO option for replication between DR sites
- Configuring Red Hat Enterprise Virtualization (RHEV) virtual machines for disaster recovery using Cluster Server (VCS)
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with the InfoScale Enterprise product
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- Troubleshooting virtual machine live migration
- Live migration storage connectivity in a Red Hat Enterprise Virtualization (RHEV) environment
- Troubleshooting Red Hat Enterprise Virtualization (RHEV) virtual machine disaster recovery (DR)
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Virtual machine start fails due to having the wrong boot order in RHEV environments
- Virtual machine hangs in the wait_for_launch state and fails to start in RHEV environments
- VCS fails to start a virtual machine on a host in another RHEV cluster if the DROpts attribute is not set
- Virtual machine fails to detect attached network cards in RHEV environments
- The KVMGuest agent behavior is undefined if any key of the RHEVMInfo attribute is updated using the -add or -delete options of the hares -modify command
- RHEV environment: If a node on which the VM is running panics or is forcefully shut down, VCS is unable to start the VM on another node
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Mapping devices using the virtio-scsi interface
Devices can be mapped to the guest through the virtio-scsi interface, which replaces the virtio-blk device and provides the following improvements:
The ability to connect to multiple storage devices
A standard command set
Standard device naming to simplify migrations
Device pass-through
Note:
Mapping using paths is also supported with the virtio-scsi interface.
To enable SCSI passthrough and use the exported disks as bare-metal SCSI devices inside the guest, the <disk> element's device attribute must be set to "lun" instead of "disk". The following disk XML file provides an example of the device attribute's value for virtio-scsi:
<disk type='block' device='lun' sgio='unfiltered'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/disk/by-path/pci-0000:07:00.1-fc-0x5001438011393dee-lun-1'/>
    <target dev='sdd' bus='scsi'/>
    <address type='drive' controller='4' bus='0' target='0' unit='0'/>
</disk>
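Because the device attribute is set to "lun" with sgio='unfiltered', SCSI commands issued by the guest are passed through to the physical LUN, so the disk behaves like a bare-metal SCSI device inside the guest. As a minimal check, assuming the lsscsi and sg3_utils packages are installed in the guest, you can list the SCSI devices and send a SCSI INQUIRY directly to the mapped LUN. The device name sdd matches the target element in the example above and may differ on your system:
# lsscsi
# sg_inq /dev/sdd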
To map one or more devices using virtio-scsi
- Create one XML file for each SCSI controller, and enter the following content into the XML files:
<controller type='scsi' model='virtio-scsi' index='1'/>
The XML file in this example is named ctlr.xml.
- Attach the SCSI controllers to the guest:
# virsh attach-device guest1 ctlr.xml --config
- Create XML files for the disks, and enter the following content into the XML files:
<disk type='block' device='lun' sgio='unfiltered'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/disk/by-path/pci-0000:07:00.1-fc-0x5001438011393dee-lun-1'/>
    <target dev='sdd' bus='scsi'/>
    <address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>
The XML file in this example is named disk.xml.
- Attach the disk to the existing guest (a verification example follows this procedure):
# virsh attach-device guest1 disk.xml --config
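Because the attach-device commands above use the --config option, the controller and disk definitions are added to the persistent guest configuration and take effect the next time the guest is started. One way to confirm the result on the host, assuming the same guest1 name used in the steps above, is to inspect the persistent configuration and the block device list; the output should show the virtio-scsi controller and the attached disk:
# virsh dumpxml guest1 --inactive
# virsh domblklist guest1 --inactive
To remove a definition later, pass the same XML file to the virsh detach-device command with the --config option.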