Veritas InfoScale™ 7.4.2 Virtualization Guide - Linux
- Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the Veritas InfoScale Products Virtualization Guide
- About Veritas InfoScale Solutions support for Linux virtualization environments
- About Kernel-based Virtual Machine (KVM) technology
- About the RHEV environment
- Virtualization use cases addressed by Veritas InfoScale products
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with Veritas InfoScale Solutions
- Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- Dynamic Multi-Pathing in the KVM host
- Storage Foundation in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- Storage Foundation Cluster File System High Availability in the KVM host
- Dynamic Multi-Pathing in the KVM host and guest virtual machine
- Dynamic Multi-Pathing in the KVM host and Storage Foundation HA in the KVM guest virtual machine
- Cluster Server in the KVM host
- Cluster Server in the guest
- Cluster Server in a cluster across virtual machine guests and physical machines
- Installing Veritas InfoScale Solutions in the kernel-based virtual machine environment
- Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing a Red Hat Enterprise Virtualization environment
- Getting started with Red Hat Enterprise Virtualization (RHEV)
- Creating and launching a RHEV host
- Setting up a virtual machine in the Red Hat Enterprise Virtualization (RHEV) environment
- Veritas InfoScale Solutions configuration options for the RHEV environment
- Dynamic Multi-Pathing in a RHEV guest virtual machine
- Dynamic Multi-Pathing in the RHEV host
- Storage Foundation in the RHEV guest virtual machine
- Storage Foundation Cluster File System High Availability in the RHEV host
- Dynamic Multi-Pathing in the RHEV host and guest virtual machine
- Dynamic Multi-Pathing in the RHEV host and Storage Foundation HA in the RHEV guest virtual machine
- Cluster Server for the RHEV environment
- About setting up RHEV with Veritas InfoScale Solutions
- Installing Veritas InfoScale Solutions in the RHEV environment
- Configuring VCS to manage virtual machines
- Configuring Storage Foundation as backend storage for virtual machines
- About configuring virtual machines to attach Storage Foundation as backend storage in an RHEV environment
- Use cases for virtual machines using Storage Foundation storage
- Workflow to configure storage for virtual machines in an RHEV environment
- Prerequisites in an RHEV environment
- Installing the SF administration utility for RHEV
- Installing and configuring SFCFSHA or SFHA cluster on RHEL-H nodes
- Configuring Storage Foundation as backend storage for virtual machines
- Usage examples from the RHEV administration utility
- Mapping DMP meta-devices
- Resizing devices
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- About storage to application visibility using Veritas InfoScale Operations Manager
- About Kernel-based Virtual Machine (KVM) virtualization discovery in Veritas InfoScale Operations Manager
- About Red Hat Enterprise Virtualization (RHEV) virtualization discovery in Veritas InfoScale Operations Manager
- About Microsoft Hyper-V virtualization discovery
- Virtual machine discovery in Microsoft Hyper-V
- Storage mapping discovery in Microsoft Hyper-V
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server in a KVM environment: architecture summary
- VCS in the host for virtual machine high availability and ApplicationHA in the guest for application high availability
- Virtual-to-virtual clustering and failover
- I/O fencing support for virtual-to-virtual clustering
- Virtual-to-physical clustering and failover
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual-to-virtual clustering in a Red Hat Enterprise Virtualization environment
- Virtual-to-virtual clustering in a Microsoft Hyper-V environment
- Virtual-to-virtual clustering in an Oracle Virtual Machine (OVM) environment
- Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
- About disaster recovery for Red Hat Enterprise Virtualization virtual machines
- DR requirements in an RHEV environment
- Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
- Configure Storage Foundation components as backend storage
- Configure VVR and VFR in VCS GCO option for replication between DR sites
- Configuring Red Hat Enterprise Virtualization (RHEV) virtual machines for disaster recovery using Cluster Server (VCS)
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise product
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- Troubleshooting virtual machine live migration
- Live migration storage connectivity in a Red Hat Enterprise Virtualization (RHEV) environment
- Troubleshooting Red Hat Enterprise Virtualization (RHEV) virtual machine disaster recovery (DR)
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Virtual machine start fails due to an incorrect boot order in RHEV environments
- Virtual machine hangs in the wait_for_launch state and fails to start in RHEV environments
- VCS fails to start a virtual machine on a host in another RHEV cluster if the DROpts attribute is not set
- Virtual machine fails to detect attached network cards in RHEV environments
- The KVMGuest agent behavior is undefined if any key of the RHEVMInfo attribute is updated using the -add or -delete options of the hares -modify command
- RHEV environment: If a node on which the VM is running panics or is forcibly shut down, VCS is unable to start the VM on another node
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Flexible Storage Sharing use cases
The following list includes several use cases for which you would want to use the FSS feature:
- Use of local storage in current use cases: The FSS feature supports all current use cases of the Storage Foundation and High Availability (SFHA) Solutions stack without requiring SAN-based storage.
- Off-host processing:
  - Data migration
  - Backup/snapshots: An additional node can take a backup by joining the cluster and reading from volumes or snapshots that are hosted on the DAS or shared storage, which is connected to one or more nodes of the cluster but not to the node taking the backup.
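The off-host backup flow above can be sketched with standard VxVM commands. This is a minimal, hedged example; the disk group `mydg`, volume `datavol`, snapshot volume `snapvol`, and mount points are illustrative names, not taken from this guide, and exact snapshot options vary by release (see the Storage Foundation administrator's guide for the full procedure):

```shell
# Prepare the data volume for instant snapshot operations (adds a DCO log).
vxsnap -g mydg prepare datavol

# Take a full-sized instant snapshot of the data volume.
vxsnap -g mydg make source=datavol/snapvol=snapvol

# On the backup node (a cluster member without direct connectivity to the
# storage), mount the snapshot read-only and run the backup from it.
mount -t vxfs -o ro /dev/vx/dsk/mydg/snapvol /backup
tar -czf /net/backups/datavol.tar.gz -C /backup .
```

Because the snapshot is read on the backup node over FSS network sharing, the nodes that host the DAS storage are not loaded by the backup I/O.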
- DAS SSD benefits leveraged with existing Storage Foundation and High Availability Solutions features
- FSS with SmartIO for file system caching: If the nodes in the cluster have internal SSDs as well as HDDs, the HDDs can be shared over the network using FSS. You can use SmartIO to set up a read cache or a writeback cache using the SSDs. The read cache can service volumes created using the network-shared HDDs.
- FSS with SmartIO for remote caching: FSS works with SmartIO to provide caching services for nodes that do not have local SSD devices. In this scenario, FSS exports SSDs from the nodes that have a local SSD and creates a pool of the exported SSDs in the cluster. From this shared pool, a cache area is created for each node in the cluster. Each cache area is accessible only to the node for which it is created, and can be of type VxVM or VxFS. The cluster must be a CVM cluster. The volume layout of a cache area on remote SSDs follows a simple stripe layout, not the default FSS allocation policy of mirroring across hosts. If the caching operation degrades performance on a particular volume, then caching is disabled for that volume. The volumes that are used to create cache areas must be created on disk groups with disk group version 200 or later. However, data volumes that are created on disk groups with disk group version 190 or later can access a cache area created on FSS-exported devices. Note: CFS writeback caching is not supported for cache areas created on remote SSDs. For more information, see the Veritas InfoScale SmartIO for Solid State Drives Solutions Guide.
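As a rough illustration of the remote-caching setup described above: the SSD owner exports its device over the network, and a node without local SSDs builds its cache area from the shared pool. Device, disk group, volume, and cache-area names below are assumptions, and exact `sfcache` option spellings vary by release (consult the SmartIO Solutions Guide before use):

```shell
# On the node that owns a local SSD, export it for Flexible Storage Sharing.
vxdisk export ssd0_0

# On a node without a local SSD, create a VxVM-type cache area from the
# network-shared SSD; remote cache areas use a simple stripe layout.
sfcache create -t VxVM 20g ssd0_0

# Verify the cache area, then enable caching for a data volume.
sfcache list
sfcache enable -g mydg datavol
```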
- Campus cluster configuration: Campus clusters can be set up without the need for Fibre Channel (FC) SAN connectivity between sites.
- FSS in cloud environments: The Flexible Storage Sharing (FSS) technology allows you to overcome the limitations of shared-nothing storage in cloud environments. FSS enables you to create shared-nothing clusters by sharing cloud block storage over the network. For details, see the Veritas InfoScale Solutions in Cloud Environments document.
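The cloud scenario above boils down to exporting each node's locally attached block storage and building a shared disk group over the network. A hedged sketch, assuming a running CVM cluster and illustrative disk names `disk0_0` and `disk1_0` (commands are standard VxVM; names and sizes are not from this guide):

```shell
# On each node, export its locally attached cloud block device over the
# cluster interconnect for Flexible Storage Sharing.
vxdisk export disk0_0        # run on node 1
vxdisk export disk1_0        # run on node 2

# On the CVM master, create a shared disk group from the exported disks.
vxdg -s init cloudfssdg disk0_0 disk1_0

# Create a volume with two mirrors; FSS allocation places the mirrors on
# different hosts, so data survives the loss of either node's storage.
vxassist -g cloudfssdg make datavol 100g nmirror=2

# Put a VxFS file system on the volume for clustered use.
mkfs -t vxfs /dev/vx/rdsk/cloudfssdg/datavol
```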