Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Configuring storage
- Enabling disk UUID on virtual machines
- Installing Array Support Library (ASL) for VMDK on cluster nodes
- Excluding the boot disk from the Volume Manager configuration
- Creating the VMDK files
- Mapping the VMDKs to each virtual machine (VM)
- Enabling the multi-write flag
- Getting consistent names across nodes
- Creating a file system
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
- How DMP works
- Achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
- Achieving storage availability using Dynamic Multi-Pathing in the hypervisor
- Improving I/O performance with Dynamic Multi-Pathing in the hypervisor
- Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
- Improving data protection, storage optimization, data migration, and database performance
- Use cases for InfoScale product components in a VMware guest
- Protecting data with InfoScale product components in the VMware guest
- Optimizing storage with InfoScale product components in the VMware guest
- About SmartTier in the VMware environment
- About compression with InfoScale product components in the VMware guest
- About thin reclamation with InfoScale product components in the VMware guest
- About SmartMove with InfoScale product components in the VMware guest
- About SmartTier for Oracle with InfoScale product components in the VMware guest
- Migrating data with InfoScale product components in the VMware guest
- Improving database performance with InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About use cases for InfoScale Enterprise in the VMware guest
- Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
- Storage Foundation functionality and compatibility matrix
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
- Enable Password-less SSH
- Enabling TCP traffic to coordination point (CP) Server and management ports
- Configuring coordination point (CP) servers
- Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
- Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
- Configuring non-SCSI3 fencing
- Section IV. Reference
Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
It is common in VMware environments to have clusters of VMware systems that share common storage and configurations. Keeping those configurations in sync across a number of servers is not always easy, and it places a burden on administrators to ensure that the same settings are in place on all systems in the same cluster. DMP for VMware provides a mechanism to create a configuration template and apply it across systems so that the same settings are in place everywhere. After an administrator configures a single host, the DataCenter View can save those settings as a template and then apply that common template to the other hosts in the VMware DataCenter. This mechanism provides a simple, repeatable process to ensure that all hosts are identically configured, with minimal administrator intervention.
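The save-and-apply flow can be modeled roughly as follows. This is only an illustrative sketch, not the product's implementation: the actual work is done through the DataCenter View, and the setting names used here are invented for the example.

```python
def save_template(configured_host):
    """Snapshot the settings of an already-configured host as a template."""
    return dict(configured_host)

def apply_template(template, hosts):
    """Overlay the template onto every host, overriding divergent settings."""
    return {name: {**settings, **template} for name, settings in hosts.items()}

# One tuned host becomes the template for the rest of the data center.
# (Setting names below are illustrative, not actual DMP tunable names.)
tuned_host = {"io_policy": "minimum_queue", "restore_interval": 300}
template = save_template(tuned_host)

fleet = {"esx01": {"io_policy": "round_robin"}, "esx02": {}}
fleet = apply_template(template, fleet)
print(fleet["esx01"]["io_policy"])  # every host now matches the template
```

The point of the pattern is that only one host is ever tuned by hand; every other host inherits the result wholesale, which removes per-host drift.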
With native multi-pathing, administrators must often manually configure each storage LUN to select a multi-pathing policy. Even then, the best policy available is round robin, which does little to work around I/O congestion during unusual SAN workloads or contention for a shared storage path.
With DMP, as storage is discovered, it is automatically claimed by the appropriate DMP Array Support Library (ASL). The ASL is optimized to understand the storage characteristics and provide seamless visibility to storage attributes. You do not need to perform any additional management tasks to ensure that all paths to a storage device are used to their optimum level. By default, DMP uses the Minimum Queue algorithm, which routes each I/O to the LUN path with the most available capacity. Unlike the round robin policy, the Minimum Queue algorithm does not force I/O onto overloaded paths, which round robin would do, as it has no visibility into path contention.
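The difference between the two policies can be sketched as simple path selectors. This is a minimal model of the selection logic only, not DMP's actual implementation; the path names and queue depths are invented:

```python
def round_robin(paths, state):
    """Round robin: cycle through paths in order, blind to per-path load."""
    path = paths[state["next"] % len(paths)]
    state["next"] += 1
    return path

def minimum_queue(paths, queue_depth):
    """Minimum Queue: send the I/O down the path with the fewest
    outstanding requests, steering around congested paths."""
    return min(paths, key=lambda p: queue_depth[p])

# Three paths to a LUN; path_a is heavily congested.
paths = ["path_a", "path_b", "path_c"]
queue_depth = {"path_a": 12, "path_b": 3, "path_c": 7}

state = {"next": 0}
print(round_robin(paths, state))          # path_a -- still used despite congestion
print(minimum_queue(paths, queue_depth))  # path_b -- least-loaded path wins
```

Round robin keeps dispatching to the congested path on every third request, while a minimum-queue selector naturally avoids it until its backlog drains.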
As you configure or add additional storage, no additional administration is required beyond configuring the storage into a VMware datastore. If you made any optimizations on the storage enclosure, a new LUN automatically inherits the optimizations for that enclosure. You do not need to configure new storage LUNs to use any specific multi-pathing policy. DMP automatically leverages all paths available, with no need for administrator intervention.