Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Configuring storage
- Enabling disk UUID on virtual machines
- Installing Array Support Library (ASL) for VMDK on cluster nodes
- Excluding the boot disk from the Volume Manager configuration
- Creating the VMDK files
- Mapping the VMDKs to each virtual machine (VM)
- Enabling the multi-write flag
- Getting consistent names across nodes
- Creating a file system
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
- How DMP works
- Achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
- Achieving storage availability using Dynamic Multi-Pathing in the hypervisor
- Improving I/O performance with Dynamic Multi-Pathing in the hypervisor
- Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
- Improving data protection, storage optimization, data migration, and database performance
- Use cases for InfoScale product components in a VMware guest
- Protecting data with InfoScale product components in the VMware guest
- Optimizing storage with InfoScale product components in the VMware guest
- About SmartTier in the VMware environment
- About compression with InfoScale product components in the VMware guest
- About thin reclamation with InfoScale product components in the VMware guest
- About SmartMove with InfoScale product components in the VMware guest
- About SmartTier for Oracle with InfoScale product components in the VMware guest
- Migrating data with InfoScale product components in the VMware guest
- Improving database performance with InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About use cases for InfoScale Enterprise in the VMware guest
- Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
- Storage Foundation functionality and compatibility matrix
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
- Enabling password-less SSH
- Enabling TCP traffic to coordination point (CP) Server and management ports
- Configuring coordination point (CP) servers
- Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
- Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
- Configuring non-SCSI3 fencing
- Section IV. Reference
When to use Raw Device Mapping and Storage Foundation
Raw Device Mapping (RDM) enables a virtual machine to access storage directly rather than going through VMFS. RDM is configured per physical storage device: a disk or LUN is assigned to one or more virtual machines, and it is not possible to assign only part of a physical storage device to a virtual machine. Several storage types (local SCSI disks, iSCSI disks, and Fibre Channel disks) can be used with raw device mapping; Veritas Volume Manager supports all three.
Note:
The Storage Foundation products work well with iSCSI disks mapped directly to the virtual machines.
VMware provides two different modes for raw device mapping:
- Logical (virtual compatibility) mode offers the same functionality and compatibility as a virtual disk with respect to VMware ESXi features.
- Physical mode is the closest to storage access in a non-virtual environment. Only one SCSI command, REPORT_LUNS, is virtualized, because it is required to enable vMotion and a few other features in VMware.
With Storage Foundation, physical mode is recommended as it enables maximum functionality of Veritas Volume Manager in a VMware environment.
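As a minimal sketch, an RDM mapping file can be created on the ESXi host with vmkfstools; the `-z` option creates a physical (pass-through) mode mapping and `-r` a logical (virtual compatibility) mode mapping. The NAA device identifier and datastore paths below are placeholders for your environment.

```shell
# On the ESXi host: list the raw LUNs available for mapping
ls /vmfs/devices/disks/

# Physical (pass-through) mode RDM -- recommended with Storage Foundation.
# The naa.* identifier and datastore path are placeholders.
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
    /vmfs/volumes/datastore1/rdms/data01-rdmp.vmdk

# Logical (virtual compatibility) mode RDM -- behaves like a virtual disk
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
    /vmfs/volumes/datastore1/rdms/data01-rdm.vmdk
```

The resulting .vmdk mapping file is then attached to the virtual machine like any other disk, while I/O is issued directly to the underlying LUN.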
The mode affects the functionality and behavior of Storage Foundation, so it is important to choose the mode that matches the desired functionality. The benefit of each storage access method depends on the workload in the virtual machine. Because virtual environments are easy to set up, it is tempting to start with one way of deploying storage without considering the long-term implications.
For applications with little or no storage need, raw device mapping is unnecessary overhead and is not recommended. Also, if your environment depends on VMware snapshots, Raw Device Mapping in physical mode is not an option, because VMware does not support it.
Raw Device Mapping is a great fit for:
- Applications with large storage needs
- Applications that need predictable and measurable performance
- Multi-node clusters using disk quorums
- Applications with storage that is currently managed by Storage Foundation but is moving into a virtual environment
- Applications that require direct access to storage, such as storage management applications
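Once a physical mode RDM is presented to a virtual machine, one way to confirm that Veritas Volume Manager sees the LUN directly, rather than as a generic virtual disk, is to inspect the disk and enclosure listings from inside the guest. This is a sketch; device and enclosure names vary by array.

```shell
# Inside the guest: rescan for new devices and list disks
# as seen by Veritas Volume Manager
vxdisk scandisks
vxdisk -o alldgs list

# With a physical mode RDM, the LUN typically appears under its
# array enclosure name rather than as a VMware virtual disk
vxdmpadm listenclosure all
```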