Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Configuring storage
- Enabling disk UUID on virtual machines
- Installing Array Support Library (ASL) for VMDK on cluster nodes
- Excluding the boot disk from the Volume Manager configuration
- Creating the VMDK files
- Mapping the VMDKs to each virtual machine (VM)
- Enabling the multi-write flag
- Getting consistent names across nodes
- Creating a file system
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
- How DMP works
- Achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
- Achieving storage availability using Dynamic Multi-Pathing in the hypervisor
- Improving I/O performance with Dynamic Multi-Pathing in the hypervisor
- Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
- Improving data protection, storage optimization, data migration, and database performance
- Use cases for InfoScale product components in a VMware guest
- Protecting data with InfoScale product components in the VMware guest
- Optimizing storage with InfoScale product components in the VMware guest
- About SmartTier in the VMware environment
- About compression with InfoScale product components in the VMware guest
- About thin reclamation with InfoScale product components in the VMware guest
- About SmartMove with InfoScale product components in the VMware guest
- About SmartTier for Oracle with InfoScale product components in the VMware guest
- Migrating data with InfoScale product components in the VMware guest
- Improving database performance with InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About use cases for InfoScale Enterprise in the VMware guest
- Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
- Storage Foundation functionality and compatibility matrix
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
- Enable Password-less SSH
- Enabling TCP traffic to coordination point (CP) Server and management ports
- Configuring coordination point (CP) servers
- Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
- Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
- Configuring non-SCSI3 fencing
- Section IV. Reference
About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
This sample deployment illustrates how to install and configure Storage Foundation Cluster File System High Availability (SFCFSHA) in a VMware virtual server environment, using virtual disk (VMDK) files on the VMware file system (VMFS) as the storage subsystem.
The information provided here does not replace or substitute for the SFCFSHA documentation or the VMware documentation. It is a deployment illustration that complements the information found in those documents.
The following product versions and architecture are used in this example deployment:
Red Hat Enterprise Linux (RHEL) Server 6.2
Storage Foundation Cluster File System High Availability 7.4.1
ESXi 5.1
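Before you begin, it can help to confirm the guest operating system release and the installed Veritas packages on each virtual machine, and the ESXi build on each host. The following commands are a minimal verification sketch; package names and output vary by release.

On each cluster node:
    # cat /etc/redhat-release
    # rpm -qa | grep VRTS

On each ESXi host (from the ESXi shell):
    # vmware -v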
A four-node virtual machine cluster will be configured on two VMware ESXi servers. Shared storage between the two ESXi servers has been set up using Fibre Channel. The Cluster File System will exist across four virtual machines: cfs01, cfs02, cfs03, and cfs04. Three Coordination Point (CP) servers will be used: cps1, cps2, and cps3 (the third placed on a different ESXi server). For storage, five data stores will be used, with one shared VMDK file placed in each data store.
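For reference, after SFCFSHA is configured, the four nodes map to LLT node IDs in the /etc/llthosts file on every cluster node. The node IDs shown below are illustrative; your configuration may assign them differently.

    0 cfs01
    1 cfs02
    2 cfs03
    3 cfs04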
Two private networks, PRIV1 and PRIV2, will be used for the cluster heartbeat. Virtual switch vSwitch2 also has the VMkernel Port for vMotion enabled. vSwitch0 is used for management traffic and the public IP network.
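To illustrate how the two private networks map to LLT heartbeat links, the /etc/llttab file on cfs01 might look like the following. The guest device names (eth1, eth2) and the cluster ID are assumptions for this example; use the interfaces that connect to PRIV1 and PRIV2 in your environment.

    set-node cfs01
    set-cluster 1
    link eth1 eth1 - ether - -
    link eth2 eth2 - ether - -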
Some blade servers have a two-network limit. If this is the case, configure one network for heartbeats and the other as a heartbeat backup (low-priority setting).
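In that configuration, the dedicated network carries the heartbeat as a normal LLT link and the shared (public) network is declared as a low-priority link. For example, assuming eth0 is the public interface (an assumption for this sketch), the link lines in /etc/llttab would change to:

    link eth1 eth1 - ether - -
    link-lowpri eth0 eth0 - ether - -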