Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi
Last Published: 2019-02-07
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: Linux, VMware ESX
Creating a file system
The next step is to configure a common mount point across all the nodes, mounted on the same storage. To simplify the examples given here, a single disk group containing all the disks and a single volume is created. Depending on the application requirements, the number of disk groups and volumes may vary.
The boot disk has been excluded from the Volume Manager configuration, so the five available disks (vmdk0_1, vmdk0_2, vmdk0_3, vmdk0_4, and vmdk0_5) are the ones added to the disk group. These are the steps; example commands for each step are sketched after the procedure:
To create a clustered file system
- Initialize the disks.
- Create a new disk group and add the disks.
- Verify the configuration. Note the DISK and GROUP information.
- Create a striped volume with the five disks available.
- Create a file system.
- If you plan to configure a clustered file system environment, add the newly created file system to the cluster configuration. Because it is mounted by all the nodes at the same time, add it as a cluster resource using the cfsmntadm and cfsmount commands, as shown in the sketch after this procedure.
- In the case of a clustered file system, verify that the new mount point is available on all the nodes by running the cfscluster status command or by checking with df on each node.
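The following is a minimal command sketch for steps 1 through 5. The disk names come from the earlier mapping step; the disk group name (vmdk_dg), volume name (vol1), and volume size (20g) are illustrative placeholders, and the shared (-s) disk group assumes CVM is running and the command is issued on the master node. Adapt these values to your configuration.

```
# Initialize the five VMDK disks for Volume Manager use
/etc/vx/bin/vxdisksetup -i vmdk0_1
/etc/vx/bin/vxdisksetup -i vmdk0_2
/etc/vx/bin/vxdisksetup -i vmdk0_3
/etc/vx/bin/vxdisksetup -i vmdk0_4
/etc/vx/bin/vxdisksetup -i vmdk0_5

# Create a shared disk group containing the five disks
# (vmdk_dg is an example name; run on the CVM master node)
vxdg -s init vmdk_dg vmdk0_1 vmdk0_2 vmdk0_3 vmdk0_4 vmdk0_5

# Verify the configuration; note the DISK and GROUP columns
vxdisk list

# Create a striped volume across the five disks (size is an example)
vxassist -g vmdk_dg make vol1 20g layout=stripe ncol=5

# Create a VxFS file system on the new volume
mkfs -t vxfs /dev/vx/rdsk/vmdk_dg/vol1
```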
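For the clustered file system case in steps 6 and 7, the sketch below assumes the same disk group and volume names and an example mount point of /cfsmount; the mount options passed through all= are likewise illustrative.

```
# Add the file system to the cluster configuration so that
# every node mounts it at /cfsmount (example mount point)
cfsmntadm add vmdk_dg vol1 /cfsmount all=cluster

# Mount the cluster file system on all nodes
cfsmount /cfsmount

# Verify that the mount point is available on every node
cfscluster status
df -h /cfsmount
```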