Veritas InfoScale™ Virtualization Guide - Linux on ESXi
Last Published: 2019-02-26
Product(s): InfoScale & Storage Foundation (7.4)
Platform: VMware ESX
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding storage configuration
- Configuring storage
- Enabling disk UUID on virtual machines
- Installing Array Support Library (ASL) for VMDK on cluster nodes
- Excluding the boot disk from the Volume Manager configuration
- Creating the VMDK files
- Mapping the VMDKs to each virtual machine (VM)
- Enabling the multi-write flag
- Getting consistent names across nodes
- Creating a file system
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
- How DMP works
- Achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
- Achieving storage availability using Dynamic Multi-Pathing in the hypervisor
- Improving I/O performance with Dynamic Multi-Pathing in the hypervisor
- Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
- Improving data protection, storage optimization, data migration, and database performance
- Use cases for Veritas InfoScale product components in a VMware guest
- Protecting data with Veritas InfoScale product components in the VMware guest
- Optimizing storage with Veritas InfoScale product components in the VMware guest
- About SmartTier in the VMware environment
- About compression with Veritas InfoScale product components in the VMware guest
- About thin reclamation with Veritas InfoScale product components in the VMware guest
- About SmartMove with Veritas InfoScale product components in the VMware guest
- About SmartTier for Oracle with Veritas InfoScale product components in the VMware guest
- Migrating data with Veritas InfoScale product components in the VMware guest
- Improving database performance with Veritas InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About use cases for InfoScale Enterprise in the VMware guest
- Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
- Storage Foundation functionality and compatibility matrix
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
- Enabling password-less SSH
- Enabling TCP traffic to coordination point (CP) Server and management ports
- Configuring coordination point (CP) servers
- Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
- Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
- Configuring non-SCSI3 fencing
- Section IV. Reference
Configuring a Coordination Point server service group
Even in a single-node cluster, a virtual IP address (VIP) is used. This makes it possible to create a VCS resource that controls the availability of the VIP. In the example configuration, each CP server is assigned its own VIP to illustrate the process.
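Internally, the VIP ends up under VCS control as an IP resource inside the CPSSG service group. The following main.cf excerpt is a minimal sketch of what that group can look like with the example values used in this section (system cps1, VIP 10.182.99.124, NIC eth4); the resource names, the netmask, and the exact attribute set are illustrative rather than the literal output that the installer produces.

group CPSSG (
    SystemList = { cps1 = 0 }
    AutoStartList = { cps1 }
    )

    // NIC resource monitoring the interface that carries the VIP
    NIC cpsnic (
        Device = eth4
        )

    // IP resource that brings the VIP up and down with the group
    IP cpsvip (
        Device = eth4
        Address = "10.182.99.124"
        NetMask = "255.255.252.0"    // illustrative netmask
        )

    // The CP server daemon itself
    Process vxcpserv (
        PathName = "/opt/VRTScps/bin/vxcpserv"
        )

    // The VIP must be up on the monitored NIC before the CP server starts
    cpsvip requires cpsnic
    vxcpserv requires cpsvip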
To configure the CP server service group
- Verify that you have a VIP available for each of your CP servers.
- Run the command:
# /opt/VRTS/install/installvcs<version> -configcps
where <version> is the specific release version.
- When the installer asks whether you want to configure a CP server, select Configure Coordination Point Server on single node VCS system.
- The name of the CP server is the host name with "v" appended at the end. For the example configuration, the CP server name is cps1v.
- Enter the virtual IP address that is associated with the single-node cluster. For node cps1 in the example, it is 10.182.99.124. Accept the suggested default port.
- As discussed earlier, security is enabled for the example; enabling it is recommended as a best practice.
- When prompted, enter the location of the database. In the example, the database is installed locally, so you can accept the default location.
- After reviewing the configuration parameters, continue with the configuration of the CP server service group. The NIC used on cps1 is eth4, and the example does not use NetworkHosts. Enter the netmask, and the configuration is complete.
The CPSSG service group is now online.
[root@cps1 rhel6_x86_64]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  cps1                 Running              0

-- GROUP STATE
-- Group           System    Probed     AutoDisabled    State

B  CPSSG           cps1      Y          N               ONLINE
[root@cps1 rhel6_x86_64]#
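As a quick sanity check, you can confirm the group state and verify that the CP server answers over its VIP. The cpsadm ping shown here assumes the example CP server name cps1v:

# hagrp -state CPSSG
# cpsadm -s cps1v -a ping_cps

The ping succeeds only when the vxcpserv process is running and listening on the VIP and port that were configured above.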