InfoScale™ 9.0 Virtualization Guide - Linux
Last Published: 2025-08-11
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the InfoScale Virtualization Guide
- About InfoScale support for Linux virtualization environments
- About KVM technology
- About InfoScale deployments in OpenShift Virtualization environments
- About InfoScale deployments in OpenStack environments
- Virtualization use cases addressed by InfoScale
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with InfoScale solutions
- InfoScale configuration options for a KVM environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- DMP in the KVM host
- SF in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- SFCFSHA in the KVM host
- DMP in the KVM host and guest virtual machine
- DMP in the KVM host and SFHA in the KVM guest virtual machine
- VCS in the KVM host
- VCS in the guest
- VCS in a cluster across virtual machine guests and physical machines
- Installing InfoScale in a KVM environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server in a KVM environment architecture summary
- Virtual-to-virtual clustering and failover
- I/O fencing support for virtual-to-virtual clustering
- Virtual-to-physical clustering and failover
- Recommendations for improved resiliency of InfoScale clusters in virtualized environments
- Virtual machine availability
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section V. Reference
- Appendix A. Troubleshooting
- InfoScale logs for CFS configurations in OpenStack environments
- Troubleshooting virtual machine live migration
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Consistent naming across KVM Hosts
While enclosure-based naming (EBN) provides persistent naming for a single node, it does not guarantee consistent naming across the nodes in a cluster. The User Defined Names (UDN) feature of DMP allows DMP devices to be given names that are both persistent and consistent across multiple hosts. When using User Defined Names, a template file is created on a host that maps the serial numbers of the enclosure and its devices to unique device names. User Defined Names can be selected manually, which can make the mappings easier to manage.
To create consistent naming across hosts
- Create the User Defined Names template file.
# /etc/vx/bin/vxgetdmpnames enclosure=3pardata0 > /tmp/user_defined_names
# cat /tmp/user_defined_names
enclosure vendor=3PARdat product=VV serial=1628 name=3pardata0
        dmpnode serial=2AC00008065C name=3pardata0_1
        dmpnode serial=2AC00002065C name=3pardata0_2
        dmpnode serial=2AC00003065C name=3pardata0_3
        dmpnode serial=2AC00004065C name=3pardata0_4
- If necessary, rename the devices by editing the template file. In this example, each DMP device is named after the guest to which it will be mapped (one way to script the rename is sketched after this procedure).
# cat /tmp/user_defined_names
enclosure vendor=3PARdat product=VV serial=1628 name=3pardata0
        dmpnode serial=2AC00008065C name=guest1_1
        dmpnode serial=2AC00002065C name=guest1_2
        dmpnode serial=2AC00003065C name=guest2_1
        dmpnode serial=2AC00004065C name=guest2_2
- Apply the User Defined Names file to this node and to all other hosts in the cluster (a sketch for distributing and applying the file on the remaining hosts follows this procedure).
# vxddladm assign names file=/tmp/user_defined_names
- Verify that the user-defined names have been applied.
# vxdmpadm getdmpnode enclosure=3pardata0
NAME       STATE     ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
==============================================================================
guest1_1   ENABLED   3PARDATA    2      2     0     3pardata0
guest1_2   ENABLED   3PARDATA    2      2     0     3pardata0
guest2_1   ENABLED   3PARDATA    2      2     0     3pardata0
guest2_2   ENABLED   3PARDATA    2      2     0     3pardata0
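The rename shown above can be done with any text editor. As a sketch of one way to script it, the following sed commands apply the example mapping, assuming the template was generated at /tmp/user_defined_names as shown earlier; the backup copy lets you restore the generated names later.

# cp /tmp/user_defined_names /tmp/user_defined_names.orig   # keep a copy of the generated names
# sed -i -e 's/name=3pardata0_1$/name=guest1_1/' \
         -e 's/name=3pardata0_2$/name=guest1_2/' \
         -e 's/name=3pardata0_3$/name=guest2_1/' \
         -e 's/name=3pardata0_4$/name=guest2_2/' \
         /tmp/user_defined_names   # mapping taken from the rename example above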
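For naming to be consistent, the same template file must be applied on every host that accesses the enclosure. A minimal sketch for distributing and applying the file, assuming passwordless root SSH between the hosts; the host names kvmhost2 and kvmhost3 are hypothetical placeholders for the remaining cluster nodes:

# for host in kvmhost2 kvmhost3; do   # hypothetical host names
      scp /tmp/user_defined_names ${host}:/tmp/user_defined_names
      ssh ${host} vxddladm assign names file=/tmp/user_defined_names
      ssh ${host} vxdmpadm getdmpnode enclosure=3pardata0
  done

Each host should then report the same name-to-serial mapping shown in the verification step above.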