InfoScale™ 9.0 Virtualization Guide - AIX
- Section I. Overview
- Storage Foundation and High Availability Solutions in AIX PowerVM virtual environments
- Overview of the InfoScale Virtualization Guide
- About the AIX PowerVM virtualization technology
- About InfoScale products support for the AIX PowerVM environment
- About IBM LPARs with N_Port ID Virtualization (NPIV)
- About Veritas Extension for Oracle Disk Manager
- Virtualization use cases addressed by InfoScale
- Section II. Implementation
- Setting up Storage Foundation and High Availability Solutions in AIX PowerVM virtual environments
- Supported configurations for Virtual I/O servers (VIOS) on AIX
- Dynamic Multi-Pathing in the logical partition (LPAR)
- Dynamic Multi-Pathing in the Virtual I/O server (VIOS)
- InfoScale products in the logical partition (LPAR)
- Storage Foundation Cluster File System High Availability in the logical partition (LPAR)
- Dynamic Multi-Pathing in the Virtual I/O server (VIOS) and logical partition (LPAR)
- Dynamic Multi-Pathing in the Virtual I/O server (VIOS) and InfoScale products in the logical partition (LPAR)
- Cluster Server in the logical partition (LPAR)
- Cluster Server in the management LPAR
- Cluster Server in a cluster across logical partitions (LPARs) and physical machines
- Support for N_Port ID Virtualization (NPIV) in IBM Virtual I/O Server (VIOS) environments
- About setting up logical partitions (LPARs) with InfoScale products
- Configuring IBM PowerVM LPAR guest for disaster recovery
- Installing and configuring Storage Foundation and High Availability (SFHA) Solutions in the logical partition (LPAR)
- Installing and configuring storage solutions in the Virtual I/O server (VIOS)
- Installing and configuring Cluster Server for logical partition and application availability
- Enabling Veritas Extension for ODM file access from WPAR with VxFS
- Section III. Use cases for AIX PowerVM virtual environments
- Application to spindle visibility
- Simplified storage management in VIOS
- About simplified management
- About Dynamic Multi-Pathing in a Virtual I/O server
- About the Volume Manager (VxVM) component in a Virtual I/O server
- Configuring Dynamic Multi-Pathing (DMP) on Virtual I/O server
- Configuring Dynamic Multi-Pathing (DMP) pseudo devices as virtual SCSI devices
- Extended attributes in VIO client for a virtual SCSI disk
- Virtual IO client adapter settings for Dynamic Multi-Pathing (DMP) in dual-VIOS configurations
- Using DMP to provide multi-pathing for the root volume group (rootvg)
- Boot device management on NPIV presented devices
- Virtual machine (logical partition) availability
- Simplified management and high availability for IBM Workload Partitions
- About IBM Workload Partitions
- About using IBM Workload Partitions (WPARs) with InfoScale products
- Implementing InfoScale support for WPARs
- How Cluster Server (VCS) works with Workload Partitions (WPARs)
- Configuring VCS in WPARs
- Configuring AIX WPARs for disaster recovery using VCS
- High availability and live migration
- About Live Partition Mobility (LPM)
- About the partition migration process and simplified management
- About Storage Foundation and High Availability (SFHA) Solutions support for Live Partition Mobility
- Providing high availability with live migration in a Cluster Server environment
- Providing logical partition (LPAR) failover with live migration
- Limitations and unsupported LPAR features
- Multi-tier business service support
- Server consolidation
- About IBM LPARs with virtual SCSI devices
- Using Storage Foundation in the logical partition (LPAR) with virtual SCSI devices
- Using Storage Foundation with virtual SCSI devices
- Setting up DMP for vSCSI devices in the logical partition (LPAR)
- About disabling DMP for vSCSI devices in the logical partition (LPAR)
- Preparing to install or upgrade Storage Foundation with DMP disabled for vSCSI devices in the logical partition (LPAR)
- Disabling DMP multi-pathing for vSCSI devices in the logical partition (LPAR) after installation or upgrade
- Adding and removing DMP support for vSCSI devices for an array
- How DMP handles I/O for vSCSI devices
- Using VCS with virtual SCSI devices
- About server consolidation
- About IBM Virtual Ethernet
- Physical to virtual migration (P2V)
- Section IV. Reference
Providing logical partition (LPAR) failover with live migration
This section describes how to create a profile file and use the ProfileFile attribute to automate LPAR profile creation on failback after migration.
For more information on managing the LPAR profile on the source system after migration:
See Live partition mobility of managed LPARs.
Live migration of a managed LPAR deletes the LPAR profile and the VIOS adapter mappings from the source physical server. Without the LPAR configuration and the VIOS adapter mappings on that physical server, the LPAR cannot be brought online on, or failed over to, the Cluster Server (VCS) node from which it was migrated.
To make an LPAR highly available, create the LPAR profile file by following the steps below on every VCS node on which the LPAR is to be made highly available. Because the VIO server names for an LPAR differ on each physical server, and the adapter IDs for the LPAR may also differ on each physical server, you must create the profile file separately for each VCS node.
When bringing an LPAR online on another node, VCS performs the following actions:
Checks if the LPAR configuration exists on that node.
If the LPAR configuration does not exist and the ProfileFile attribute is specified, VCS tries to create the LPAR configuration and VIOS mappings as specified in the file that the ProfileFile attribute points to.
If the LPAR configuration and VIOS mappings are created successfully, VCS brings the LPAR online.
If the ProfileFile attribute is not configured and if the LPAR configuration does not exist on the physical server, the LPAR resource cannot be brought online.
The ProfileFile attribute specifies the path of the LPAR profile file. If the ProfileFile attribute for a VCS node is configured and the RemoveProfileOnOffline attribute is set to 1, VCS performs the following actions on offline or clean:
Deletes the LPAR configuration from the physical server.
Deletes the adapter mappings from the VIO servers.
For more information on the RemoveProfileOnOffline and ProfileFile attributes, see the Cluster Server Bundled Agents Reference Guide.
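For illustration only, the following commands are a minimal sketch of enabling this behavior for an LPAR resource. The resource name lpar05_res is taken from the example later in this section; setting the per-node ProfileFile values is shown in the procedure that follows.
# Make the VCS configuration writable
$ haconf -makerw
# Delete the LPAR profile and VIOS adapter mappings on offline or clean
$ hares -modify lpar05_res RemoveProfileOnOffline 1
# Make ProfileFile a per-node (localized) attribute, so each node can
# point to its own profile file
$ hares -local lpar05_res ProfileFile
# Save and close the configuration
$ haconf -dump -makero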
To create the profile file for an LPAR
- Run the following command on the HMC:
$ lssyscfg -r lpar -m physical-server-name --filter \
lpar_names=managed-lpar-name
- From the output of the above command, select the following fields as key-value pairs:
name,lpar_id,lpar_env,work_group_id,shared_proc_pool_util_auth,\
allow_perf_collection,power_ctrl_lpar_ids,boot_mode,auto_start,\
redundant_err_path_reporting,time_ref,lpar_avail_priority,\
suspend_capable,remote_restart_capable,affinity_group_id
Delete the remaining attributes and their values.
- The remaining attributes are obtained from any profile associated with the managed LPAR. In the following command, managed-lpar-profile-name is the name of the profile that you want to create.
Run the following command on the HMC to get the remaining attributes.
$ lssyscfg -r prof -m physical-server-name --filter \
lpar_names=managed-lpar-name,profile_names=managed-lpar-profile-name
From the output of the above command, select the following fields as key-value pairs:
name,all_resources,min_mem,desired_mem,max_mem,mem_mode,\
mem_expansion,hpt_ratio,proc_mode,min_procs,desired_procs,max_procs,\
sharing_mode,io_slots,lpar_io_pool_ids,max_virtual_slots,\
virtual_serial_adapters,virtual_scsi_adapters,virtual_eth_adapters,\
vtpm_adapters,virtual_fc_adapters,hca_adapters,conn_monitoring,\
auto_start,power_ctrl_lpar_ids,work_group_id,bsr_arrays,\
lhea_logical_ports,lhea_capabilities,lpar_proc_compat_mode
- Rename the name attribute in the above output to profile_name.
- Concatenate the outputs from steps 1 and 3 with a comma and write the result as a single line to a text file. This is the configuration file that VCS requires to create or delete the LPAR configuration. Specify the absolute path of this file in the ProfileFile attribute. A sketch of this concatenation step appears after the note below.
Note:
If an error occurs while creating a partition from the LPAR profile file, make sure that any missing attributes are populated in the profile data file. For more information on the error, see the LPAR_A.log file.
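As an illustration of the concatenation step, the following is a minimal sketch. The input file names lpar_fields.txt and prof_fields.txt and the output file name are hypothetical, and each input file is assumed to contain its selected key-value pairs on a single line.
# Join the two single-line field files with a comma into one profile file
$ paste -d, lpar_fields.txt prof_fields.txt > profile-file.cfg
# Verify that the result is exactly one line
$ wc -l profile-file.cfg
The paste command joins the single line of each input file with a comma, which matches the requirement that the profile file contain all key-value pairs on one line.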
The following example procedure illustrates generating the profile file for lpar05, which runs on physical_server_01. The LPAR resource that monitors the lpar05 LPAR is lpar05_res. The VCS node that manages lpar05_res on the physical server physical_server_01 is lpar101, and on physical_server_02 it is lpar201.
To generate a profile file for lpar05 on physical_server_01
- To get the LPAR details from the HMC, enter:
$ lssyscfg -r lpar -m physical_server_01 --filter \
lpar_names=lpar05
The output of this command is the following:
name=lpar05,lpar_id=15,lpar_env=aixlinux,state=Running,\
resource_config=1,os_version=AIX 7.1 7100-00-00-0000,\
logical_serial_num=06C3A0PF,default_profile=lpar05,\
curr_profile=lpar05,work_group_id=none,\
shared_proc_pool_util_auth=0,allow_perf_collection=0,\
power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,\
auto_start=0,redundant_err_path_reporting=0,\
rmc_state=inactive,rmc_ipaddr=10.207.111.93,time_ref=0,\
lpar_avail_priority=127,desired_lpar_proc_compat_mode=default,\
curr_lpar_proc_compat_mode=POWER7,suspend_capable=0,\
remote_restart_capable=0,affinity_group_id=none
- Select the output fields as explained in the procedure above.
See “To create the profile file for an LPAR”.
The key-value pairs are the following:
name=lpar05,lpar_id=15,lpar_env=aixlinux,work_group_id=none,\
shared_proc_pool_util_auth=0,allow_perf_collection=0,\
power_ctrl_lpar_ids=none,boot_mode=norm,auto_start=0,\
redundant_err_path_reporting=0,time_ref=0,lpar_avail_priority=127,\
suspend_capable=0,remote_restart_capable=0
- To get the profile details from the HMC, enter:
$ lssyscfg -r prof -m physical_server_01 --filter \
lpar_names=lpar05,profile_names=lpar05
The output of this command is the following:
name=lpar05,lpar_name=lpar05,lpar_id=15,lpar_env=aixlinux,\
all_resources=0,min_mem=512,desired_mem=2048,max_mem=4096,\
min_num_huge_pages=null,desired_num_huge_pages=null,\
max_num_huge_pages=null,mem_mode=ded,mem_expansion=0.0,\
hpt_ratio=1:64,proc_mode=ded,min_procs=1,desired_procs=1,\
max_procs=1,sharing_mode=share_idle_procs,\
affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,\
max_virtual_slots=1000,\
"virtual_serial_adapters=0/server/1/any//any/1,\
1/server/1/any//any/1",\
"virtual_scsi_adapters=304/client/2/vio_server1/4/1,\
404/client/3/vio_server2/6/1",\
"virtual_eth_adapters=10/0/1//0/0/ETHERNET0//all/none,\
11/0/97//0/0/ETHERNET0//all/none,\
12/0/98//0/0/ETHERNET0//all/none",vtpm_adapters=none,\
"virtual_fc_adapters=""504/client/2/vio_server1/8/c050760431670010,\
c050760431670011/1"",""604/client/3/vio_server2/5/c050760431670012,\
c050760431670013/1""",hca_adapters=none,boot_mode=norm,\
conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,\
work_group_id=none,redundant_err_path_reporting=null,bsr_arrays=0,\
lhea_logical_ports=none,lhea_capabilities=none,\
lpar_proc_compat_mode=default,electronic_err_reporting=null
- After selecting the fields and renaming name to profile_name, the output is as follows:
profile_name=lpar05,all_resources=0,min_mem=512,desired_mem=2048,\
max_mem=4096,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:64,\
proc_mode=ded,min_procs=1,desired_procs=1,max_procs=1,\
sharing_mode=share_idle_procs,affinity_group_id=none,io_slots=none,\
lpar_io_pool_ids=none,max_virtual_slots=1000,\
"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",\
"virtual_scsi_adapters=304/client/2/vio_server1/4/1,\
404/client/3/vio_server2/6/1",\
"virtual_eth_adapters=10/0/1//0/0/ETHERNET0//all/none,\
11/0/97//0/0/ETHERNET0//all/none,\
12/0/98//0/0/ETHERNET0//all/none",vtpm_adapters=none,\
"virtual_fc_adapters=""504/client/2/vio_server1/8/c050760431670010,\
c050760431670011/1"",""604/client/3/vio_server2/5/c050760431670012,\
c050760431670013/1""",hca_adapters=none,\
boot_mode=norm,conn_monitoring=1,auto_start=0,\
power_ctrl_lpar_ids=none,work_group_id=none,bsr_arrays=0,\
lhea_logical_ports=none,lhea_capabilities=none,\
lpar_proc_compat_mode=default
- Concatenate these two outputs with a comma. The result is as follows:
name=lpar05,lpar_id=15,lpar_env=aixlinux,work_group_id=none,\
shared_proc_pool_util_auth=0,allow_perf_collection=0,\
power_ctrl_lpar_ids=none,boot_mode=norm,auto_start=0,\
redundant_err_path_reporting=0,time_ref=0,lpar_avail_priority=127,\
suspend_capable=0,remote_restart_capable=0,profile_name=lpar05,\
all_resources=0,min_mem=512,desired_mem=2048,max_mem=4096,\
mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:64,proc_mode=ded,\
min_procs=1,desired_procs=1,max_procs=1,sharing_mode=share_idle_procs,\
affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,\
max_virtual_slots=1000,"virtual_serial_adapters=0/server/1/any//any/1,\
1/server/1/any//any/1",\
"virtual_scsi_adapters=304/client/2/vio_server1/4/1,\
404/client/3/vio_server2/6/1",\
"virtual_eth_adapters=10/0/1//0/0/ETHERNET0//all/none,\
11/0/97//0/0/ETHERNET0//all/none,12/0/98//0/0/ETHERNET0//all/none",\
vtpm_adapters=none,\
"virtual_fc_adapters=""504/client/2/vio_server1/8/c050760431670010,\
c050760431670011/1"",""604/client/3/vio_server2/5/c050760431670012,\
c050760431670013/1""",hca_adapters=none,boot_mode=norm,\
conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,\
work_group_id=none,bsr_arrays=0,lhea_logical_ports=none,\
lhea_capabilities=none,lpar_proc_compat_mode=default
- Write this output to a text file. Assuming that the absolute path of the profile file thus generated on lpar101 is /configfile/lpar05_on_physical_server_01.cfg, run the following commands to configure the profile file in VCS:
$ hares -local lpar05_res ProfileFile
$ hares -modify lpar05_res ProfileFile \
/configfile/lpar05_on_physical_server_01.cfg -sys lpar101
- Repeat steps 1 through 6 to create the profile file for lpar05 for physical_server_02.
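For example, assuming the profile file generated on lpar201 is stored at the hypothetical path /configfile/lpar05_on_physical_server_02.cfg, the corresponding VCS configuration would be similar to the following sketch:
# Point the localized ProfileFile attribute at the file for this node
$ hares -modify lpar05_res ProfileFile \
/configfile/lpar05_on_physical_server_02.cfg -sys lpar201
# Display the per-node value to confirm the setting
$ hares -value lpar05_res ProfileFile lpar201
Because ProfileFile is localized, each VCS node keeps its own path, which allows VCS to recreate the LPAR configuration and VIOS mappings on whichever physical server the LPAR fails over to.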