This article provides formal documentation that MPIO interoperability with DMP for IBM branded arrays is supported with the Storage Foundation (SF) and Storage Foundation High Availability (SFHA) 5.1 SP1 release, as well as 6.0.1 and subsequent releases, provided that the specific key requirements described below are implemented.
To reduce the interoperability impact and prevent potential data corruption when using MPIO with SF 5.1 SP1, Veritas strongly recommends deploying 5.1 SP1 RP4 or later.
The Multiple Path I/O (MPIO) feature was introduced in AIX 5.2 to manage disks and LUNs with multiple paths. By default, MPIO is enabled on all disks (LUNs) presented to an AIX server, which prevents DMP or other third-party multi-pathing drivers (such as EMC PowerPath) from managing the paths to these devices.
IBM has tested interoperability of MPIO functionality with the Veritas product stack using IBM branded storage only. Veritas has validated simple co-existence and basic functionality of MPIO with IBM branded storage arrays only.
Arrays that are not IBM branded are NOT qualified by Veritas for use with MPIO as the multi-pathing driver in the SF and SFHA product stack.
The AIX MPIO infrastructure allows IBM or third-party storage vendors to supply array related ODM definitions, which have unique default values for the important disk attributes.
The disk attributes can be displayed by using the lsattr command, and can be changed with the chdev command.
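For example, a device's ODM attributes can be inspected with lsattr and changed with chdev as sketched below (the device name hdisk4 and the queue_depth value are illustrative only):

```shell
# Display all ODM attributes of the disk, including queue_depth and reserve_policy
lsattr -El hdisk4

# Show a single attribute, then the range of values it accepts
lsattr -El hdisk4 -a queue_depth
lsattr -Rl hdisk4 -a queue_depth

# Change the queue depth (the device must be reconfigured for this to take effect)
chdev -l hdisk4 -a queue_depth=32
```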
To allow DMP or a third-party multi-pathing driver to manage the multi-pathing function instead of MPIO, you must install the suitable Object Data Manager (ODM) definitions for the devices on the host. Without these ODM definitions, MPIO consolidates the paths, resulting in DMP only seeing a single path to a given device.
Veritas strongly recommends using DMP for multi-pathing whenever possible, because a system running the full Storage Foundation stack is inherently simpler to configure, manage, and troubleshoot than a system running Storage Foundation alongside a third-party multi-pathing driver.
There are several reasons why you might want to configure DMP to manage multi-pathing instead of MPIO:
■ Using DMP can enhance array performance if an ODM defines properties such as queue depth, queue type, and timeout for the devices.
■ The I/O fencing features of the Storage Foundation HA or Storage Foundation Real Application Cluster software do not work with MPIO devices.
■ The Device Discovery Layer (DDL) component of DMP provides value-added services, including extended attributes such as RAID levels, thin provisioning attributes, hardware mirrors, snapshots, transport type, SFGs, and array port IDs.
These services are not available for MPIO-controlled devices.
The various MPIO related restrictions are outlined in Veritas article: TECH51507.
The applicable ODM definition allows DMP or another third-party multi-pathing driver to discover the devices previously managed by MPIO.
Where an appropriate array-specific ODM definition is not available, the Veritas vxmpio utility can be used to remove disks (LUNs) from MPIO control. The vxmpio utility is not available in older Veritas product releases; it was first introduced in SF 5.1.x and is continually being enhanced. Some array models may require the latest vxmpio utility in order to expose specific device ODM attributes.
Important: vxmpio use with SAN root/boot disks
Do not use vxmpio to unmanage MPIO-controlled SAN root/boot disks. The vxmpio utility was not designed to handle SAN boot disks; until such support is available, the utility will be enhanced to report a user-friendly error message in this case.
The SAN root/boot disk should be migrated from MPIO to DMP using the following process:
1. Clone the SAN boot disk to a local disk, using alt_disk_install or alt_disk_copy as the migration vehicle.
2. Reboot the node from the local disk.
3. Disable MPIO using the vxmpio command, then boot from the local disk.
4. Clone the local disk back to the SAN disk using the commands described in the Symantec Dynamic Multi-Pathing Administrator's Guide.
5. Reboot the system from the SAN disk.
6. Enable DMP Native Support so that the root device is under DMP control.
NOTE: The recommended method is to turn on DMP support for LVM volumes, including the root volume.
# vxdmpadm settune dmp_native_support=on
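Assuming hdisk0 is the local disk and hdisk2 is the SAN boot disk (both names, and the PID value, are illustrative only), the migration steps above can be sketched as:

```shell
# 1. Clone the SAN boot disk (rootvg) to the local disk
alt_disk_copy -d hdisk0

# 2. Point the boot list at the local disk and reboot from it
bootlist -m normal hdisk0
shutdown -Fr now

# 3. Disable MPIO for the array's product ID, e.g. '2145' for IBM SVC
/etc/vx/bin/vxmpio disable pid='2145'
shutdown -Fr now

# 4./5. Clone the local disk back to the SAN disk and reboot from it
alt_disk_copy -d hdisk2
bootlist -m normal hdisk2
shutdown -Fr now

# 6. Enable DMP Native Support so that rootvg is under DMP control
vxdmpadm settune dmp_native_support=on
```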
Disabling MPIO using ODM array definitions
Once the required ODM definition has been obtained, the following procedure can be used to move specific array devices from MPIO control to DMP or to a supported third-party multi-pathing driver.
# vxdisk rm <Veritas disk access name>
# rmdev -dl hdisk_device
Alternatively, you can use the smitty installp command to install the ODM definitions.
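Assuming the vendor fileset has been downloaded to /tmp/odm (the path and fileset name are illustrative, and the final rediscovery step using cfgmgr and vxdctl enable is an assumption based on standard AIX/VxVM practice), the sequence might look like:

```shell
# Remove the device from VxVM control
vxdisk rm <disk_access_name>

# Remove the MPIO hdisk definition from the ODM
rmdev -dl hdisk9

# Install the array-specific ODM definition fileset
installp -agXd /tmp/odm <fileset_name>

# Rediscover the devices and rescan them in VxVM
cfgmgr
vxdctl enable
```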
Disabling MPIO using the vxmpio utility
The vxmpio utility controls whether the specified type of storage array is controlled by MPIO or is claimed under the Veritas Dynamic Multi-Pathing driver.
Use the vxmpio utility to disable or enable MPIO for devices with the specified product ID (PID). The vxmpio utility must be used symmetrically: for example, you cannot use it to re-enable MPIO for devices that were previously removed from MPIO control by installing an ODM definition.
You must reboot the system for the changes to take effect.
Once you have disabled MPIO for a specific PID, the specified devices will be claimed under the Veritas Dynamic Multi-Pathing driver following a restart of the system.
# /etc/vx/bin/vxmpio disable pid='product_id'
Use the AIX "lscfg -vl <device>" command to display the PID in the Machine Type and Model field. If the value contains a space, enclose the value in single quotes when passing it to the vxmpio utility.
# lscfg -vl hdisk9
hdisk9 U9179.MHD.06B94ER-V86-C5-T1-W500507680140C156-L0 MPIO IBM 2145 FC Disk
Machine Type and Model......2145
ROS Level and ID............30303030
Always use the latest available vxmpio version where possible.
# /etc/vx/bin/vxmpio disable pid='2145'
# shutdown -Fr now
# lscfg -vl hdisk9
hdisk9 U9179.MHD.06B94ER-V86-C5-T1-W500507680140C156-L0 IBM 2145 FC Disk
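After the reboot, DMP should claim the device and report all of its paths. This can be verified with standard VxVM commands, for example (output will vary by configuration):

```shell
# Confirm the device is now claimed by VxVM/DMP
vxdisk list

# List all subpaths that DMP sees for the claimed devices
vxdmpadm getsubpaths
```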