Dynamic Multi-Pathing 7.3.1 Administrator's Guide - AIX
Last Published: 2017-11-03
Product(s): InfoScale & Storage Foundation (7.3.1)
Migrating from PowerPath to DMP on a Virtual I/O server for a dual-VIOS configuration
The following example procedure illustrates a migration from PowerPath to DMP on the Virtual I/O server, in a configuration with two VIO servers.
Example configuration values:
Managed System: dmpviosp6
VIO server1: dmpvios1
VIO server2: dmpvios2
VIO clients: dmpvioc1
SAN LUNs: EMC Clariion array
Current multi-pathing solution on VIO server: EMC PowerPath
To migrate dmpviosp6 from PowerPath to DMP
- Before migrating, back up the Virtual I/O server so that you can revert the system if issues occur.
See the IBM website for information about backing up the Virtual I/O server.
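For example, from the VIOS restricted (padmin) shell, the backupios command can capture a mksysb backup image before you make any changes; the target file name and path below are illustrative only:
$ backupios -file /home/padmin/dmpvios1_backup.mksysb -mksysb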
- Shut down all of the VIO clients that are serviced by the VIO Server.
dmpvioc1$ halt
- Log in to the VIO server partition. Use the following command to access the non-restricted root shell. All subsequent commands in this procedure must be invoked from the non-restricted shell.
$ oem_setup_env
- The following command shows lsmap output before migrating PowerPath VTD devices to DMP:
dmpvios1$ /usr/ios/cli/ioscli lsmap -all
SVSA            Physloc                     Client Partition ID
--------------  --------------------------  -------------------
vhost0          U9117.MMA.0686502-V2-C11    0x00000004

VTD             P0
Status          Available
LUN             0x8100000000000000
Backing device  hdiskpower0
Physloc         U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4003403700000000

VTD             P1
Status          Available
LUN             0x8200000000000000
Backing device  hdiskpower1
Physloc         U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L400240C100000000

VTD             P2
Status          Available
LUN             0x8300000000000000
Backing device  hdiskpower2
Physloc         U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4002409A00000000
- Unconfigure all VTD devices from all virtual adapters on the system:
dmpvios1$ rmdev -p vhost0
P0 Defined
P1 Defined
P2 Defined
Repeat this step for all other virtual adapters.
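To find the remaining virtual SCSI server adapters, you can list the vhost devices from the non-restricted root shell and unconfigure each one in turn; the loop below is only a sketch and assumes that every vhost adapter on the system should have its VTDs unconfigured:
dmpvios1$ lsdev | grep vhost
dmpvios1$ for v in $(lsdev | awk '/^vhost/ {print $1}'); do rmdev -p $v; done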
- Migrate the devices from PowerPath to DMP.
Unmount the file systems and vary off the volume groups that reside on the PowerPath devices.
Display the volume groups (vgs) in the configuration:
dmpvios1$ lsvg
rootvg
brunovg
dmpvios1$ lsvg -p brunovg
brunovg:
PV_NAME      PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
hdiskpower3  active     511         501        103..92..102..102..102
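If any file systems on this volume group are still mounted, unmount them before varying the volume group off; the mount point below is hypothetical:
dmpvios1$ umount /brunofs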
Use the varyoffvg command on all affected vgs:
dmpvios1$ varyoffvg brunovg
Unmanage the EMC Clariion array from PowerPath control:
# powermt unmanage class=clariion
hdiskpower0 deleted
hdiskpower1 deleted
hdiskpower2 deleted
hdiskpower3 deleted
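Optionally, confirm that PowerPath no longer manages the Clariion devices; powermt display dev=all is a standard PowerPath query and, after the unmanage operation, it should no longer list the hdiskpower devices for this array:
# powermt display dev=all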
- Reboot VIO server1
dmpvios1$ reboot
- After VIO server1 reboots, verify that all of the existing volume groups and MPIO VTDs on VIO server1 have been successfully migrated to DMP.
dmpvios1$ lsvg -p brunovg
brunovg:
PV_NAME         PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
emc_clari0_138  active     511         501        103..92..102..102..102
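As an additional check, the vxdmpadm utility can list the paths behind a migrated device; the DMP node name below matches the lsvg output above:
dmpvios1$ vxdmpadm getsubpaths dmpnodename=emc_clari0_138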
Verify the mappings of the LUNs on the migrated volume groups:
dmpvios1$ lsmap -all
SVSA            Physloc                     Client Partition ID
--------------  --------------------------  -------------------
vhost0          U9117.MMA.0686502-V2-C11    0x00000000

VTD             P0
Status          Available
LUN             0x8100000000000000
Backing device  emc_clari0_130
Physloc

VTD             P1
Status          Available
LUN             0x8200000000000000
Backing device  emc_clari0_136
Physloc

VTD             P2
Status          Available
LUN             0x8300000000000000
Backing device  emc_clari0_137
Physloc
- Repeat step 1 to step 8 for VIO server2.
- Start all of the VIO clients.
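For example, a client partition can be activated from the Hardware Management Console (HMC) command line with chsysstate; the profile name below is an assumption for illustration:
hmc$ chsysstate -r lpar -m dmpviosp6 -o on -n dmpvioc1 -f default_profile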