Dynamic Multi-Pathing 7.4.1 Administrator's Guide - Linux
Example of applying load balancing in a SAN
This example describes how to use Dynamic Multi-Pathing (DMP) to configure load balancing in a SAN environment where there are multiple primary paths to an Active/Passive device through several SAN switches.
As shown in this sample output from the vxdisk list command, the device sdq has eight primary paths:
# vxdisk list sdq
Device: sdq
.
.
.
numpaths: 8
sdj state=enabled type=primary
sdk state=enabled type=primary
sdl state=enabled type=primary
sdm state=enabled type=primary
sdn state=enabled type=primary
sdo state=enabled type=primary
sdp state=enabled type=primary
sdq state=enabled type=primary
In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and contains a simple concatenated volume myvol1.
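If you want to confirm the path details independently of vxdisk, the vxdmpadm getsubpaths command lists the same paths for the DMP node. A minimal sketch, assuming the DMP node name sdq used in this example:
# vxdmpadm getsubpaths dmpnodename=sdq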
The first step is to enable the gathering of DMP statistics:
# vxdmpadm iostat start
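By default, DMP reserves a small per-CPU buffer for these statistics (it appears as per cpu memory = 32768b in the output later in this example). If that proves too small on a busy system, a larger buffer can be requested when gathering is started; the following sketch shows the optional memory attribute, with the size left as a placeholder to be chosen for your environment:
# vxdmpadm iostat start memory=size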
Next, use the dd command to apply an input workload from the volume:
# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &
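The trailing & runs dd in the background so that the statistics can be examined while I/O continues. If you prefer a bounded workload that stops on its own, a block size and count can be added; the values here are purely illustrative:
# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null bs=128k count=100000 &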
Running the vxdmpadm iostat command to display the DMP statistics for the device shows that all I/O is being directed to one path, sdq:
# vxdmpadm iostat show dmpnodename=sdq interval=5 count=2
.
.
.
cpu usage = 11294us per cpu memory = 32768b
              OPERATIONS          KBYTES         AVG TIME(ms)
PATHNAME    READS   WRITES    READS   WRITES    READS   WRITES
sdj             0        0        0        0     0.00     0.00
sdk             0        0        0        0     0.00     0.00
sdl             0        0        0        0     0.00     0.00
sdm             0        0        0        0     0.00     0.00
sdn             0        0        0        0     0.00     0.00
sdo             0        0        0        0     0.00     0.00
sdp             0        0        0        0     0.00     0.00
sdq         10986        0     5493        0     0.41     0.00
The vxdmpadm command is used to display the I/O policy for the enclosure that contains the device:
# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME     DEFAULT        CURRENT
============================================
ENC0           MinimumQ       Single-Active
This shows that the policy for the enclosure is set to singleactive, which explains why all the I/O is taking place on one path.
To balance the I/O load across the multiple primary paths, the policy is set to round-robin as shown here:
# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin
# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME     DEFAULT        CURRENT
============================================
ENC0           MinimumQ       Round-Robin
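The policy does not have to be changed one enclosure at a time. As a variant, the same setting can be applied to every array of a given type on the host; the sketch below assumes the device in this example belongs to an Active/Passive (A/P) array:
# vxdmpadm setattr arraytype A/P iopolicy=round-robin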
The DMP statistics are now reset:
# vxdmpadm iostat reset
With the workload still running, the effect of changing the I/O policy to balance the load across the primary paths can now be seen:
# vxdmpadm iostat show dmpnodename=sdq interval=5 count=2
.
.
.
cpu usage = 14403us per cpu memory = 32768b
              OPERATIONS          KBYTES         AVG TIME(ms)
PATHNAME    READS   WRITES    READS   WRITES    READS   WRITES
sdj          2041        0     1021        0     0.39     0.00
sdk          1894        0      947        0     0.39     0.00
sdl          2008        0     1004        0     0.39     0.00
sdm          2054        0     1027        0     0.40     0.00
sdn          2171        0     1086        0     0.39     0.00
sdo          2095        0     1048        0     0.39     0.00
sdp          2073        0     1036        0     0.39     0.00
sdq          2042        0     1021        0     0.39     0.00
The enclosure can be returned to the single active I/O policy by entering the following command:
# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive
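When the experiment is finished, the background workload and the statistics gathering can be cleaned up. A minimal sketch, assuming the dd job started earlier is still the most recent background job in the shell:
# kill %1
# vxdmpadm iostat stop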