Dynamic Multi-Pathing 7.4.1 Administrator's Guide - Linux
Displaying cumulative I/O statistics
The vxdmpadm iostat command provides the ability to analyze the I/O load distribution across various I/O channels or parts of I/O channels. Select the appropriate filter to display the I/O statistics for the DMP node, controller, array enclosure, path, port, or virtual machine. Then, use the groupby clause to display cumulative statistics according to the criteria that you want to analyze. If the groupby clause is not specified, then the statistics are displayed per path.
When you combine the filter and the groupby clause, you can analyze the I/O load for the required use case scenario. For example:
- To compare I/O load across HBAs, enclosures, or array ports, use the groupby clause with the specified attribute.
- To analyze the I/O load across a given I/O channel (HBA to array port link), filter by HBA and PWWN, or by enclosure and array port.
- To analyze the I/O load distribution across the links to an HBA, filter by HBA and group by array port. (Examples of these combinations follow the filter and groupby lists below.)
Use the following format of the iostat command to analyze the I/O loads:
# vxdmpadm [-u unit] iostat show [groupby=criteria] {filter} \
    [interval=seconds [count=N]]

The above command displays I/O statistics for the devices specified by the filter. The filter is one of the following:
- all
- ctlr=ctlr-name
- dmpnodename=dmp-node
- enclosure=enclr-name [portid=array-portid] [ctlr=ctlr-name]
- pathname=path-name
- pwwn=array-port-wwn [ctlr=ctlr-name]
You can aggregate the statistics by the following groupby criteria:
- arrayport
- ctlr
- dmpnode
- enclosure
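For example, to examine the load on a single I/O channel (the HBA to array port link described above), combine the pwwn and ctlr filters; to see how the load from one HBA is distributed across the array ports, filter by the controller and group by array port. The controller name c2 and the array port WWN below are placeholders; substitute values from your own configuration:

# vxdmpadm iostat show pwwn=50:00:00:00:00:00:00:01 ctlr=c2
# vxdmpadm iostat show groupby=arrayport ctlr=c2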
By default, the read/write times are displayed in milliseconds up to 2 decimal places. The throughput data is displayed in terms of BLOCKS, and the output is scaled, meaning that small values are displayed in small units and larger values are displayed in bigger units, keeping the number of significant digits constant. You can specify the units in which the statistics data is displayed. The -u option accepts the following values:

| Unit | Description |
|------|-------------|
| h | Displays throughput in the highest possible unit. |
| k | Displays throughput in kilobytes. |
| m | Displays throughput in megabytes. |
| g | Displays throughput in gigabytes. |
| bytes | Displays throughput in exact number of bytes. |
| us | Displays average read/write time in microseconds. |
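For instance, to report throughput in kilobytes for all paths rather than the default scaled block counts (a minimal illustration; narrow the filter to a specific controller, enclosure, or DMP node as needed):

# vxdmpadm -u k iostat show all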
To group by DMP node:
# vxdmpadm [-u unit] iostat show groupby=dmpnode \
    [all | dmpnodename=dmpnodename | enclosure=enclr-name]
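For example, to display cumulative statistics for a single DMP node (the node name below reuses the device from the -z example later in this section):

# vxdmpadm iostat show groupby=dmpnode dmpnodename=emc_clariion0_893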
To group by controller:
# vxdmpadm [-u unit] iostat show groupby=ctlr [ all | ctlr=ctlr ]
For example:
# vxdmpadm iostat show groupby=ctlr ctlr=c5
             OPERATIONS       BLOCKS       AVG TIME(ms)
CTLRNAME   READS  WRITES   READS  WRITES   READS  WRITES
c5         224    14       54     7        4.20   11.10
To group by arrayport:
# vxdmpadm [-u unit] iostat show groupby=arrayport [ all \
    | pwwn=array_pwwn | enclosure=enclr portid=array-port-id ]
For example:
# vxdmpadm -u m iostat show groupby=arrayport \
    enclosure=HDS9500-ALUA0 portid=1A

             OPERATIONS       BYTES        AVG TIME(ms)
PORTNAME   READS  WRITES   READS  WRITES   READS  WRITES
1A         743    1538     11m    24m      17.13  8.61
To group by enclosure:
# vxdmpadm [-u unit] iostat show groupby=enclosure [ all \
    | enclosure=enclr ]
For example:
# vxdmpadm -u h iostat show groupby=enclosure enclosure=EMC_CLARiiON0
                 OPERATIONS       BLOCKS          AVG TIME(ms)
ENCLOSURENAME   READS  WRITES   READS   WRITES    READS  WRITES
EMC_CLARiiON0   743    1538     11392k  24176k    17.13  8.61
You can also filter out entities for which all data entries are zero. This option is especially useful in a cluster environment that contains many failover devices. You can display only the statistics for the active paths.
To filter all zero entries from the output of the iostat show command:
# vxdmpadm [-u unit] -z iostat show [all | ctlr=ctlr_name \
    | dmpnodename=dmp_device_name | enclosure=enclr_name [portid=portid] \
    | pathname=path_name | pwwn=port_WWN] [interval=seconds [count=N]]
For example:
# vxdmpadm -z iostat show dmpnodename=emc_clariion0_893
cpu usage = 9852us per cpu memory = 266240b
             OPERATIONS       BLOCKS       AVG TIME(ms)
PATHNAME   READS  WRITES   READS  WRITES   READS  WRITES
sdbc       32     0        258    0        0.04   0.00
sdbw       27     0        216    0        0.03   0.00
sdck       8      0        57     0        0.04   0.00
sdde       11     0        81     0        0.15   0.00

To display average read/write times in microseconds:
# vxdmpadm -u us iostat show pathname=sdck
cpu usage = 9865us per cpu memory = 266240b
             OPERATIONS       BLOCKS       AVG TIME(us)
PATHNAME   READS  WRITES   READS  WRITES   READS  WRITES
sdck       8      0        57     0        43.04  0.00
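The -z and -u options can be combined. For example, to show only the non-zero entries for the same DMP node with throughput reported in megabytes (node name as in the earlier example):

# vxdmpadm -u m -z iostat show dmpnodename=emc_clariion0_893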