Dynamic Multi-Pathing 7.4.1 Administrator's Guide - Linux
Examples of using the vxdmpadm iostat command
Dynamic Multi-Pathing (DMP) enables you to gather and display I/O statistics with the vxdmpadm iostat command. This section walks through an example session.
The first command enables the gathering of I/O statistics:
# vxdmpadm iostat start
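Statistics gathering continues, and the counters keep accumulating, until you explicitly disable it. When a measurement session is finished, you can disable gathering with the corresponding stop subcommand:
# vxdmpadm iostat stop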
The next command displays the current statistics, including the accumulated totals of read and write operations and the kilobytes read and written, on all paths.
# vxdmpadm -u k iostat show all
cpu usage = 7952us    per cpu memory = 8192b
                  OPERATIONS            BYTES          AVG TIME(ms)
PATHNAME      READS   WRITES      READS   WRITES      READS   WRITES
sdf              87        0     44544k        0       0.00     0.00
sdk               0        0          0        0       0.00     0.00
sdg              87        0     44544k        0       0.00     0.00
sdl               0        0          0        0       0.00     0.00
sdh              87        0     44544k        0       0.00     0.00
sdm               0        0          0        0       0.00     0.00
sdi              87        0     44544k        0       0.00     0.00
sdn               0        0          0        0       0.00     0.00
sdj              87        0     44544k        0       0.00     0.00
sdo               0        0          0        0       0.00     0.00
sdj              87        0     44544k        0       0.00     0.00
sdp               0        0          0        0       0.00     0.00

The following command changes the amount of memory that vxdmpadm can use to accumulate the statistics:
# vxdmpadm iostat start memory=4096
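In addition to resizing the buffer, you may want to zero the accumulated counters at the start of a new measurement window. The reset subcommand clears the statistics so that the next display starts counting from zero:
# vxdmpadm iostat reset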
The displayed statistics can be filtered by path name, DMP node name, or enclosure name (note that the per-CPU memory has changed following the previous command):
# vxdmpadm -u k iostat show pathname=sdk
cpu usage = 8132us    per cpu memory = 4096b
                  OPERATIONS            BYTES          AVG TIME(ms)
PATHNAME      READS   WRITES      READS   WRITES      READS   WRITES
sdk               0        0          0        0       0.00     0.00
# vxdmpadm -u k iostat show dmpnodename=sdf
cpu usage = 8501us    per cpu memory = 4096b
                  OPERATIONS            BYTES          AVG TIME(ms)
PATHNAME      READS   WRITES      READS   WRITES      READS   WRITES
sdf            1088        0    557056k        0       0.00     0.00
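Besides filtering, the show operation can aggregate the statistics per object using the groupby attribute (for example groupby=dmpnode, groupby=ctlr, or groupby=enclosure; see the vxdmpadm(1M) manual page for the exact keywords that your release supports). As a sketch, the following would print one consolidated line per DMP node rather than one line per path:
# vxdmpadm -u k iostat show groupby=dmpnode
Filtering by enclosure name works in the same way: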
# vxdmpadm -u k iostat show enclosure=Disk
cpu usage = 8626us    per cpu memory = 4096b
                  OPERATIONS            BYTES          AVG TIME(ms)
PATHNAME      READS   WRITES      READS   WRITES      READS   WRITES
sdf            1088        0    557056k        0       0.00     0.00

You can also specify the number of times to display the statistics and the time interval between displays. Here the incremental statistics for a path are displayed twice with a 2-second interval:
# vxdmpadm iostat show pathname=sdk interval=2 count=2
cpu usage = 9621us    per cpu memory = 266240b
                  OPERATIONS           BLOCKS          AVG TIME(ms)
PATHNAME      READS   WRITES      READS   WRITES      READS   WRITES
sdk               0        0          0        0       0.00     0.00
sdk               0        0          0        0       0.00     0.00
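Note that without the -u k option, the throughput columns revert to blocks, as the BLOCKS header above shows. The filter, unit, and interval/count options can also be combined. As a sketch (assuming your release accepts the same attribute combinations as in the examples above), the following would display kilobyte-based statistics for the whole Disk enclosure three times at 5-second intervals:
# vxdmpadm -u k iostat show enclosure=Disk interval=5 count=3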