Description
Veritas Dynamic Multi-Pathing (DMP) provides multi-pathing functionality for the operating system native devices that are configured on the system.
When troubleshooting a storage performance-related issue, you may need to review the DMP I/O policy for SAN-attached disks.
Figure 1.0

By default, DMP uses the Minimum Queue I/O policy for load balancing across paths for all array types. Load balancing maximizes I/O throughput by using the total bandwidth of all available paths.
I/O is sent down the path that has the minimum number of outstanding I/Os.
Traditionally, Round-Robin outperforms Minimum Queue where the number of paths is greater than three.
Key Points:
Systems with a large number of paths and the DMP I/O policy set to Minimum Queue may experience high average write times (in milliseconds), which can lead to poor performance.
Minimum Queue is a CPU-intensive I/O policy, which can lead to a drop in performance.
Minimum Queue performs many calculations to select a path, hence the increased CPU overhead.
Minimum Queue provides the additional benefit of fault tolerance, because slow or failing paths accumulate outstanding I/Os and are automatically avoided.
The more code that is traversed, the more CPU is used; hence performance can be slower when using Minimum Queue in some environments.
Array Types
For active/active arrays, "MinimumQ" (also known as "Least Queue Depth") is the default I/O policy, and it often provides the best I/O performance with minimal configuration.
For Active/Passive (A/P) disk arrays, I/O is sent down the primary paths. If all of the primary paths fail, I/O is switched over to the available secondary paths. As the continuous transfer of ownership of LUNs from one controller to another results in severe I/O slowdown, load balancing across primary and secondary paths is not performed for A/P disk arrays unless they support concurrent I/O.
For other arrays, load balancing is performed across all the currently active paths. You can change the I/O policy for the paths to an enclosure or disk array. This operation is an online operation that does not impact the server or require any downtime.
To determine the DMP I/O policy of a specific enclosure, type:
# vxdmpadm getattr enclosure <enclosure_name> iopolicy
Note: I/O policies are persistent across reboots of the system.
The vxdmpadm iostat command can be used to gather and display I/O statistics for a specified DMPnode, enclosure, path, port, or controller. The statistics displayed are the CPU usage and amount of memory per CPU used to accumulate statistics, the number of read and write operations, the number of kilobytes read and written, and the average time in milliseconds per kilobyte that is read or written.
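As an illustration of how the last statistic is derived (this is a simplified sketch with hypothetical counter values, not actual vxdmpadm output or Veritas code):

```python
# Illustrative only: shows how an average service time per kilobyte can be
# derived from raw I/O counters such as those accumulated by vxdmpadm iostat.
# The counter values below are hypothetical.

def avg_ms_per_kb(total_time_ms, total_kb):
    """Average service time in milliseconds per kilobyte transferred."""
    return total_time_ms / total_kb if total_kb else 0.0

# Hypothetical counters for one path: 2048 KB transferred in 512 ms total.
print(avg_ms_per_kb(512, 2048))  # 0.25 ms per KB
```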
MinimumQ
This DMP I/O policy sends I/O on the path that has the minimum (least) number of outstanding I/O requests in the queue for a LUN. No further configuration is needed, as DMP automatically determines the path with the shortest queue.
# vxdmpadm setattr enclosure <enclosure-name> iopolicy=minimumq
NOTE: This is the default I/O policy for all arrays.
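The selection logic described above can be sketched as follows (a simplified illustration, not Veritas source code):

```python
# Simplified sketch of Minimum Queue path selection (not Veritas code):
# each path tracks its outstanding I/O count, and the next I/O is sent
# down the path with the smallest queue.

def select_path_minimumq(outstanding):
    """Return the index of the path with the fewest outstanding I/Os."""
    return min(range(len(outstanding)), key=lambda i: outstanding[i])

# Hypothetical queue depths for four paths:
queues = [3, 1, 4, 2]
print(select_path_minimumq(queues))  # path 1 has the shortest queue
```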
Round-robin
This DMP I/O policy shares I/O equally between the paths in a round-robin sequence.
For example, if there are 4 paths, the 1st I/O request would use one path, the 2nd would use a different path, the 3rd would be sent down the 3rd path, the 4th I/O would go down the 4th available (active) path, and then the cycle returns to the 1st path.
No further configuration is needed, as this policy is automatically managed by DMP.
The following command sets the I/O policy to round-robin for all Active/Active arrays:
# vxdmpadm setattr arraytype A/A iopolicy=round-robin
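The four-path example above can be sketched as (an illustration with hypothetical path names, not Veritas code):

```python
# Simplified sketch of round-robin path selection (not Veritas code):
# successive I/Os cycle through the active paths in order.
import itertools

def round_robin(paths):
    """Yield paths in a repeating round-robin sequence."""
    return itertools.cycle(paths)

rr = round_robin(["path0", "path1", "path2", "path3"])
first_five = [next(rr) for _ in range(5)]
print(first_five)  # the 5th I/O returns to path0
```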
Performance Related Tuning:
Recommended Tuning:
1. Change the I/O policy from the default Minimum Queue to Round-Robin using the command:
# vxdmpadm setattr enclosure <enclosure-name> iopolicy=round-robin
2. Set the tunable dmp_probe_idle_lun from its default value “on” to “off” using the command (for older product versions, prior to 6.1.1):
# vxdmpadm settune dmp_probe_idle_lun=off
3. Stop and restart the DMP I/O statistics daemon, which runs continuously, using the commands:
# vxdmpadm iostat stop
# vxdmpadm iostat start
4. Verify that the tunable dmp_cache_open is set to its default value “on”. If it is set to “off”, set it to “on” using the commands:
# vxdmpadm gettune dmp_cache_open
# vxdmpadm settune dmp_cache_open=on