InfoScale™ 9.0 Dynamic Multi-Pathing Administrator's Guide - Linux
- Understanding DMP
- Setting up DMP to manage native devices
- About setting up DMP to manage native devices
- Displaying the native multi-pathing configuration
- Migrating LVM volume groups to DMP
- Migrating to DMP from EMC PowerPath
- Migrating to DMP from Hitachi Data Link Manager (HDLM)
- Migrating to DMP from Linux Device Mapper Multipath
- Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
- Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
- Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
- Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
- Adding DMP devices to an existing LVM volume group or creating a new LVM volume group
- Removing DMP support for native devices
- Administering DMP
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Administering DMP using the vxdmpadm utility
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- User-friendly CLI outputs for ALUA arrays
- Displaying information about devices controlled by third-party drivers
- Displaying extended device attributes
- Suppressing or including devices from VxVM control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers, array ports, or DMP nodes
- Enabling I/O for paths, controllers, array ports, or DMP nodes
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing (LIPP)
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Configuring Array Policy Modules
- Configuring latency threshold tunable for metro/geo array
- Administering disks
- About disk management
- Discovering and configuring newly added disk devices
- Partial device discovery
- About discovering disks and dynamically adding disk arrays
- About third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing disks claimed in the DISKS category
- Displaying details about an Array Support Library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Changing the disk device naming scheme
- Discovering the association between enclosure-based disk names and OS-based disk names
- Dynamic Reconfiguration of devices
- About online Dynamic Reconfiguration
- About the DMPDR utility
- Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
- Manually reconfiguring a LUN online that is under DMP control
- Overview of manually reconfiguring a LUN
- Manually removing LUNs dynamically from an existing target ID
- Manually adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Manually cleaning up the operating system device tree after removing LUNs
- Changing the characteristics of a LUN from the array side
- Upgrading the array controller firmware online
- Reformatting NVMe devices manually
- Event monitoring
- About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
- Fabric Monitoring and proactive error detection
- Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
- DMP event logging
- Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
- Handling Fabric Performance Impact Notification (FPIN) events
- Performance monitoring and tuning
- About tuning Dynamic Multi-Pathing (DMP) with templates
- DMP tuning templates
- Example DMP tuning template
- Tuning a DMP host with a configuration attribute template
- Managing the DMP configuration files
- Resetting the DMP tunable parameters and attributes to the default values
- DMP tunable parameters and attributes that are supported for templates
- DMP tunable parameters
- Appendix A. DMP troubleshooting
- Appendix B. Reference
Handling Fabric Performance Impact Notification (FPIN) events
DMP handles switch-level Fabric Performance Impact Notification (FPIN) events to avoid degradation in I/O performance. In certain cases, the I/O performance of the storage subsystem degrades because of a link failure or a congestion event in the fabric. Multipathing solutions that manage a storage subsystem can avoid the affected paths and choose an alternate path for I/O.
To mitigate such performance degradation, fabric vendors provide functionality to receive fabric-specific notifications for each path. To receive these notifications, the host (end device) must register with the fabric. After registration, the host receives the events through a Fibre Channel Extended Link Service (ELS) function known as Fabric Performance Impact Notifications (FPINs). Multipathing solutions can use these events to evaluate error conditions and switch to an alternate path, if required.
The DMP event source daemon (vxesd) listens for fabric events over a netlink socket and reacts to the fabric performance notification events. The vxesd daemon analyzes the FPIN events to identify the affected path and avoids using that path until the link failure or the congestion event subsides, which prevents the I/O throughput degradation and performance blips that such events would otherwise cause. After the link failure or the congestion subsides, the affected path is used for I/O again.
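Because FPIN handling depends on vxesd, you may want to confirm that the daemon is running before you rely on this functionality. A typical status check is shown below; see "Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon" for the complete start and stop procedures:
# vxddladm status eventsource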
The vxesd daemon monitors the FPIN events and marks a path as a standby path when it receives a link integrity event. While a path is marked as standby, DMP does not send any new I/O to that path unless the standby path is the last path to the device. DMP avoids the standby path to reduce the performance impact of the event. After the link condition is resolved, DMP moves the standby path back to the active state and resumes I/O on that path.
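To see which paths DMP has set aside because of a link integrity event, list the subpaths of the affected DMP node. The DMP node name emc0_0123 below is only a placeholder for illustration; substitute the name reported on your system. Standby paths typically show a standby attribute in the ATTRS column of the output:
# vxdmpadm getsubpaths dmpnodename=emc0_0123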
Note:
The FPIN event monitoring functionality does not work with an SRDF or a VPLEX Metro configuration. Arctera recommends that you disable this feature in such configurations.
The DMP tunable dmp_monitor_fpin_event determines whether the FPIN monitoring functionality is enabled. The tunable is disabled by default.
To display the current status of the FPIN monitoring functionality, use the following command:
# vxdmpadm gettune dmp_monitor_fpin_event
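The command reports the current and the default values of the tunable. The following output is representative only; the exact column layout may vary by release:
            Tunable                  Current Value  Default Value
------------------------------      -------------  -------------
dmp_monitor_fpin_event                     off            off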
To enable the FPIN monitoring functionality, use the following command:
# vxdmpadm settune dmp_monitor_fpin_event=on
To disable the FPIN monitoring functionality, use the following command:
# vxdmpadm settune dmp_monitor_fpin_event=off
The value of the tunable is persistent across restarts.