InfoScale™ 9.0 Dynamic Multi-Pathing Administrator's Guide - Solaris
- Understanding DMP
- About Dynamic Multi-Pathing (DMP)
- How DMP works
- Multi-controller ALUA support
- Multiple paths to disk arrays
- Device discovery
- Disk devices
- Disk device naming in DMP
- Setting up DMP to manage native devices
- About setting up DMP to manage native devices
- Displaying the native multi-pathing configuration
- Migrating ZFS pools to DMP
- Migrating to DMP from EMC PowerPath
- Migrating to DMP from Hitachi Data Link Manager (HDLM)
- Migrating to DMP from Solaris Multiplexed I/O (MPxIO)
- Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
- Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
- Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
- Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
- Enabling and disabling DMP support for the ZFS root pool
- Adding DMP devices to an existing ZFS pool or creating a new ZFS pool
- Removing DMP support for native devices
- Administering DMP
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Managing DMP devices for the ZFS root pool
- Administering DMP using the vxdmpadm utility
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- User-friendly CLI outputs for ALUA arrays
- Displaying information about devices controlled by third-party drivers
- Displaying extended device attributes
- Suppressing or including devices from VxVM control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers, array ports, or DMP nodes
- Enabling I/O for paths, controllers, array ports, or DMP nodes
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing (LIPP)
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Configuring Array Policy Modules
- Configuring latency threshold tunable for metro/geo array
- Administering disks
- About disk management
- Discovering and configuring newly added disk devices
- Partial device discovery
- About discovering disks and dynamically adding disk arrays
- About third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing disks claimed in the DISKS category
- Displaying details about an Array Support Library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- VxVM coexistence with ZFS
- Changing the disk device naming scheme
- Discovering the association between enclosure-based disk names and OS-based disk names
- Dynamic Reconfiguration of devices
- About online Dynamic Reconfiguration
- About the DMPDR utility
- Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
- Manually reconfiguring a LUN online that is under DMP control
- Overview of manually reconfiguring a LUN
- Manually removing LUNs dynamically from an existing target ID
- Manually adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Manually cleaning up the operating system device tree after removing LUNs
- Manually replacing a host bus adapter on an M5000 server
- Changing the characteristics of a LUN from the array side
- Upgrading the array controller firmware online
- Event monitoring
- About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
- Fabric Monitoring and proactive error detection
- Dynamic Multi-Pathing (DMP) automated device discovery
- Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
- DMP event logging
- Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
- Performance monitoring and tuning
- About tuning Dynamic Multi-Pathing (DMP) with templates
- DMP tuning templates
- Example DMP tuning template
- Tuning a DMP host with a configuration attribute template
- Managing the DMP configuration files
- Resetting the DMP tunable parameters and attributes to the default values
- DMP tunable parameters and attributes that are supported for templates
- DMP tunable parameters
- Appendix A. DMP troubleshooting
- Appendix B. Reference
Manually replacing a host bus adapter on an M5000 server
This section contains the procedure to replace an online host bus adapter (HBA) when DMP is managing multi-pathing in a Cluster File System (CFS) cluster. The HBA World Wide Port Name (WWPN) changes when the HBA is replaced. The prerequisites for replacing an online host bus adapter are:
- A single-node or multi-node CFS or RAC cluster.
- I/O running on the CFS file system.
- An M5000 server with at least two HBAs in separate PCIe slots, and the Solaris patch level that is recommended for HBA replacement.
To replace an online host bus adapter on an M5000 server
- Identify the HBAs on the M5000 server. For example, to identify Emulex HBAs, enter the following command:
# /usr/platform/sun4u/sbin/prtdiag -v | grep emlx
00  PCIe 0  2, fc20, 10df  119, 0, 0  okay  4, 4  SUNW,emlxs-pci10df,fc20  LPe 11002-S  /pci@0,600000/pci@0/pci@9/SUNW,emlxs@0
00  PCIe 0  2, fc20, 10df  119, 0, 1  okay  4, 4  SUNW,emlxs-pci10df,fc20  LPe 11002-S  /pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1
00  PCIe 3  2, fc20, 10df  2, 0, 0    okay  4, 4  SUNW,emlxs-pci10df,fc20  LPe 11002-S  /pci@3,700000/SUNW,emlxs@0
00  PCIe 3  2, fc20, 10df  2, 0, 1    okay  4, 4  SUNW,emlxs-pci10df,fc20  LPe 11002-S  /pci@3,700000/SUNW,emlxs@0,1
- Identify the HBA that you want to replace and its WWPN(s) by using the cfgadm command.
To identify the HBA, enter the following:
# cfgadm -al | grep -i fibre
iou#0-pci#1    fibre/hp    connected    configured    ok
iou#0-pci#4    fibre/hp    connected    configured    ok
To list all HBAs, enter the following:
# luxadm -e port
/devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0/fp@0,0:devctl      NOT CONNECTED
/devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0:devctl    CONNECTED
/devices/pci@3,700000/SUNW,emlxs@0/fp@0,0:devctl                  NOT CONNECTED
/devices/pci@3,700000/SUNW,emlxs@0,1/fp@0,0:devctl                CONNECTED
To dump the port map of the selected HBA and obtain its WWPN(s), enter the following:
# luxadm -e dump_map /devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0:devctl
0  304700  0  203600a0b847900c  200600a0b847900c  0x0   (Disk device)
1  30a800  0  20220002ac00065f  2ff70002ac00065f  0x0   (Disk device)
2  30a900  0  21220002ac00065f  2ff70002ac00065f  0x0   (Disk device)
3  560500  0  10000000c97c3c2f  20000000c97c3c2f  0x1f  (Unknown Type)
4  560700  0  10000000c97c9557  20000000c97c9557  0x1f  (Unknown Type)
5  560b00  0  10000000c97c34b5  20000000c97c34b5  0x1f  (Unknown Type)
6  560900  0  10000000c973149f  20000000c973149f  0x1f  (Unknown Type,Host Bus Adapter)
Alternatively, you can run the Solaris fcinfo hba-port command to get the WWPN(s) for the HBA ports, as shown in the example below.
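The fcinfo hba-port command prints one block of attributes per HBA port. The following output is an abridged, illustrative sample; the WWNs match the dump_map example above, and the /dev/cfg/c2 device name is an assumption for this sketch:
# fcinfo hba-port
HBA Port WWN: 10000000c97c3c2f
        OS Device Name: /dev/cfg/c2
        Manufacturer: Emulex
        Model: LPe11002-S
        Type: N-port
        State: online
        Node WWN: 20000000c97c3c2f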
- Ensure you have a compatible spare HBA for hot-swap.
- Stop the I/O operations on the HBA port(s) and disable the DMP subpath(s) for the HBA that you want to replace.
# vxdmpadm disable ctlr=ctlr#
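If you do not know which controller name to substitute for ctlr#, you can list the controllers that DMP manages and match them to the device paths that you identified earlier. The following is an illustrative sketch; the controller names c2 and c3 and the enclosure name emc0 are assumptions:
# vxdmpadm listctlr all
CTLR-NAME    ENCLR-TYPE    STATE      ENCLR-NAME
=================================================
c2           EMC           ENABLED    emc0
c3           EMC           ENABLED    emc0
# vxdmpadm disable ctlr=c2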
- Dynamically unconfigure the HBA in the PCIe slot using the cfgadm command.
# cfgadm -c unconfigure iou#0-pci#1
Check the console messages to verify whether the cfgadm command succeeded. If the command is unsuccessful, troubleshoot by using the server hardware documentation, check the Solaris 11 patch level that is recommended for dynamic reconfiguration operations, and contact Sun support for further assistance.
console messages
Oct 24 16:21:44 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
card is removed from the slot iou#0-pci#1
- Verify that the HBA card that you unconfigured in the previous step is no longer in the configuration. Enter the following command:
# cfgadm -al | grep -i fibre
iou#0-pci#4    fibre/hp    connected    configured    ok
- Mark the fiber cable(s).
- Remove the fiber cable(s) and the HBA that you must replace.
For more information, see the HBA replacement procedures in the SPARC Enterprise M4000/M5000/M8000/M9000 Servers Dynamic Reconfiguration (DR) User's Guide.
- Replace the HBA with a compatible HBA of the same type in the same slot. The reinserted card shows up as follows:
console messages
iou#0-pci#1    unknown    disconnected    unconfigured    unknown
- Bring the replaced HBA back into the configuration. Enter the following:
# cfgadm -c configure iou#0-pci#1
console messages
Oct 24 16:21:57 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
card is inserted in the slot iou#0-pci#1 (pci dev 0)
- Verify that the reinserted HBA is in the configuration. Enter the following:
# cfgadm -al | grep -i fibre
iou#0-pci#1    fibre/hp    connected    configured    ok  <====
iou#0-pci#4    fibre/hp    connected    configured    ok
- Modify fabric zoning to include the replaced HBA WWPN(s).
- Enable LUN security on storage for the new WWPN(s).
- Perform an operating system device scan to re-discover the LUNs. Enter the following:
# cfgadm -c configure c3
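If you are not sure which controller instance to rescan, you can list the Fibre Channel attachment points first. In this illustrative sketch, the replaced HBA port appears as attachment point c3; the controller number on your system may differ:
# cfgadm -al | grep fc-
c3             fc-fabric    connected    unconfigured    unknown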
- Clean up the device tree for old LUNs. Enter the following:
# devfsadm -Cv
Note:
Sometimes replacing an HBA creates new devices. Perform cleanup operations for the LUN only when new devices are created.
- If DMP does not show a ghost path for the removed HBA path, enable the path by using the vxdmpadm command. This performs a device scan for the subpath(s) of that particular HBA. Enter the following:
# vxdmpadm enable ctlr=ctlr#
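To confirm that the subpaths on the re-enabled controller have returned to the ENABLED state, you can list them. The following is an illustrative sketch; the controller name c2 and the path, DMP node, and enclosure names are assumptions, and the exact columns vary by release:
# vxdmpadm getsubpaths ctlr=c2
NAME        STATE[A]     PATH-TYPE[M]  DMPNODENAME  ENCLR-TYPE  ENCLR-NAME  ATTRS
==================================================================================
c2t0d0s2    ENABLED(A)   PRIMARY       emc0_0       EMC         emc0        -
c2t0d1s2    ENABLED(A)   PRIMARY       emc0_1       EMC         emc0        -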
- Verify that I/O operations are scheduled on that path, as shown in the example below. If I/O operations run correctly on all paths, the dynamic HBA replacement operation is complete.
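For example, you can use the DMP I/O statistics facility to watch I/O flow across the paths of the replaced HBA (see Gathering and displaying I/O statistics). The controller name c2 in this sketch is an assumption:
# vxdmpadm iostat start
# vxdmpadm iostat show ctlr=c2 interval=5 count=2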