InfoScale™ 9.0 Dynamic Multi-Pathing Administrator's Guide - Solaris
- Understanding DMP
- About Dynamic Multi-Pathing (DMP)
- How DMP works
- Multi-controller ALUA support
- Multiple paths to disk arrays
- Device discovery
- Disk devices
- Disk device naming in DMP
- Setting up DMP to manage native devices
- About setting up DMP to manage native devices
- Displaying the native multi-pathing configuration
- Migrating ZFS pools to DMP
- Migrating to DMP from EMC PowerPath
- Migrating to DMP from Hitachi Data Link Manager (HDLM)
- Migrating to DMP from Solaris Multiplexed I/O (MPxIO)
- Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
- Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
- Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
- Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
- Enabling and disabling DMP support for the ZFS root pool
- Adding DMP devices to an existing ZFS pool or creating a new ZFS pool
- Removing DMP support for native devices
- Administering DMP
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Managing DMP devices for the ZFS root pool
- Administering DMP using the vxdmpadm utility
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- User-friendly CLI outputs for ALUA arrays
- Displaying information about devices controlled by third-party drivers
- Displaying extended device attributes
- Suppressing or including devices from VxVM control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers, array ports, or DMP nodes
- Enabling I/O for paths, controllers, array ports, or DMP nodes
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing (LIPP)
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Configuring Array Policy Modules
- Configuring latency threshold tunable for metro/geo array
- Administering disks
- About disk management
- Discovering and configuring newly added disk devices
- Partial device discovery
- About discovering disks and dynamically adding disk arrays
- About third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing disks claimed in the DISKS category
- Displaying details about an Array Support Library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- VxVM coexistence with ZFS
- Changing the disk device naming scheme
- Discovering the association between enclosure-based disk names and OS-based disk names
- Dynamic Reconfiguration of devices
- About online Dynamic Reconfiguration
- About the DMPDR utility
- Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
- Manually reconfiguring a LUN online that is under DMP control
- Overview of manually reconfiguring a LUN
- Manually removing LUNs dynamically from an existing target ID
- Manually adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Manually cleaning up the operating system device tree after removing LUNs
- Manually replacing a host bus adapter on an M5000 server
- Changing the characteristics of a LUN from the array side
- Upgrading the array controller firmware online
- Event monitoring
- About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
- Fabric Monitoring and proactive error detection
- Dynamic Multi-Pathing (DMP) automated device discovery
- Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
- DMP event logging
- Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
- Performance monitoring and tuning
- About tuning Dynamic Multi-Pathing (DMP) with templates
- DMP tuning templates
- Example DMP tuning template
- Tuning a DMP host with a configuration attribute template
- Managing the DMP configuration files
- Resetting the DMP tunable parameters and attributes to the default values
- DMP tunable parameters and attributes that are supported for templates
- DMP tunable parameters
- Appendix A. DMP troubleshooting
- Appendix B. Reference
VxVM coexistence with ZFS
ZFS is a pooled-storage file system that Oracle provides for Solaris. File systems draw space directly from a common storage pool (zpool). Veritas Volume Manager (VxVM) can be used on the same system as ZFS disks.
VxVM protects devices in use by ZFS from any VxVM operations that may overwrite the disk. These operations include initializing the disk for use by VxVM or encapsulating the disk. If you attempt to perform one of these VxVM operations on a device that is in use by ZFS, VxVM displays an error message.
Before you can manage a ZFS disk with VxVM, you must remove it from ZFS control. Similarly, to begin managing a VxVM disk with ZFS, you must remove the disk from VxVM control.
To determine if a disk is in use by ZFS
- Use the vxdisk list command:
# vxdisk list
DEVICE                   TYPE          DISK  GROUP  STATUS
c1t0d0s2                 auto:none     -     -      online invalid
c1t1d0s2                 auto:none     -     -      online invalid
c2t5006016130603AE5d2s2  auto:ZFS      -     -      ZFS
c2t5006016130603AE5d3s2  auto:SVM      -     -      SVM
c2t5006016130603AE5d4s2  auto:cdsdisk  -     -      online
c2t5006016130603AE5d5s2  auto:cdsdisk  -     -      online
To reuse a VxVM disk as a ZFS disk
- If the disk is in a disk group, remove the disk from the disk group or destroy the disk group.
To remove the disk from the disk group:
# vxdg [-g diskgroup] rmdisk diskname
To destroy the disk group:
# vxdg destroy diskgroup
- Remove the disk from VxVM control:
# /usr/lib/vxvm/bin/vxdiskunsetup diskname
- You can now initialize the disk as a ZFS device using ZFS tools.
See the Oracle documentation for details.
You must perform steps 1 and 2 in order for VxVM to recognize the disk as a ZFS device.
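As an illustration, the steps above can be combined into one sequence. This is a sketch only: the disk group name mydg, the disk name mydisk, the pool name tank, and the device name are placeholders, not values from your configuration.

```shell
# Step 1: remove the disk from its disk group (or run "vxdg destroy mydg"
# instead if the disk group is no longer needed).
vxdg -g mydg rmdisk mydisk

# Step 2: remove the disk from VxVM control.
/usr/lib/vxvm/bin/vxdiskunsetup c2t5006016130603AE5d4

# Step 3: initialize the disk as a ZFS device, here by adding it to a new pool.
zpool create tank c2t5006016130603AE5d4
```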
To reuse a ZFS disk as a VxVM disk
- Remove the disk from the zpool, or destroy the zpool.
See the Oracle documentation for details.
- Clear the signature block using the dd command:
# dd if=/dev/zero of=/dev/rdsk/c#t#d#s# oseek=16 bs=512 count=1
Where c#t#d#s# is the disk slice on which the ZFS device is configured. If the whole disk is used as the ZFS device, clear the signature block on slice 0.
- You can now initialize the disk as a VxVM device using the vxdiskadm command or the vxdisksetup command.
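To see what the signature-clearing dd command in step 2 actually does, the following sketch reproduces it against a scratch file instead of a raw device. The file /tmp/fakedisk and the planted label are stand-ins for illustration only; note that GNU and BSD dd spell the Solaris oseek= operand as seek=, which is the only change from the procedure above.

```shell
# Create a 32-block (16 KiB) scratch "disk" standing in for /dev/rdsk/c#t#d#s#.
dd if=/dev/zero of=/tmp/fakedisk bs=512 count=32 2>/dev/null

# Plant a fake signature in block 16, where the procedure expects one.
printf 'FAKE_LABEL' | dd of=/tmp/fakedisk bs=512 seek=16 conv=notrunc 2>/dev/null

# Clear block 16, exactly as the procedure does on the real device
# (Solaris: oseek=16; GNU/BSD: seek=16; conv=notrunc keeps the file size).
dd if=/dev/zero of=/tmp/fakedisk bs=512 seek=16 count=1 conv=notrunc 2>/dev/null

# Count the non-zero bytes remaining in block 16; 0 means the signature is gone.
dd if=/tmp/fakedisk bs=512 skip=16 count=1 2>/dev/null | tr -d '\0' | wc -c
```

On a real disk, only the single dd invocation from step 2 is needed; this sketch merely makes its effect observable without touching a device.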