Dynamic Multi-Pathing 7.3.1 Administrator's Guide - Linux
Last Published: 2017-11-04
Product(s): InfoScale & Storage Foundation (7.3.1)
Adding DMP devices to an existing LVM volume group or creating a new LVM volume group
When the dmp_native_support tunable is ON, you can create a new LVM volume group on an available DMP device, or add an available DMP device to an existing LVM volume group. After the LVM volume groups reside on DMP devices, you can use any of the LVM commands to manage the volume groups (see the sketch after this procedure).
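If you are not sure whether native device support is enabled, you can check the tunable before you begin and turn it on if needed. The commands below are the standard vxdmpadm gettune/settune invocations; the output shown is illustrative:
# vxdmpadm gettune dmp_native_support
Tunable                        Current Value   Default Value
----------------------------   -------------   -------------
dmp_native_support             on              off
# vxdmpadm settune dmp_native_support=on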
To create a new LVM volume group on a DMP device or add a DMP device to an existing LVM volume group
- Choose disks that are available for use by LVM.
Use the vxdisk list command to identify these types of disks.
Disks that are not in use by VxVM appear in the vxdisk list output with the type auto:none and the status online invalid.
The following example shows two available disks.
# vxdisk list
DEVICE                TYPE       DISK   GROUP   STATUS
. . .
tagmastore-usp0_0035  auto:none  -      -       online invalid
tagmastore-usp0_0036  auto:none  -      -       online invalid
- Create a new LVM volume group on a DMP device.
Use the complete path name for the DMP device.
# pvcreate /dev/vx/dmp/tagmastore-usp0_0035
Physical volume "/dev/vx/dmp/tagmastore-usp0_0035" successfully created
# vgcreate /dev/newvg /dev/vx/dmp/tagmastore-usp0_0035
Volume group "newvg" successfully created
# vgdisplay -v newvg |grep Name
Using volume group(s) on command line
Finding volume group "newvg"
VG Name               newvg
PV Name               /dev/vx/dmp/tagmastore-usp0_0035s3
- Add a DMP device to an existing LVM volume group.
Use the complete path name for the DMP device.
# pvcreate /dev/vx/dmp/tagmastore-usp0_0036
Physical volume "/dev/vx/dmp/tagmastore-usp0_0036" successfully created
# vgextend newvg /dev/vx/dmp/tagmastore-usp0_0036
Volume group "newvg" successfully extended
# vgdisplay -v newvg |grep Name
Using volume group(s) on command line
Finding volume group "newvg"
VG Name               newvg
PV Name               /dev/vx/dmp/tagmastore-usp0_0035s3
PV Name               /dev/vx/dmp/tagmastore-usp0_0036s3
- Run the following command to trigger DMP discovery of the devices:
# vxdisk scandisks
- After the discovery completes, the disks are shown as in use by LVM:
# vxdisk list
. . .
tagmastore-usp0_0035  auto:LVM   -      -       LVM
tagmastore-usp0_0036  auto:LVM   -      -       LVM
- For all of the LVM volume entries, add the _netdev mount option in /etc/fstab, as shown in the sketch below. This option ensures that the volumes are mounted only after the DMP devices have been discovered.
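The following sketch illustrates these follow-on steps using hypothetical names (a logical volume datalv and a mount point /mnt/data) that do not appear in the procedure above. Because the volume group now resides on DMP devices, the standard LVM commands apply:
# lvcreate -L 10G -n datalv newvg
# mkfs.ext4 /dev/newvg/datalv
# mkdir -p /mnt/data
The corresponding /etc/fstab entry carries the _netdev mount option:
/dev/newvg/datalv  /mnt/data  ext4  defaults,_netdev  0 0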