VxDMP does not re-enable paths that are in the DISABLED(M) state after a previously disabled storage port is re-enabled.
Once the port is re-enabled at the storage/SAN level, the Linux SCSI udev add events correctly discover and enable the OS device paths in the Operating System, but the "/lib/udev/vxvm-udev.sh" script does not re-enable the corresponding VxDMP paths automatically as expected.
To clearly understand how VxDMP handles udev device add events, please refer to the following article:
Explanation of the problem where paths remain in the DISABLED(M) state:
On Linux, vxesd listens for udev disk add events and triggers discovery of these devices. At boot time there is a gap between vxconfigd startup (vxvm-startup.sh) and vxesd startup (vxvm-recover), during which some udev events might be lost. To handle this, device names discovered during the gap are recorded in the UDEVFILE, "/etc/vx/.udevfile". After vxesd starts, if the UDEVFILE is non-empty, the vxvm-recover script triggers discovery for every device listed in it. Once that discovery completes, the UDEVFILE "/etc/vx/.udevfile" should be removed from the system.
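The catch-up step performed by vxvm-recover can be sketched as follows. This is a simplified illustration, not the actual script source; the replay_udev_backlog name and the "discovering ..." log lines are hypothetical.

```shell
# Simplified sketch of the vxvm-recover catch-up step described above;
# not the literal script source. The replay_udev_backlog helper name
# and the "discovering ..." log lines are illustrative only.
UDEVFILE="${UDEVFILE:-/etc/vx/.udevfile}"

replay_udev_backlog() {
    if [ -s "$UDEVFILE" ]; then
        while IFS= read -r devname; do
            # The real script triggers VxVM device discovery here;
            # this sketch only logs which device would be discovered.
            echo "discovering $devname"
        done < "$UDEVFILE"
    fi
    # After the backlog has been replayed, the file is removed so
    # that later udev add events are posted directly to vxesd.
    rm -f "$UDEVFILE"
}
```

If the removal at the end never happens, every later udev add event is diverted into the file instead of reaching vxesd, which is the failure mode described below.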
However, if the UDEVFILE is not cleared, that is, "/etc/vx/.udevfile" still exists and lists device paths, then discovery of those device paths is skipped by the "/lib/udev/vxvm-udev.sh" script, which is invoked immediately after each SCSI udev event.
Whenever a udev add event is triggered, the "/lib/udev/vxvm-udev.sh" script checks for the existence of "/etc/vx/.udevfile"; if the file exists, the script appends the device to it instead of posting the event to vxesd, so the affected paths remain in the DISABLED(M) state.
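A minimal sketch of the gating behaviour just described. This is an illustration, not the literal "/lib/udev/vxvm-udev.sh" source; the handle_udev_add name and its log output are hypothetical.

```shell
# Simplified sketch of the gating logic in /lib/udev/vxvm-udev.sh as
# described above; not the literal script source. handle_udev_add and
# its log output are illustrative only.
UDEVFILE="${UDEVFILE:-/etc/vx/.udevfile}"

handle_udev_add() {
    devname="$1"
    if [ -e "$UDEVFILE" ]; then
        # Backlog file still present: queue the device name for the
        # vxvm-recover replay instead of notifying vxesd. This is why
        # the corresponding DMP path stays in DISABLED(M) state.
        echo "$devname" >> "$UDEVFILE"
    else
        # Normal case: post the add event to vxesd so that VxDMP
        # discovers and enables the path. (Real mechanism omitted.)
        echo "posting add event for $devname to vxesd"
    fi
}
```

Note that the mere existence of the file is enough to divert the event; the script does not check whether vxvm-recover is still pending.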
This issue can occur in the following situations:
1.) The Storage Foundation product is installed but not configured, the system is rebooted, and a manual configuration is then performed. Until the vxvm-recover script runs, i.e. at the next boot, "/etc/vx/.udevfile" may exist with device paths listed. All device paths presented to the host may therefore appear in this file, so VxDMP discovery for those devices does not occur, leaving the paths in the DISABLED(M) state.
2.) The system boots successfully, but vxconfigd never reaches the "enabled" state by the time the "vxvm-recover" script runs. Although rare, this can happen under certain conditions, for example underlying device issues that prevent vxconfigd from starting in "enabled" mode at boot. "/etc/vx/.udevfile" is then left on the system with device paths listed, leading to the same situation: any device paths that enter the DISABLED(M) state are not re-enabled in VxDMP automatically, even after the SCSI layer rediscovers them as enabled.
If paths have already become DISABLED(M) and were not re-enabled due to the issues explained above, simply remove or truncate the "/etc/vx/.udevfile" file and then run "vxdctl scandisks" to re-enable the devices.
If the situation has not yet arisen but "/etc/vx/.udevfile" is found to exist with devices listed in it, simply remove or truncate the file and then run "vxdctl enable". This ensures that the "/lib/udev/vxvm-udev.sh" script will enable any paths that are discovered by subsequent SCSI udev disk add events.
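Both recovery procedures share the same first step; a sketch is shown below. The clear_udev_backlog helper name is hypothetical, while the file path and the vxdctl commands come from the procedures above. The vxdctl calls are commented out because they require a configured VxVM installation.

```shell
# Recovery sketch for the workarounds above. clear_udev_backlog is a
# hypothetical helper name; the file path and vxdctl commands are from
# the article. vxdctl calls are commented out since they need a
# configured VxVM installation.
UDEVFILE="${UDEVFILE:-/etc/vx/.udevfile}"

clear_udev_backlog() {
    if [ -e "$UDEVFILE" ]; then
        : > "$UDEVFILE"    # truncate; removing the file also works
    fi
}

clear_udev_backlog
# If paths are already in the DISABLED(M) state, rescan to re-enable:
#   vxdctl scandisks
# If the stale file was caught before any paths were affected:
#   vxdctl enable
```

Truncating rather than deleting avoids any race with a concurrent udev event appending to the file at the moment of removal; either way, once the file is gone or empty and discovery is re-triggered, subsequent udev add events reach vxesd normally.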
Storage Foundation 5.1SP1_x and/or 6.x on Linux platforms with VxDMP enabled