SOLARIS: How to convert the cfgadm failing state to the intended unusable state

  • Article ID: 100044068
  • Last Published:
  • Product(s): InfoScale & Storage Foundation

Description

This document outlines how to convert the cfgadm "failing" state for a given access path to the intended "unusable" state when a LUN has been intentionally removed at the storage array layer.

 

SCENARIO

The storage administrator has removed the LUN from the assigned server without informing the system administrator.

Because the correct host-side LUN removal sequence was never executed, cfgadm (Leadville stack) reports the access path state as "failing" rather than "unusable".

 

KEY POINTS

  •   The Peripheral Qualifier bit returns a non-zero value

  •   The cfgadm access paths report failing instead of the expected unusable state
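The failing-versus-unusable distinction stems from the SCSI INQUIRY response: byte 0 encodes the peripheral qualifier in its top three bits and the device type in the remaining five. A minimal decoding sketch (the byte value 0x20 is an assumed example for a masked LUN, not output captured from this system):

```shell
# Byte 0 of the SCSI INQUIRY response = (qualifier << 5) | device_type.
# Qualifier 0 = LUN connected; qualifier 1 (byte 0x20) = LUN no longer
# connected at this address, which is what a masked LUN typically returns.
byte0=0x20                          # assumed example value for a masked LUN
qualifier=$(( (byte0 >> 5) & 0x7 ))
devtype=$(( byte0 & 0x1f ))
echo "qualifier=${qualifier} device_type=${devtype}"
```

A qualifier of 0 with device type 0 would indicate a normally connected disk; any non-zero qualifier is what this article refers to as the "non-zero Peripheral Qualifier bit".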

 

REPRODUCTION STEPS

 

1.       The script remove_lun.sh performs the following commands to remove the LUN from the storage group:

 

          # /opt/Navisphere/bin/naviseccli -h <IP_address> -secfilepath /root storagegroup -list -gname vmsolaris01
 

          # /opt/Navisphere/bin/naviseccli -h <IP_address> -secfilepath /root storagegroup -removehlu -gname vmsolaris01 -hlu 1 -o

 

2.       Verify that HLU 1 and ALU (AVID) 5 have been removed from the Storage Group:

 

# /opt/Navisphere/bin/naviseccli -h <IP_address> -secfilepath /root storagegroup -list -gname vmsolaris01


Storage Group Name:    vmsolaris01
Storage Group UID:     8F:65:89:A4:4A:9A:E8:11:BF:A6:00:60:16:3A:86:D9
HBA/SP Pairs:

  HBA UID                                          SP Name     SPPort
  -------                                          -------     ------
  20:00:00:00:C9:7B:FA:12:10:00:00:00:C9:7B:FA:12   SP B         1
  20:00:00:00:C9:7B:FA:13:10:00:00:00:C9:7B:FA:13   SP B         1
  10:00:00:00:C9:7B:FA:12:20:00:00:00:C9:7B:FA:12   SP A         0
  10:00:00:00:C9:7B:FA:13:20:00:00:00:C9:7B:FA:13   SP A         0
  20:00:00:00:C9:7B:FA:12:10:00:00:00:C9:7B:FA:12   SP A         1
  20:00:00:00:C9:7B:FA:13:10:00:00:00:C9:7B:FA:13   SP A         1

HLU/ALU Pairs:

  HLU Number     ALU Number
  ----------     ----------
    0               492
    2               489
    3               75
    4               458
    5               73
    6               491
    7               457
    8               493
    9               72
Shareable:             YES

 

3.       Define the LUN variable to reference the required DMPNODE:

 

          # LUN="emc_clariion0_5"

 

4.       Extract the c#::<WWN> details for the paths associated with the DMPNODE:

 

          # ACCESS=`vxdisk path | sed "s/t/::/g;s/d/ /g;s/s2//g" | egrep "(${LUN})" | awk '{ print $1","$2 }' | tr '[:upper:]' '[:lower:]' | tr -s "\n" "|"`
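To illustrate what this pipeline produces, the same sed/awk/tr chain can be applied to a single hand-typed path name (the sample value below is hypothetical, modelled on the paths shown later in this article):

```shell
# Hypothetical OS path name, as printed in column 1 of `vxdisk path`.
sample="c3t500601613EA07595d1s2"

# Same transformation as step 4: t -> ::, d -> space, strip the s2
# slice suffix, take the two resulting fields, and lowercase the WWN.
echo "$sample" | sed "s/t/::/g;s/d/ /g;s/s2//g" \
    | awk '{ print $1","$2 }' | tr '[:upper:]' '[:lower:]'
# -> c3::500601613ea07595,1
```

The result is the cfgadm Ap_Id plus LUN number (c#::wwn,lun), which is exactly the form that the cfgadm show_FCP_dev output uses, so the assembled ACCESS pattern can be fed straight to egrep.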

 

5.       The cfgadm output reflects the failing state for each access path:

 

# cfgadm -alo show_FCP_dev | egrep "${ACCESS}"
c3::500601613ea07595,1         disk         connected    configured   failing
c3::500601693ea07595,1         disk         connected    configured   failing
c4::500601613ea07595,1         disk         connected    configured   failing
c4::500601693ea07595,1         disk         connected    configured   failing

 

 

6.       Because the LUN has been intentionally masked at the storage layer, the Peripheral Qualifier bit returns a non-zero value. This can normally be verified with the Veritas vxscsiinq utility.

 

As the LUN is no longer accessible in this instance, vxscsiinq returns errors similar to the following:

 

# ./vxdmpadm_remove_list | grep $LUN

ioctl failed: I/O error

VxVM vxscsiinq ERROR V-5-1-8853 /etc/vx/diag.d/vxscsiinq for /dev/rdsk/c3t500601613EA07595d1s2. ioctl failed., evpd 0 page code 0. I/O error

c3t500601613EA07595d1s2 ENABLED(A) Active/Optimized(P) emc_clariion0_5 emc_clariion0 c3              -         -

ioctl failed: I/O error

VxVM vxscsiinq ERROR V-5-1-8853 /etc/vx/diag.d/vxscsiinq for /dev/rdsk/c3t500601693EA07595d1s2. ioctl failed., evpd 0 page code 0. I/O error

c3t500601693EA07595d1s2 ENABLED    Active/Non-Optimized emc_clariion0_5 emc_clariion0 c3              -         -

ioctl failed: I/O error

VxVM vxscsiinq ERROR V-5-1-8853 /etc/vx/diag.d/vxscsiinq for /dev/rdsk/c4t500601613EA07595d1s2. ioctl failed., evpd 0 page code 0. I/O error

c4t500601613EA07595d1s2 ENABLED(A) Active/Optimized(P) emc_clariion0_5 emc_clariion0 c4              -         -

c4t500601693EA07595d1s2 ENABLED    Active/Non-Optimized emc_clariion0_5 emc_clariion0 c4              -         -      20

 

 

SCRIPT

 

# cat vxdmpadm_remove_list

#! /bin/ksh
# vxdmpadm_remove_list: list every DMP subpath together with the
# Peripheral Qualifier / Device Type byte reported by vxscsiinq.

# Save the current field separator, then split on newlines only, so each
# multi-column output line is treated as a single loop item.
OIFS=$IFS
IFS=$'\n'

details=`vxdmpadm getsubpaths all | egrep -v "NAME|="`
headerline=`vxdmpadm getsubpaths all | ggrep -a NAME`

append="Peripheral Qualifier/ Device Type"
echo "$headerline     $append"
echo "==================================================================================================================================="

for Line in $details
do
        # Column 1 is the OS native path name (c#t#d#s#).
        path=`echo $Line | awk '{print $1}'`
        append=`/etc/vx/diag.d/vxscsiinq -d $path | ggrep -a Peripheral | awk '{print $5}'`
        echo "$Line      $append"
done

# Restore the original field separator.
IFS=$OIFS
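The newline-IFS idiom the script relies on can be seen in isolation with canned input (the two sample lines below are placeholders standing in for vxdmpadm getsubpaths output):

```shell
# Save IFS and override it so each whole line, not each word, is a loop item.
OIFS=$IFS
IFS=$'\n'

# Canned two-line sample standing in for `vxdmpadm getsubpaths all` output.
details="c3t500601613EA07595d1s2 ENABLED(A) col3
c4t500601613EA07595d1s2 ENABLED(A) col3"

for Line in $details
do
    # awk still splits on whitespace, so column 1 is the path name.
    path=`echo $Line | awk '{print $1}'`
    echo "$path"
done

# Restore the original field separator.
IFS=$OIFS
```

Without the IFS override, the for loop would iterate over every whitespace-separated word of the output instead of over whole lines.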

 

CLEAN-UP COMMANDS
 

The following command sequence can be used to clean up the cfgadm stack.


1.    Extract the OS native path (c#t#d#) details from the vxdisk path command output:
 

# OFFLINE=`vxdisk path | grep $LUN | awk '{ print "luxadm -e offline /dev/rdsk/"$1}'`
 

# echo "$OFFLINE"
luxadm -e offline /dev/rdsk/c3t500601693EA07595d1s2
luxadm -e offline /dev/rdsk/c4t500601613EA07595d1s2
luxadm -e offline /dev/rdsk/c4t500601693EA07595d1s2
luxadm -e offline /dev/rdsk/c3t500601613EA07595d1s2
 

2.    Extract the access path details to specify with the cfgadm unconfigure operation later:

# CFGADM=`vxdisk path | sed "s/t/::/g;s/d/ /g;s/s2//g" | egrep "(${LUN})" | awk '{ print $1 }' | tr '[:upper:]' '[:lower:]'`
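This pipeline differs from the ACCESS pattern built in step 4 of the reproduction only in that awk keeps just column 1, yielding the bare Ap_Id that the cfgadm unconfigure operation expects (no ",lun" suffix). A standalone illustration with a hypothetical path name:

```shell
# Hypothetical vxdisk path column-1 value.
sample="c3t500601613EA07595d1s2"

# Keep only field 1 after the sed transformation: the bare c#::wwn Ap_Id.
echo "$sample" | sed "s/t/::/g;s/d/ /g;s/s2//g" \
    | awk '{ print $1 }' | tr '[:upper:]' '[:lower:]'
# -> c3::500601613ea07595
```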
 

3.    Remove the Veritas disk access name with vxdisk rm:
 

# vxdisk rm $LUN


4.    Offline the OS native device paths using luxadm -e offline:


# echo "$OFFLINE" | sh -x
+ luxadm -e offline /dev/rdsk/c3t500601693EA07595d1s2
+ luxadm -e offline /dev/rdsk/c4t500601613EA07595d1s2
+ luxadm -e offline /dev/rdsk/c4t500601693EA07595d1s2
+ luxadm -e offline /dev/rdsk/c3t500601613EA07595d1s2
 

5.    Verify the cfgadm access paths now report the unusable state:
 

# cfgadm -alo show_FCP_dev | egrep "${ACCESS}"
c3::500601613ea07595,1         disk         connected    configured   unusable
c3::500601693ea07595,1         disk         connected    configured   unusable
c4::500601613ea07595,1         disk         connected    configured   unusable
c4::500601693ea07595,1         disk         connected    configured   unusable


6.    Unconfigure the intended unusable access paths:


# cfgadm -o unusable_FCP_dev -c unconfigure $CFGADM
 

7.    Verify the access paths have been removed by the previous cfgadm unconfigure operation:
 

# cfgadm -alo show_FCP_dev | egrep "${ACCESS}"


8.    Clean up the stale OS device handles using devfsadm:
 

# devfsadm -Cvc disk
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601613EA07595d1s7
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/dsk/c3t500601693EA07595d1s7
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601613EA07595d1s7
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/dsk/c4t500601693EA07595d1s7
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601613EA07595d1s7
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/rdsk/c3t500601693EA07595d1s7
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601613EA07595d1s7
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s0
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s1
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s2
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s3
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s4
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s5
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s6
devfsadm[6761]: verbose: removing file: /dev/rdsk/c4t500601693EA07595d1s7


It is essential that /dev/rdsk contains no stale OS device handles, as the Veritas Device Discovery Layer (DDL) builds its DMP structures from these device handles.

 

9.    The 'vxdisk scandisks' command removes the stale DMPNODE:
 

# vxdisk scandisks


10.    Refresh the DDL device tree and the /etc/vx/disk.info file contents:


# vxddladm assign names


11.    Confirm the DMPNODE (in this instance emc_clariion0_5) is no longer visible to VxVM:
 

# vxdisk list
DEVICE          TYPE            DISK         GROUP        STATUS
emc_clariion0_72 auto:cdsdisk    -            -            online
emc_clariion0_73 auto:cdsdisk    -            -            online
emc_clariion0_75 auto:cdsdisk    -            -            online
emc_clariion0_457 auto:cdsdisk    -            -            online
emc_clariion0_458 auto:cdsdisk    -            -            online
emc_clariion0_489 auto:cdsdisk    -            -            online
emc_clariion0_491 auto:cdsdisk    -            -            online
emc_clariion0_492 auto:cdsdisk    -            -            online
emc_clariion0_493 auto:cdsdisk    -            -            online
vmsolaris01_disk_0 auto:ZFS        -            -            ZFS
vmsolaris01_disk_1 auto:ZFS        -            -            ZFS
vmsolaris01_disk_2 auto:ZFS        -            -            ZFS
vmsolaris01_disk_3 auto:ZFS        -            -            ZFS

 

DMPDR TOOL

 

The process can be automated using the DMPDR tool:

# /usr/lib/vxvm/voladm.d/bin/dmpdr -o refresh

 

Related Articles

SOLARIS: Path Addition Reconfiguration Guidelines

SOLARIS SPARC: Dynamic LUN addition and removal made easy with 'dmpdr -o refresh'

SOLARIS: LUN Removal Reconfiguration Guidelines

SOLARIS: Path Removal Reconfiguration Guidelines

SOLARIS: LUN Addition Reconfiguration Guidelines
