Article: 100023745
Last Published: 2015-09-14
Product(s): InfoScale & Storage Foundation
            Problem
How to use mirroring to migrate data from an old storage array to a new array
Solution
The procedure below creates a mirror of a specified data volume on one or more LUNs of the new array. In other words, the existing plex of the volume that uses LUNs in the old array is copied (mirrored) to a new plex on the new array. Creating the mirror plex copies the data in real time to the disks of the new array while leaving the old disks intact. The procedure is presented in the SUMMARY and DETAILS sections below; choose whichever set of instructions is appropriate, and read the BACKGROUND section for additional information. While it is unlikely to be needed, a good verified backup should be made before starting this procedure.
 
 
BACKGROUND
o Using a mirror for data migration is considered BEST PRACTICE because:
  o Data on the old disks is retained for backout purposes
  o Data on the new disks and old disks is updated simultaneously, so the backout copy stays current
  o The volume will remain online with current data (old array plex) in the event of a premature failure of the new array
  o The old disks can be retained and continuously updated until the new storage has proven reliable
  o There is full control over when the cutover to the new media is performed, on a volume-by-volume basis
  o All phases of this procedure can be done online without any interruption to production
o This procedure can be run online, but understand that the performance impact of creating a mirror is roughly equivalent to a dd of the entire volume
o Additional online updates will increase the amount of write operations to the mirror plex as well as the writes to the original plex (increasing overall time to completion)
o The specification <OS_device> used below should be replaced with the OS designation of a drive (e.g., Solaris: c#t#d#s2, or a World Wide Name); <da_name> should be replaced with the Disk Access name (1st column of vxdisk list output); <dm_name> should be replaced with the Disk Media name (3rd column of vxdisk list output). The <da_name> may be the same as or similar to the OS designation, while the <dm_name> is an optional virtual name that can be assigned to the disk when it is added to a diskgroup.
o In the procedure below, NewDisk<nn> (e.g., NewDisk01 NewDisk02 NewDisk03) is the disk media/virtual name given to the new disks. Any virtual name may be used (including the default name). The vxedit -g <diskgroup> rename <old_object> <new_object> command can be used to rename an object if needed, as shown below.
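For instance, to bring a disk that was added under a default media name into the NewDisk<nn> convention, the rename could look like this (the diskgroup name datadg and the object names are hypothetical):

  # vxedit -g datadg rename datadg05 NewDisk01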
SUMMARY
1) Install new disks and bring them into the Volume Manager "world" using vxdctl enable
1a) Prior to running vxdctl enable to scan the LUNs:
  o install the appropriate Array Support Library/Array Policy Module (ASL/APM) if applicable
  o use OS commands to configure the disks if appropriate (e.g., luxadm in Solaris)
  o each LUN should have a valid OS label
(do not proceed if the status is other than "online invalid"; "error" in version 3.5 - last column of vxdisk list output)
2) Initialize the new disks (vxdisksetup -i <da_name>)
  o 4.x and above defaults to a format type of cdsdisk; be sure to specify "format=sliced" at the end of the vxdisksetup command line if existing disks have a type of sliced (2nd column of vxdisk list output)
3) Add initialized LUNs to the diskgroup (vxdg -g <diskgroup> adddisk NewDisk<nn>=<da_name>)
4) Create mirror plex on the new disks (vxassist -g <diskgroup> mirror <volume> NewDisk01 NewDisk02 NewDisk03 ... NewDisk<nn>)
(command use: the specification NewDisk<nn> can be specified one or more times as needed and represents the dm name(s) where the mirror plex will be created)
  o wait for the state of the new plex to go "ENABLED ACTIVE" before proceeding; the mirroring task can be monitored with the command vxtask list
5) Disassociate the original plex; this will stop I/O to the original plex/disks but not to the new plex/disks (vxplex -g <diskgroup> dis <original_plex>)
  o this is a good opportunity to perform a "sanity" check of the data to verify that the mirror process was completed on the intended data
6) Prior to removing the old disks/array:
6A) Remove the original plex (vxedit -g <diskgroup> -r rm <original_plex>)
6B) Remove the original disks from the disk group (vxdg -g <diskgroup> rmdisk <dm_name>)
6C) Uninitialize the original disks (vxdiskunsetup <da_name>)

After the array is removed (using Operating System commands if appropriate), run vxdctl enable to refresh the list of LUNs that Volume Manager knows about.

End of SUMMARY instructions.
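Before the step-by-step DETAILS, here is a consolidated sketch of the SUMMARY commands applied to one volume. Every name in it (diskgroup datadg, volume datavol, old disks datadg01/datadg02 on c1t0d0/c1t1d0, new LUNs c2t1d0/c2t2d0, original plex datavol-01) is hypothetical and must be replaced with values from your own configuration:

  # vxdisksetup -i c2t1d0                     (initialize each new LUN)
  # vxdisksetup -i c2t2d0
  # vxdg -g datadg adddisk NewDisk01=c2t1d0 NewDisk02=c2t2d0
  # vxassist -g datadg mirror datavol NewDisk01 NewDisk02
  # vxtask list                               (monitor from a second session)
  # vxplex -g datadg dis datavol-01           (datavol-01 = plex on the old array)
  # vxedit -g datadg -r rm datavol-01
  # vxdg -g datadg rmdisk datadg01
  # vxdg -g datadg rmdisk datadg02
  # vxdiskunsetup c1t0d0
  # vxdiskunsetup c1t1d0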
 
DETAILS
1) Install new disks and bring them into the Volume Manager "world" (vxdctl enable)
A) Use the appropriate OS/hardware procedures to install the drive(s) if appropriate (e.g., luxadm for Fibre Channel drives in Solaris)
B) Use OS commands to verify that the drives are recognized properly (e.g., echo | format in Solaris)
C) Use OS commands on each device to verify the existence of a valid label (e.g., prtvtoc /dev/rdsk/<OS_device>s2)
(there should not be any slices with tags of 14 and/or 15; those tags mark Volume Manager public and private regions, so their presence indicates the disk is already initialized for VxVM)
D) Install the appropriate Array Support Library/Array Policy Module (ASL/APM) if applicable
E) Run vxdctl enable to rescan all disks (this will not interrupt I/O to any volume)
F) Verify that the new disks are seen in vxdisk list with a status of online invalid or error
(do not proceed if the status is other than "online invalid"; "error" in version 3.5)
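The new LUNs should resemble the second entry in this illustrative vxdisk list output (device names, disk names, and the diskgroup are hypothetical):

  DEVICE       TYPE            DISK         GROUP        STATUS
  c1t0d0s2     auto:sliced     datadg01     datadg       online
  c2t1d0s2     auto:none       -            -            online invalid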
2) Initialize the new disks (vxdisksetup -i <da_name>)
A) Run the command vxdisksetup -i <da_name> on each of the new LUNs
(vxdisk list will now show each LUN with a status of online)
  o 4.x and above defaults to a format type of cdsdisk; be sure to specify "format=sliced" at the end of the vxdisksetup command line if existing disks have a type of sliced (2nd column of vxdisk list output)
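For example, initializing a hypothetical device c2t1d0 to match existing sliced disks:

  # vxdisksetup -i c2t1d0 format=sliced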
3) Add initialized LUNs to the diskgroup (vxdg -g <diskgroup> adddisk NewDisk<nn>=<da_name>)
A) Use the command vxdg -g <diskgroup> adddisk NewDisk<nn>=<da_name> NewDisk<nn>=<da_name> ... NewDisk<nn>=<da_name>
(command use: the specification NewDisk<nn> can be specified one or more times as needed)
(the output of vxdisk list or vxprint -htg <diskgroup> will now show each disk in the diskgroup with a disk media/virtual name of NewDisk<nn>)
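As an illustration, adding two hypothetical LUNs to a diskgroup named datadg:

  # vxdg -g datadg adddisk NewDisk01=c2t1d0 NewDisk02=c2t2d0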
4) Create mirror plex on the new disks (vxassist -g <diskgroup> mirror <volume> NewDisk01 NewDisk02 NewDisk03 ... NewDisk<nn>)
A) Run the command vxassist -g <diskgroup> mirror <volume> NewDisk01 NewDisk02 NewDisk03 ... NewDisk<nn>
(command use: the specification NewDisk<nn> can be specified one or more times as needed and represents the dm name(s) where the mirror plex will be created)
  o wait for the state of the new plex to go "ENABLED ACTIVE" before proceeding; the mirroring task can be monitored with the command vxtask list
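While the attach runs, vxtask list reports percentage complete. The output below is a sketch only; the task ID, sizes, plex name (fwlogvol-02), and diskgroup (datadg) are hypothetical:

  TASKID  PTID TYPE/STATE    PCT   PROGRESS
     211       ATCOPY/R  66.67% 0/281018880/187345920 PLXATT fwlogvol fwlogvol-02 datadg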
5) Disassociate the original plex (vxplex -g <diskgroup> dis <original_plex>)
A) Run the command vxplex -g <diskgroup> dis <original_plex>
(where <original_plex> is the name of the original plex as shown in the vxprint -htg <diskgroup> output; plex records are the lines starting with pl)
(example original plex: pl fwlogvol-01  fwlogvol  ENABLED  ACTIVE  281018880  CONCAT  -  RW)
  o this is a good opportunity to perform a "sanity" check of the data to verify that the mirror process was completed on the intended data
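To tell which plex lives on the old array, look at the sd (subdisk) lines beneath each plex in the vxprint -htg output; they name the disks each plex occupies. In this illustrative excerpt (disk and device names are hypothetical; the volume is the article's fwlogvol example), fwlogvol-01 is the original plex on the old disk datadg01 and fwlogvol-02 is the new mirror on NewDisk01:

  v  fwlogvol     -           ENABLED  ACTIVE   281018880 SELECT   -       fsgen
  pl fwlogvol-01  fwlogvol    ENABLED  ACTIVE   281018880 CONCAT   -       RW
  sd datadg01-01  fwlogvol-01 datadg01  0       281018880 0        c1t0d0  ENA
  pl fwlogvol-02  fwlogvol    ENABLED  ACTIVE   281018880 CONCAT   -       RW
  sd NewDisk01-01 fwlogvol-02 NewDisk01 0       281018880 0        c2t1d0  ENA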
6) Prior to removing the old disks/array:
6A) Remove the original plex
  1) Run the command vxedit -g <diskgroup> -r rm <original_plex>
6B) Remove the original disks from the disk group
  1) Run the command vxdg -g <diskgroup> rmdisk <dm_name> on each of the original drives
6C) Uninitialize the original disks
  1) Run the command vxdiskunsetup -C <da_name> on each of the original drives

After the array is removed (using Operating System commands if appropriate), run vxdctl enable to refresh the list of LUNs that Volume Manager knows about.

End of DETAILS instructions.