After migration, the source volume still reports as running even after committing the mobility job

Article Number: 510317                              Article Version:                               Article Type: Break Fix


Product:

VPLEX Series


VPLEX VS2, VPLEX Series

 

Issue:

After migration, the source volume still reports as running even after committing the mobility job

 

Please note that a migration has 4 steps to complete, as described below:

 

• The start operation first creates a RAID-1 device on top of the source device. It specifies the source device as one of its legs and the destination device as the other leg. It then copies the source device's data to the destination device or extent. This operation can be canceled as long as it is not committed.

• The commit operation removes the pointer to the source leg. At this point, the destination device is the only device accessible through the virtual volume. (This is the stage the system described in this article has reached.)

• The clean operation breaks the source device down all the way to the storage volume level. The storage volume is left unclaimed after this operation. However, the data on the source device is not deleted.

• The remove operation removes the record of that migration from the list.

 

A migration typically includes 4 stages: 1> Start, 2> Commit, 3> Clean, 4> Remove. This KB describes what is required once the migration has reached the commit stage.
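
For reference, the commit itself is issued with the dm migration commit command. The example below is a minimal sketch assuming a migration job named migrate_xxx (the same placeholder name used in the Resolution section):

   Ex: VPlexcli:/data-migrations/device-migrations> dm migration commit --force --migrations migrate_xxx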

 

In this case the migration has completed successfully: the rebuild has finished, and the virtual volume (VV) has moved off the source leg, is now mapped to the new target device, and has become part of the respective storage view.

 

At this stage, in order to clear the source devices, finish the migration, and clear the migration entries listed under VPlexcli:/> ll /data-migrations/device-migrations/, the clean and remove operations must be performed.

 

 

Cause:

After the migration, the customer did not issue the dm migration clean command to break the source devices down to the unclaimed storage-volume level, nor the dm migration remove command to remove the migration record.

The clean operation breaks the source device down all the way to the storage volume level.

The storage volume is left unclaimed after this operation.

The Remove operation removes the record of cancelled or committed data migrations.

 

Resolution:

  • From the VPlexcli, make sure that all migration processes have completed by listing the migration jobs:

  VPlexcli:/> ll /data-migrations/device-migrations/
D__LUN119_1_80_04B9_1 device_Map_2939_LUN119_1 cluster-1 device_Symm0480_04B9_1 cluster-1 full committed 128K 100
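
If more detail on a single job is needed, listing that migration's context directly should show its full attribute set (status, percentage done, source and target); the job name below is taken from the example output above:

  VPlexcli:/> ll /data-migrations/device-migrations/D__LUN119_1_80_04B9_1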

  • Perform a show-use-hierarchy on the source and target legs.

  VPlexcli:/> show-use-hierarchy --targets /clusters/cluster-1/devices/device_Map_2939_LUN119_1
local-device: device_Map_2939_LUN119_1 (31.8G, raid-0, cluster-1)
extent: extent_Map_2939_LUN119_1 (31.8G)
storage-volume: Map_2939_LUN119 (31.8G)
logical-unit: VPD83T3:600601xxxxxxxe00783a45d92403e211
storage-array: EMC-CLARiiON-xxxxxxxx


VPlexcli:/> show-use-hierarchy --targets /clusters/cluster-1/devices/device_Symm0480_04B9_1
storage-view: GOXSD687_SV (cluster-1)
virtual-volume: device_VNX2939_LUN119_vol (31.8G, local @ cluster-1, running, expandable by 1.88M)
local-device: device_Symm0480_04B9_1 (31.8G, raid-0, cluster-1)
extent: extent_Symm0480_04B9_1 (31.8G)
storage-volume: Symm0480_04B9 (31.8G)
logical-unit: VPD83T3:600009xxxxxxx7200480533030344239
storage-array: EMC-SYMMETRIX-xxxxxxx

  • Clean the migration job to tear down the source device to an unclaimed storage volume state.

   Ex: VPlexcli:/data-migrations/device-migrations> dm migration clean --force --migrations migrate_xxx
Note: migrate_xxx is the migration job name (for example, D__LUN119_1_80_04B9_1 in the listing above).
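
Optionally, once the clean has completed, the source storage volume can be checked from the storage-volumes context; it should now report as unclaimed. The path and volume name below are taken from the show-use-hierarchy example above and may differ in your environment:

  VPlexcli:/> ll /clusters/cluster-1/storage-elements/storage-volumes/Map_2939_LUN119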

  • Remove the migration job from the migration context:

   Ex: VPlexcli:/data-migrations/device-migrations> remove -m migrate_xxx -f
  Removed 1 data migration(s) out of 1 requested migration(s).
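
To confirm the record is gone, list the device-migrations context again (the same command used in the first step); the removed job should no longer appear:

  VPlexcli:/> ll /data-migrations/device-migrations/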

 

Notes:

From the logs: how to identify the migration job details:

1> Migration start in the firmware logs:

2017-09-22 11:17:29,595 dm migration start --force --from /clusters/cluster-1/devices/device_Map_2939_LUN119_1 --to /clusters/cluster-1/devices/device_Symm0480_04B9_1 --name "D__LUN119_1_80_04B9_1" --transfer-size 128KB

2017-09-22 11:17:46,284 dm migration start --force --from /clusters/cluster-1/devices/device_Map_2939_LUN141_1 --to /clusters/cluster-1/devices/device_Symm0480_04CE_1 --name "D__LUN141_1_80_04CE_1" --transfer-size 128KB


2> Ongoing migration:

128.221.252.35/cpu0/log:5988:W/"00601661756316450-2":1222705:<6>2017/09/22 11:17:37.83: amf/171 inserted amf "MIGRATE_D__LUN119_1_80_04B9_1" type raid1 above amf "device_Map_2939_LUN119_1"
128.221.253.40/cpu0/log:5988:W/"006016618406161830-2":2611548:<6>2017/09/22 11:17:37.83: amf/171 inserted amf "MIGRATE_D__LUN119_1_80_04B9_1" type raid1 above amf "device_Map_2939_LUN119_1"
128.221.252.37/cpu0/log:5988:W/"0060166175bf16446-2":1714056:<6>2017/09/22 11:17:37.83: amf/171 inserted amf "MIGRATE_D__LUN119_1_80_04B9_1" type raid1 above amf "device_Map_2939_LUN119_1"


3> Rebuild start and end:

128.221.253.41/cpu0/log:5988:W/"00601661840e16446-2":2430123:<5>2017/09/22 11:17:46.23: amf/20 raid 1 rebuild: MIGRATE_D__LUN119_1_80_04B9_1: started rebuilding child node(s) (full rebuild)
128.221.253.41/cpu0/log:5988:W/"00601661840e16446-2":2430125:<5>2017/09/22 11:17:46.24: amf/21 raid 1 rebuild: MIGRATE_D__LUN119_1_80_04B9_1: child node 1 (device_Symm0480_04B9_1) rebuild started (full rebuild, rebuild line 0 blocks)
128.221.252.41/cpu0/log:5988:W/"00601661840e16446-2":2430198:<5>2017/09/22 11:42:22.67: amf/22 raid 1 rebuild: MIGRATE_D__LUN119_1_80_04B9_1: child node 1 (device_Symm0480_04B9_1) rebuild successfully completed (full rebuild)
128.221.252.41/cpu0/log:5988:W/"00601661840e16446-2":2430199:<5>2017/09/22 11:42:22.67: amf/24 raid 1 rebuild: MIGRATE_D__LUN119_1_80_04B9_1: finished rebuilding child node(s) (full rebuild)


Note: The virtual-volume name does not change on its own in a device-level migration. For administrative purposes, the virtual volume can be renamed after the migration using the commands below:

1> Traverse to the virtual-volumes context:
VPlexcli:/> cd clusters/cluster-1/virtual-volumes/
2> Change into the virtual volume to be renamed:
VPlexcli:/clusters/cluster-1/virtual-volumes> cd virtualvolumetest
3> Set the new name:
VPlexcli:/clusters/cluster-1/virtual-volumes/virtualvolumetest> set name virtualvolumetest1
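
After the rename, listing the virtual-volumes context again should show the volume under its new name (the names here follow the hypothetical example above):

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/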
