The process you're describing would be appropriate for a hot differential push, or for other migration methods like SRDF or host-based file copy (e.g. rsync). But Open Replicator hot pull works differently.
With hot pull, you shut down your production apps, then create and activate the Open Replicator sessions. Immediately upon activating the sessions, the control devices (in this case, your target VMAX devices) are in a read/write state. You can bring your apps back online using the VMAX devices immediately after session activation -- before the data has been copied. Open Replicator uses copy-on-first-access here: if the host requests a block that hasn't yet been copied from the VNX to the VMAX, Open Replicator reaches back to the VNX, retrieves and copies the block, and returns it to the host. All of this happens transparently to the host and its applications.
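The copy-on-first-access behavior can be sketched roughly like this (illustrative Python, not EMC's implementation -- the per-block bitmap and block-level granularity are assumptions for the sake of the sketch):

```python
# Illustrative sketch of copy-on-first-access during a hot pull.
# 'source' plays the remote (VNX) device; 'target' the control (VMAX) device.

class HotPullDevice:
    def __init__(self, source_blocks, num_blocks):
        self.source = source_blocks          # remote device contents
        self.target = [None] * num_blocks    # control device, empty at activation
        self.copied = [False] * num_blocks   # tracks which blocks have been pulled

    def read(self, block):
        # Host read of a not-yet-copied block: pull it from the source first,
        # write it to the target, then serve the read -- transparently to the host.
        if not self.copied[block]:
            self.target[block] = self.source[block]
            self.copied[block] = True
        return self.target[block]

    def background_copy(self):
        # Meanwhile, the session's background copy sweeps the remaining blocks.
        for block in range(len(self.copied)):
            if not self.copied[block]:
                self.target[block] = self.source[block]
                self.copied[block] = True
```

The point of the sketch: the host can read any block immediately after activation, and the background sweep eventually brings every block local, after which the session reaches the Copied state.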
The end result is that your migration cutover outage window ends up being shorter than with the traditional push or SRDF migration methods -- because you don't have to wait for a differential copy to complete before cutting over your hosts.
I'd highly suggest reading through the Solutions Enabler Migration CLI guide -- it covers hot pull technology and provides some procedural examples of how a hot pull migration works. I've just provided a high level overview here.
Hope that helps,
BTW, one other point -- the process you described would be applicable if you performed the migration with an Incremental SAN Copy push, in which case you'd perform a full SAN Copy with apps online on the VNX, then run periodic incremental SAN Copy sessions to keep things in sync before the outage window. During the outage window, you'd shut down the apps, perform a final incremental SAN Copy, and bring the apps back up on the VMAX.
We just migrated most of our environment with OR hot pull (20K to 40K). It just requires a quick downtime to replace the source LUNs with the target LUNs and kick off the OR sessions, then bring the host up on the target LUNs while the data copies in the background.
You must zone the VNX and VMAX FAs together, then mask the VNX storage group you want to migrate to the VMAX FAs (for the pull, the VMAX reads from the VNX like a host would). On the VMAX, create devices of the same size or bigger and define a txt file matching source to target devices. We chose to use donor update, which writes changes back to the source. A ceiling of 25 is a good number to start with. Watch your masked FAs on the VNX and don't let them go past 75% utilization.
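For reference, the pairing file maps one control/remote pair per line: the control (VMAX) device first by Symm ID and device number, the remote (VNX) LUN second by WWN. It looks something like the fragment below -- the full Symm ID, device numbers, and WWNs here are made-up placeholders, so check the exact syntax in the Migration CLI guide for your Solutions Enabler version:

```text
symdev=000195701757:0A10 wwn=60060160ABC12E0011223344556677AA
symdev=000195701757:0A11 wwn=60060160ABC12E0011223344556677AB
```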
**** Create Open Replicator Sessions (choose one to set ceiling) ****
symrcopy set ceiling 10 -dir all -sid 1757 -noprompt
symrcopy set ceiling 25 -dir all -sid 1757 -noprompt
symrcopy set ceiling 50 -dir all -sid 1757 -noprompt
**** drop host and remove DEV/RDM ****
symrcopy create -copy -name cguschp3012vm -pull -hot -frontend_zero -donor_update -file cguschp3012vm.txt -nop
symrcopy -file cguschp3012vm.txt activate -nop
symrcopy -file cguschp3012vm.txt query -i 15
**** verify all LUNs are in a Copied state ****
symrcopy verify -file cguschp3012vm.txt
symrcopy set donor_update off -session_name cguschp3012vm -consistent -nop
symrcopy -file cguschp3012vm.txt terminate -nop