Migrating TimeFinder/VP Snap PiT copies to another VMAX

Recently I had a partner/customer who was migrating a lot of SAP data from one VMAX to another and ran into an issue they weren’t sure how to solve, or at least weren’t sure of the best way to solve it. They had a ton of data on the VMAX and more than a few TimeFinder/VP Snap point-in-time copies of each SAP volume, which they used for testing, recovery, or backup.

For those of you unfamiliar with VP Snap, it is a rather new method of local replication on the VMAX (introduced with Enginuity 5876) that combines the space-efficiency benefits of TimeFinder/Snap with the configuration flexibility of TimeFinder/Clone.

Anyway, this person wanted to migrate all of the VP Snap copies from one VMAX to the other so those point-in-time copies could continue to be used as needed. There are a couple of different ways to do this, but most of them either required software they didn’t have or caused the target copy to become “thick” once moved to the new array. In other words, a source VP Snap copy that stored only the differences between the production data and its point-in-time, perhaps just hundreds of MB or a few GB, could become TBs on the target array. Given the vast number of VP Snap copies, they didn’t have adequate storage for these full copies. They needed the copies to remain space efficient on the target.

After thinking about this and discussing it with a colleague in engineering we came up with the following solution that met their needs. It seems somewhat obvious after looking at it, but I thought I’d share it in case it might save someone a bit of time:

  1. Start with a production device that has the activated VP Snap sessions.
  2. Create/establish an RDF session from that device to a device on the new VMAX
  3. Create, but don’t activate, the VP Snap sessions off of the R2 device: one more than the number of sessions on the R1.
  4. Wait for the RDF session to become “Synchronized” and then activate VP Snap session 1 on the R2 as a gold copy. This preserves the latest version of the production volume, which will be restored to after the migration process. Let’s call this the Gold Copy VP Snap.
  5. Restore from VP Snap 1 to the R1 (requires -force). This makes the R1 the point-in-time that the first VP Snap copy stores, which in turn propagates that same point-in-time to the R2 device.
  6. Wait for the RDF session to become “Synchronized” and then activate VP Snap 2 on the R2. VP Snap 2 on the new array is now the same point-in-time as the VP Snap that was just restored from, and most importantly, it is still space-efficient.
  7. Terminate the restore session of VP Snap 1 on the R1 (symclone terminate -restored).
    1. OPTIONAL: Terminate the entire VP Snap session on the R1 by then running symclone terminate. If you no longer want the VP Snap copy on the R1 you can terminate it completely, but I would recommend waiting until the entire process is complete and the R2 data and copies are verified.
  8. Repeat steps 5-7 for each VP Snap copy on the R1 side until each point-in-time exists on the R2 side.
  9. Perform RDF failover and then restore the R2 device from the Gold Copy VP Snap.
  10. Run deletepair for the RDF devices, delete the R1 devices, and so on.
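Sketched in SYMCLI terms, the flow for one VP Snap copy might look something like the following. This is a rough sketch, not a tested script: the device group names, device names, SID, and RDF group number are all placeholders, and exact flags can vary by Solutions Enabler version, so check the symclone/symrdf documentation for your release.

```shell
# Placeholders: r1_dg holds the R1 and its existing VP Snaps (SNAP001...),
# r2_dg holds the R2 and its new VP Snap targets (VSE001, VSE002...).

# Step 2: create and establish the RDF pair (pairs.txt lists R1/R2 devices)
symrdf createpair -sid 1234 -rdfg 10 -f pairs.txt -type RDF1 -establish

# Step 3: create, but do not activate, the VP Snap sessions off the R2
symclone -g r2_dg create -vse DEV001 sym ld VSE001
symclone -g r2_dg create -vse DEV001 sym ld VSE002

# Step 4: once the RDF session is Synchronized, activate the gold copy
symclone -g r2_dg activate DEV001 sym ld VSE001

# Steps 5-7, repeated for each VP Snap copy on the R1 side:
symclone -g r1_dg restore -force DEV001 sym ld SNAP001    # push PiT to R1
#   ...wait for RDF to return to Synchronized...
symclone -g r2_dg activate DEV001 sym ld VSE002           # capture PiT on R2
symclone -g r1_dg terminate -restored DEV001 sym ld SNAP001

# Steps 9-10: fail over, restore the Gold Copy VP Snap, then clean up
symrdf -g r2_dg failover
symclone -g r2_dg restore DEV001 sym ld VSE001
symrdf -g r2_dg deletepair -force
```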

This process worked rather well for them, but there are a couple of things to note:

  1. If you are using SRDF/A you will need to enable and arm device write pacing on the R1 device.
  2. THIS IS AN OFFLINE PROCESS. Since the R1 will be changed and restored multiple times, it cannot have a host accessing it for the duration. Conceivably this could be changed to be online in some way, but it would be more difficult to do; it would require a second copy of the data at the R1 site. So depending on your migration needs, this process might need to be altered.
  3. The amount of data stored by the new VP Snap copies on the new array might be more than on the old array. Because of the multiple restores, some copies might need to store more data than their respective copies on the R1 side. You might want to look at the change rate of each copy and choose the order in which they are restored and migrated to get the best results.
  4. This process should work with standard Snap too, but I haven’t personally tested it. It might also be a good way to migrate from Snap to VP Snap (if desired) when moving to a new array.




7 Replies to “Migrating TimeFinder/VP Snap PiT copies to another VMAX”

  1. This is awesome stuff, post more stuff like that Cody 🙂

    Question for you: I’m going to migrate a customer from a DMX4 to a VMAX, and on the VMAX they want to implement 14 “snaps” that will be hanging off an R2 (SRDF/S). I understand that I don’t need a dedicated save pool for TF/VP Snap and there is no CoFW. What else can I present to this customer, who is very conservative when it comes to new technology?

  2. Thanks!! I will try to do more of the same!

    There are a couple of advantages of VP Snap over Snap. The biggest is of course the one you mention (no save pool needed), but you also don’t need a dedicated Snap license (a Clone license will do). Cache usage is more efficient: instead of tracking changes per 64 KB, we track them per thin extent (768 KB), so much less cache is used for tracking changes. VP Snap can also be more space efficient, and the savings increase as you make more copies of a given device. VP Snaps can share tracks, so if a track changes on the source we will not copy it five times for five copies; we copy it once and each VP Snap shares it. Furthermore, if they already use Clone, they know how to use VP Snap. Unlike Snap, which requires separate knowledge of symsnap, VP Snap uses identical commands to standard Clone; the only difference is that the very first create command adds -vse, and everything after that is no different.
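    To make the "only difference is -vse" point concrete, here is a short sketch; the device group and device names are hypothetical placeholders, not taken from the post:

    ```shell
    # Standard TimeFinder/Clone session
    symclone -g prod_dg create DEV001 sym ld TGT001

    # TimeFinder/VP Snap session: the same create, plus -vse
    symclone -g prod_dg create -vse DEV001 sym ld TGT001

    # Everything after the create (activate, restore, terminate)
    # uses the same symclone commands in both cases
    symclone -g prod_dg activate DEV001 sym ld TGT001
    ```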

  3. Cody,

    Can you please clarify TimeFinder/Snap space consumption with multiple snaps? For example, if I have 5 snapshots that point to one source and a specific block on the source changes, will all 5 snapshots point to their own copy in the save pool, or will they all point to one shared copy (like TimeFinder/VP Snap)?


  4. A few questions:
    1) What happens to the original R1 data after terminating the restore session? I am assuming the original prod data will remain and the restored data will not be accessible from the R1.
    2) So there is a need for production downtime during this process when we start the restore session of the VP Snap onto the R1. Am I correct?
    3) Shouldn’t there be one last SRDF sync (from R1 to R2) after terminating the restored VP Snap?

  5. Cody no longer works for EMC so here are your answers:
    1. The R1 data is still there from the last restore. It is not, however, the production data; it is the data of the final VP Snap restore.
    2. As Cody writes in #2 of the notes “THIS IS AN OFFLINE PROCESS.” An R1 cannot be accessed during a restore.
    3. There will always be a “last” sync after the final VP Snap restore, but there is nothing special or different about it compared to any of the other syncs. After it completes and the VP Snap is taken on the R2, a failover is run so that the R2 becomes the production environment; at that point a restore is run from the first gold copy on the R2, returning the environment to the production data.
