Creating ScaleIO Snapshots in a VMware environment

Let’s talk about snapshots and ScaleIO.

First, how does snapshotting work with ScaleIO? ScaleIO offers the ability to snapshot a single volume at a time or to snapshot multiple volumes at once. Importantly, when ScaleIO snapshots multiple volumes at once, those copies are consistent with each other as of the time of creation. A consistency group is created whenever a snapshot command includes multiple volumes. This is helpful for situations where multiple VMs on multiple volumes need consistent copies but the source applications cannot be quiesced at the time. ScaleIO does not, however, prevent you from deleting a single snapshot in a consistency group; you may manage those volumes after the fact however you wish.

snap_example

ScaleIO also allows you to take multiple copies of a given volume, providing multiple point-in-time copies. Furthermore, a user may take snapshots of snapshots. It is important to note that, unlike standard volumes, snapshots are thin-provisioned. This means the full copy of the data is not copied over upon creation; the storage pool associated with the snapshot only stores the deltas, just like most software-based or array-based snapshot technology. On that note, snapshots always live in the same pool as the source volume, so make sure you have adequate capacity.
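Since a snapshot is treated as a volume, a snapshot of a snapshot is taken with the same command as a first-generation copy. A minimal sketch (the volume and snapshot names here are hypothetical):

# First-generation snapshot of a source volume
scli --snapshot_volume --volume_name vol1 --snapshot_name vol1-snap
# Second-generation snapshot, taken from the snapshot itself
scli --snapshot_volume --volume_name vol1-snap --snapshot_name vol1-snap2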

After creation, a snapshot is handled just like a normal volume: it must be mapped to an SDC and then to a SCSI initiator. The only difference is how snapshot removal is handled; there are a couple of different ways to do this. Let's take the following image:

clone_tree

The remove volume command has three options:

  1. remove_entire_snapshot_tree: This will remove the entire V(olume)Tree to which a volume belongs. A VTree is the family of associated snapshots descended from a given source.
  2. remove_with_descendant_snapshots: This will remove the segment of the tree rooted at the volume specified (which may be a snapshot), including the root.
  3. remove_descendant_snapshots_only: This will remove the segment of the tree rooted at the volume specified (which may be a snapshot), but without the root.

So if we use the above image as our guide and we remove Snapshot 1, the following would occur with the different switches:

  1. remove_entire_snapshot_tree: All snapshots of Volume 1 will be removed (Snapshots 1, 2, and 11)
  2. remove_with_descendant_snapshots: Snapshots 1 and 11 will be removed
  3. remove_descendant_snapshots_only: Only Snapshot 11 will be removed
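For reference, here is roughly how those options map onto the remove volume command. This is a sketch based on the option names above, using the snapshot name from the example image; verify the exact flag spellings against your version of SCLI:

# Remove the whole VTree that Snapshot 1 belongs to
scli --remove_volume --volume_name Snapshot1 --remove_entire_snapshot_tree
# Remove Snapshot 1 and everything below it (Snapshot 11)
scli --remove_volume --volume_name Snapshot1 --remove_with_descendant_snapshots
# Remove only Snapshot 1's descendants (Snapshot 11), keeping Snapshot 1
scli --remove_volume --volume_name Snapshot1 --remove_descendant_snapshots_only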

Time to run through the process.

First, I need to identify the ScaleIO volumes that I want to snapshot. In this case I have two ScaleIO VMFS volumes that I want a consistent snapshot of, so I need to find their ScaleIO names first. I can use the handy PowerShell script I wrote about in a recent blog post (https://www.codyhosterman.com/2014/01/07/using-powercli-to-correlate-vmware-vmfs-and-scaleio-volume-info/) to do this.

vmfs_sources

Then use the script to find their names:

[Screenshot: script output showing the ScaleIO volume names]

So the two datastores reside on ScaleIO volumes named scio-vol1 and scio-vol2, respectively. These are the names I will use for the snapshot operation.

The snapshot command is pretty straightforward:

scli --snapshot_volume (--volume_id <ID> | --volume_name <NAME>) [--snapshot_name <NAME>]

You just need the name(s) of the original volume(s) and new names for the snapshots. For multiple volumes, supply the source names and the snapshot names as comma-separated lists.

SSH into a ScaleIO host with SCLI available; I will use my primary MDM and snapshot the two volumes.
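The command looks something like the following (the snapshot names are ones I chose; any names will do):

# Snapshot both volumes in one command so they land in a consistency group
scli --snapshot_volume --volume_name scio-vol1,scio-vol2 --snapshot_name scio-vol1-snap,scio-vol2-snap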

create_snaps

This creates the volumes, and since I snapshotted two of them, a consistency group is created and assigned an ID. I can use that ID to manage all of the volumes in the consistency group at once if I so choose.
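For example, all of the snapshots in the group can be removed in one shot; a sketch, assuming the --remove_consistency_group_snapshots command available in my SCLI version (the group ID below is a placeholder for the ID returned at creation):

# Remove every snapshot belonging to the consistency group
scli --remove_consistency_group_snapshots --consistency_group_id <GROUP_ID>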

If we look at the GUI, the snapshots show up immediately.

gui_snaps

Now I just need to map them to the SDCs and then to the ESXi initiators.
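The mapping is the same operation as for any ScaleIO volume; a sketch using the hypothetical snapshot names from above and a placeholder SDC IP:

# Map each snapshot to an SDC so its ESXi host can see it
scli --map_volume_to_sdc --volume_name scio-vol1-snap --sdc_ip <SDC_IP>
scli --map_volume_to_sdc --volume_name scio-vol2-snap --sdc_ip <SDC_IP>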

map_snaps

Now rescan the ESXi hosts and run the Add Storage wizard. You will see the two ScaleIO volumes appear, and they will be listed as hosting VMFS volumes with the same names as the source volumes.
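If you would rather script the rescan than click through the vSphere Client, the standard esxcli rescan works here too:

# Rescan all storage adapters on this ESXi host
esxcli storage core adapter rescan --all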

add_storage

Since the original volumes are still present, we need to assign these VMFS volumes a new signature (which is a good idea regardless) so they can be mounted. Resignature them and you will see them appear as mounted datastores with a snap- prefix applied to the original VMFS names. And we're done!

resignature

resignaturetasks

mounted_snaps
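As an aside, the resignature step can also be driven from the ESXi shell with esxcli instead of the wizard; a sketch (the volume label below is hypothetical and should be whatever the source VMFS was named):

# List unresolved VMFS copies the host can see
esxcli storage vmfs snapshot list
# Resignature a copy by its original VMFS label
esxcli storage vmfs snapshot resignature --volume-label=scio-ds1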

Once again, pretty easy. Since the snapshots share unchanged blocks with their source volumes, you will not notice any increased storage consumption behind the scenes until you start writing to them.
