Reclaiming in-guest capacity with VMware and Pure Storage

UPDATE: In-guest UNMAP is now supported inside a VM, so sDelete and similar tools are no longer required. Please refer to these posts:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

Direct Guest OS UNMAP in vSphere 6.0

Reclaiming “dirty” or “dead” space is a topic that comes across my desk quite often these days. Since the FlashArray is a data reduction array, it is especially important that space is not wasted on it–wasted space throws off the economics of the platform. Therefore UNMAP is an important VAAI feature to leverage in any AFA environment; supporting UNMAP is definitely table stakes for AFAs.

Note–I am going to use the terms “dead”, “dirty” and “stranded” interchangeably to describe space that needs to be reclaimed. So anyways…

Unfortunately UNMAP in its current form does not satisfy all of the reclamation use cases. UNMAP will only reclaim space on the array when capacity is cleared from the VMFS volume–that is, when a VM (or virtual disk) is deleted or migrated elsewhere. It cannot reclaim space when data is “deleted” inside a virtual machine by the guest OS on a virtual disk, because VMware does not know that capacity has been cleared and therefore neither can the array. Until the virtual disk itself is deleted or moved, that capacity cannot be reclaimed with UNMAP. To be clear: UNMAP with vmkfstools (in ESXi 5.0/5.1) or esxcli (in ESXi 5.5) does not allow you to reclaim space that remains stranded inside of virtual disks.
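
For context, the VMFS-level reclaim referenced above is invoked along these lines–the datastore name is just a placeholder, and the vmkfstools form must be run from within the datastore directory:

 cd /vmfs/volumes/MyDatastore
 vmkfstools -y 60

 esxcli storage vmfs unmap -l MyDatastore

The vmkfstools method (ESXi 5.0/5.1) reclaims up to the given percentage of free space by temporarily creating a balloon file; the esxcli method (ESXi 5.5) iterates through the free space in reclaim units. Either way, only space that is free at the VMFS level is returned.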

So what are the options? I see three (a fourth in the future):

  1. Use Raw Device Mappings instead of virtual disks. The SCSI layer is not virtualized by ESXi when using RDMs, so guest OSes that natively support UNMAP (Windows 2012 for example) can directly reclaim space from an RDM (a quick example follows after this list). Obviously this is not a great option–you lose a ton of the functionality VMware offers with virtual disks. So let’s move on.
  2. Using a zeroing tool subsequent to deleting files inside a virtual machine on a virtual disk. This will leverage the data reduction technology on an AFA and essentially reclaim the data. More on this in a bit. Certainly a better option than RDMs but there are still a few disadvantages.
  3. SE Sparse. SE Sparse virtual disks allow you to leverage VMware Tools and UNMAP to reclaim space from inside the virtual disk. In short, it scans the filesystem, re-arranges the data so that the “dirty” space is at the “tail” of the virtual disk, shrinks the virtual disk on the VMFS volume so it no longer includes that space, and then runs UNMAP to reclaim it from the array. Sounds great! Unfortunately there are a few caveats–it is only officially supported with VMware View. While it is possible to use it with non-View VMs via command-line utilities, VMware does not officially support that without View being present and controlling the VMs. Also, I believe there are limits to which guest OS types can leverage this virtual disk format.
  4. In the future…VVols! With VVols you will get the best of both worlds of RDMs and virtual disks and all of the above nonsense will no longer apply. Guest OSes that support UNMAP will be able to reclaim space directly in VVols and furthermore since VMFS itself will be gone there is no need to reclaim space from deleted or migrated VMs as esxcli/vmkfstools does today since VVol deletion/migration operations will take care of that automatically.
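
To make option 1 a bit more concrete: on a Windows 2012 guest with an RDM you can check that delete notifications (the in-guest UNMAP trigger) are enabled and kick off a manual retrim. A minimal sketch, with the drive letter being a placeholder:

 fsutil behavior query DisableDeleteNotify
 Optimize-Volume -DriveLetter E -ReTrim -Verbose

If DisableDeleteNotify returns 0, NTFS sends UNMAP as files are deleted; the Optimize-Volume retrim just forces a pass over space that has already been freed.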

Since we live in today and not the future let’s take a look at option 2 and also 3 for the fun of it. Next year we can look at VVols.

Using Built-in Data Reduction to Reclaim In-Guest Space

[Image: for those Big Lebowski and VMware fans out there]

The simplest (and, importantly, supported) mechanism is to take advantage of the data reduction technology to “fool” the FlashArray into returning the space. The easiest way to do this is to overwrite the dead space with zeroes, since the FlashArray will never write those zeroes to flash. Once you delete your files, run a utility that zeroes out the free space on the given virtual disk–like sDelete for Windows or dd in Linux.
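
As a rough sketch, on Windows that means running sDelete against the drive letter of the virtual disk, and on Linux filling the mounted filesystem with a zero file and then deleting it. The drive letter and mount point below are just placeholders:

 sdelete.exe -z E:

 dd if=/dev/zero of=/mnt/vmdisk/zerofile bs=1M; sync; rm /mnt/vmdisk/zerofile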

For this example I used Windows 2012 R2 and sDelete. First I needed to put some actual data on the virtual disk, so I went the route of using VDBench–it is one of the quickest ways to write a bunch of data while controlling how much is written and how reducible it is. I went with a compression ratio of 3:1 and a deduplication ratio of 2:1, which ends up achieving an overall reduction of a bit more than 6:1. I wrote five files of variable sizes, which amounted to about 6.1 GB of logical data and about 0.95 GB actually written on the array.
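
I won’t reproduce the entire VDBench parameter file here, but the data reduction knobs are set with general parameters roughly like the following (the rest of the workload definition–file anchors, transfer sizes and so on–is omitted):

 compratio=3
 dedupratio=2
 dedupunit=4k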

[Screenshots: the test files created in Windows and the FlashArray GUI written capacity before zeroing]

The next step is straightforward: delete the files and run sDelete (with the -z flag) to zero out the freed contents. This can take some time (many hours) depending on how large the volume is. As the previously used space is zeroed out, the FlashArray will see the all-zero pattern coming in, but instead of storing the zeroes it will just create metadata references noting that those segments should be all zero and discard the writes. This in effect releases the physical capacity on the array, and the capacity usage drops back down to the level prior to the file creation.

[Screenshot: FlashArray GUI showing the volume’s written capacity after zeroing]

You can see from the FlashArray GUI that the volume has returned to its original written capacity after the zeroes.

If you want to “remove” the zeroes from the virtual disk so the logical capacity is returned to the VMFS, you can do so with the vmkfstools punch-zero option. Frankly, I don’t see much benefit to this, since the zeroes are never actually written on the array, and it is an offline operation to punch them out.
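
For reference, the punch-zero operation is pointed at the virtual disk’s descriptor VMDK while the VM is powered off, roughly like this (the path is a placeholder):

 vmkfstools -K /vmfs/volumes/MyDatastore/MyVM/MyVM.vmdk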

Reclaiming In-Guest Space with SE Sparse Virtual Disks

***I want to reiterate–this method is NOT officially supported by VMware. This is for demonstration purposes only and is not to be used in production environments.***

I won’t spend any more time explaining what SE Sparse disks are (@cormacjhogan did a fine job of that here), so let’s look at how to deploy them. Like I mentioned before, the creation of SE Sparse virtual disks is not supported in the vSphere Web Client (well, more than not supported–the option just isn’t there). Therefore you need to use the CLI–the trusty vmkfstools. Thanks to @lamw for the information on that here. In short, SSH into an ESXi host that has access to the target VMFS where you would like to store the new virtual disk. Either cd into the proper VMFS volume/folder where it should reside or just include the full path in the command.

 vmkfstools -c 250g -d sesparse SnapVM1.vmdk

The -c option sets the size of the virtual disk; also give the virtual disk a name that makes sense. Note that this process does not associate the virtual disk with a virtual machine–you need to do that yourself. This can be achieved via the CLI, but it is just as simple to use the Web Client: choose to add an existing virtual disk and then navigate to the VMDK you just created.

[Screenshot: adding the existing virtual disk to a VM in the Web Client]
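
If you would rather script the attach step too, PowerCLI can do it. A minimal sketch–the VM name is from my lab, and the datastore and folder names are just placeholders:

 New-HardDisk -VM (Get-VM -Name SnapVM) -DiskPath "[MyDatastore] SnapVM/SnapVM1.vmdk"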

If you look at the settings of the virtual disk now, you will see it listed as a Flex SE type virtual disk, which is just another name for SE Sparse.

[Screenshot: the virtual disk shown with a Flex SE disk type in the Web Client]

In this case I added the virtual disk to a Windows 2012 R2 virtual machine and formatted the new volume as NTFS. Next I needed to put some actual data on it, so I once again went the route of using VDBench–again with a compression ratio of 3:1 and a deduplication ratio of 2:1, which ends up achieving an overall reduction of about 6:1. I wrote five files of variable sizes, which amounted to about 4.2 GB of logical data and 0.7 GB actually written on the array. See the below image of the files created in Windows and then the next image of the written capacity over time on the target volume in the FlashArray GUI:

[Screenshot: the files created in Windows]

[Screenshot: FlashArray GUI showing the written capacity increase on the target volume]

Now delete the files. Make sure you either do a permanent delete (Shift+Delete) or also empty the Recycle Bin, otherwise the files will not be completely gone.

[Screenshot: permanently deleting the files in Windows]

Now to reclaim this space. The process has two steps–wipe and shrink. Across these two steps, VMware Tools scans the filesystem and marks the dead space, then the VMDK is shrunk so it no longer contains that dead space and SCSI UNMAP is issued to reclaim it on the array.

These steps are also not included in the Web Client or CLI (as far as I am aware). You must use the private APIs in the vSphere Managed Object Browser (MOB)–thanks once again to @lamw for his post describing this process here. This time I will go over those steps in detail–definitely still check out his post though.

While we will be using private APIs, we don’t have to create anything novel to use them–they are exposed through a built-in web interface of vCenter. Using your favorite browser, just navigate to https://<your vCenter>/mob. No special port is needed. Mine was https://10.21.8.120/mob/. Enter your vCenter credentials and move on.

The first thing we need to do is find the Managed Object Reference (MoRef) ID of your virtual machine. There are a variety of ways to do this; PowerCLI, if handy, is probably the simplest method to get this information:

[Screenshot: retrieving the VM MoRef with PowerCLI]
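
For example, either of these one-liners returns it (the VM name is from my lab):

 Get-VM -Name SnapVM | Select-Object Name, Id

 (Get-VM -Name SnapVM).ExtensionData.MoRef.Value

The Id comes back as something like VirtualMachine-vm-453; the trailing vm-453 portion is the MoRef you need.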

Alternatively you can use the following process to find it (you just need the datacenter name, the datastore name it is on and the name of the VM):

From the first page click:

  1. “Content” in the properties list
  2. Then the Datacenters folder ID in the rootFolder row
  3. Then the datacenter ID of the datacenter hosting your VM
  4. Then the datastore ID of a datastore that hosts any file of your target VM–the datastore name will be next to it, so you do not need to know the ID beforehand
  5. Lastly, a list of the associated VMs will appear with their names and MoRefs. Identify your VM and note its MoRef/ID. In my case the VM is called SnapVM and my reference is vm-453
  6. Now we have the information we need to run the “wipe” operation

Using your vCenter IP (or name) and the VM MoRef, you can start the wipe by entering a URL like the one below:

https://10.21.8.120/mob/?moid=vm-453&method=wipeDisk

Then click “Invoke Method” to start the process.

[Screenshot: invoking the wipeDisk method in the MOB]

You can monitor the process in the vSphere Web Client as seen below.

[Screenshot: the wipe task running in the vSphere Web Client]

This process is not quick, so find something else to do while you wait. For my particular 250 GB virtual disk it took over two hours, and I imagine it can easily take longer. Also note that the ability to edit the settings of the target VM will be locked for the duration of the wipe operation.
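
If you would rather poll progress from PowerCLI than stare at the Web Client, a quick way is to list whatever tasks are currently running and their completion percentage:

 Get-Task | Where-Object { $_.State -eq "Running" } | Select-Object Name, PercentComplete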

Once this finishes you can start the “shrink” operation in the same manner.

https://10.21.8.120/mob/?moid=vm-453&method=shrinkDisk

Go ahead and click “Invoke Method” here to start the shrink. You will see the task start in the Web Client.

[Screenshot: the shrink task starting in the vSphere Web Client]

Also not a fast operation. Took ~45 minutes in my case.

As the process moves you will see the virtual disk actively shrink in the datastore browser:

[Screenshot: datastore browser showing the VMDK actively shrinking]

The final VMDK file size went down to 1.3 GB from just under 5 GB, so the shrink wasn’t perfect, but it is certainly a lot better than nothing. On the array, though, essentially all of the space was returned: instead of the 5.75 GB physically written during the test, the volume went back down to almost its previous number. It was at 5.08 GB prior to the VDBench run that created the files, and after the shrink it settled at 5.10 GB, which effectively returned 99% of the capacity that was written to the array–pretty damn good I’d say! You can also watch esxtop to see the UNMAP commands go through. It isn’t very exciting (the UNMAP command count only went up by a dozen or so), but it is a good way to confirm that UNMAP is indeed occurring, other than just watching the written capacity drop on the array.
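
For the esxtop route, the rough recipe (field groupings can vary a bit between ESXi builds) is:

 esxtop
 # within esxtop: press u for the disk device view, then f to edit fields and enable the VAAI statistics group
 # the DELETE counter is the count of UNMAP (DELETE) commands issued to each device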

[Screenshot: the final VMDK size in the datastore browser]

[Screenshot: FlashArray GUI showing the written capacity returning to its original level]

The second image, from the FlashArray GUI, shows the written capacity of the volume hosting the VMFS. You can see when the data was initially written and then, on the far right, when it was returned (the darker color in the gap was due to me taking a snapshot and then deleting it). Capacity-wise it is back down to where it was in the first place. Sweet.

In conclusion, yep…looking forward to VVols.
