What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP

This post is part of my vSphere 6.7 core storage “what’s new” series.

VMware has continued to improve and refine automatic UNMAP in vSphere 6.7. In vSphere 6.5, VMFS-6 introduced automatic space reclamation, so that you no longer had to run UNMAP manually to reclaim space after virtual disks or VMs had been deleted.
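
If you want to inspect the current reclamation settings on a VMFS-6 datastore, esxcli is the quickest way; here it is wrapped in PowerCLI via Get-EsxCli. A minimal sketch, assuming placeholder host and datastore names:

$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local") -V2
# Show the space reclamation configuration for a VMFS-6 datastore
$esxcli.storage.vmfs.reclaim.config.get.Invoke(@{volumelabel = "Datastore1"})

vSphere 6.7 adds rate-control options to the corresponding set command on top of the priority setting from 6.5; run $esxcli.storage.vmfs.reclaim.config.set.Help() to see what your build supports.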

Continue reading “What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP”

VMware Capacity Reporting Part V: VVols and UNMAP

Storage capacity reporting seems like a pretty straightforward topic: how much storage am I using? But when you introduce multiple levels of thin provisioning AND data reduction, not all usage is equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.

NOTE: Examples in this series are given from a FlashArray perspective, so mileage may vary depending on the type of array you have. The VMFS layer and above, though, are the same for all. This is the benefit of VMFS: it abstracts the physical layer. This is also the downside, as I will describe in these posts.
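
Before digging into the array side in the later parts, it helps to have the VMFS-level numbers in front of you. A minimal PowerCLI sketch (nothing array-specific here):

# VMFS-level capacity: what the file system reports, before any
# array-side thin provisioning or data reduction is considered
Get-Datastore | Where-Object { $_.Type -eq "VMFS" } | Select-Object Name,
    @{N = "CapacityGB"; E = { [math]::Round($_.CapacityGB, 2) } },
    @{N = "UsedGB";     E = { [math]::Round($_.CapacityGB - $_.FreeSpaceGB, 2) } },
    @{N = "ProvisionedGB"; E = {
        $s = $_.ExtensionData.Summary
        [math]::Round(($s.Capacity - $s.FreeSpace + $s.Uncommitted) / 1GB, 2) } }

A ProvisionedGB value larger than CapacityGB tells you thin virtual disks are overcommitting the datastore, which is exactly where the reporting gets interesting.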

Continue reading “VMware Capacity Reporting Part V: VVols and UNMAP”

Monitoring Automatic VMFS-6 UNMAP in ESXi

With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don’t have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous), you lose the instant gratification of reclamation.

So you do find yourself wondering, did it actually reclaim anything?

Besides looking at the array and seeing the reclaimed space there, how can I tell from within ESXi whether my space was actually reclaimed?
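
One quick host-side sanity check is confirming that automatic reclamation is even enabled on the datastore (the VAAI DELETE counters in esxtop’s disk device view are another place to watch it happen). A sketch using the esxcli interface from PowerCLI, with placeholder names:

# Check the automatic reclamation settings for a VMFS-6 datastore
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local") -V2
$esxcli.storage.vmfs.reclaim.config.get.Invoke(@{volumelabel = "Datastore1"})
# A reclaim priority of "low" (the default) means automatic UNMAP is on;
# "none" means someone has turned it off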

Continue reading “Monitoring Automatic VMFS-6 UNMAP in ESXi”

In-Guest UNMAP, EnableBlockDelete and VMFS-6

EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn the automatic VMFS UNMAP feature introduced in 5.0 on and off, a feature that was eventually canned in 5.0 U1.

The description of the setting back in 5.0 was “Enable VMFS block delete”. The setting was then hidden and made defunct (it did nothing whether you turned it off or on) until ESXi 6.0, when the description changed to “Enable VMFS block delete when UNMAP is issued from guest OS”.
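
Checking or flipping the setting across hosts is easy with PowerCLI. A minimal sketch:

# Report EnableBlockDelete for every host in the connected vCenter
Get-VMHost | Get-AdvancedSetting -Name "VMFS3.EnableBlockDelete" |
    Select-Object Entity, Value

# Enable it on one host (1 = enabled, 0 = disabled)
Get-VMHost "esx01.lab.local" | Get-AdvancedSetting -Name "VMFS3.EnableBlockDelete" |
    Set-AdvancedSetting -Value 1 -Confirm:$false

Continue reading “In-Guest UNMAP, EnableBlockDelete and VMFS-6”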

Unattended VMFS UNMAP Script

I updated my UNMAP PowerCLI script a month or so ago and improved quite a few things, but I also removed the hard-coded variables and replaced them with interactive input. That is fine for some, but for many it was not.

Note: Move to VMFS-6 in vSphere 6.5 and you don’t have to worry about this UNMAP business anymore 🙂

Essentially, quite a few people want to run it as a scheduled task in Windows, and if the script requires input, that just isn’t going to work out of the box. So I have created an unattended version of the script; a sample scheduled-task registration follows below. For details read on.
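
For reference, registering a script as a weekly scheduled task from PowerShell looks something like this. A sketch only: the script path is a placeholder and the ScheduledTasks cmdlets require Windows 8/Server 2012 or later:

# Run the unattended UNMAP script every Sunday at 2 AM
# (C:\Scripts\UnmapUnattended.ps1 is a placeholder path)
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\UnmapUnattended.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 2am
Register-ScheduledTask -TaskName "VMFS UNMAP" -Action $action -Trigger $trigger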

Note: I will continue to update the script (bugs, features, etc.) but will note them on my other blog post about the script here:

Pure Storage FlashArray UNMAP PowerCLI Script for VMware ESXi

I will only update this post if the unattended version changes in a way that makes these instructions wrong.

Continue reading “Unattended VMFS UNMAP Script”

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

This is the second part of this two-part post. In the first part, I explained the fix and how it affected Windows. In this part, we will go over how the change affects Linux-based virtual machines. See the original post here:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

I posted about In-Guest UNMAP with Linux VMs in this post:

What’s new in ESXi 6.5 Storage Part I: UNMAP

One thing you may have noticed is that automatic UNMAP worked quite well, but manual UNMAP, like fstrim, did not. So let’s revisit fstrim now that this patch is out.

Continue reading “In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux”

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

As you might’ve seen, Cormac Hogan just posted about an UNMAP fix that was recently released. This is a fix I have been eagerly awaiting for some time, so I am very happy to see it out. And thankfully it does not disappoint.

First off, some official information:

Release notes:

https://kb.vmware.com/kb/2148989

Manual patch download:

https://my.vmware.com/group/vmware/patch#search

Or, if your ESXi host has internet access, you can run esxcli to download and install the patch automatically:

esxcli software profile update -p ESXi-6.5.0-20170304001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

Continue reading “In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows”

Issue with Manual VMFS-6 UNMAP and Block Count

So vSphere 6.5 introduced VMFS-6, which came with the highly desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP. Immediate reasons that come to mind are:

  • They disabled automatic UNMAP on the VMFS for some reason
  • They need to get space back quickly and don’t have time to wait

When you run manual UNMAP, one of the options you can specify is the block count. Since 5.5, the UNMAP process iterates through the VMFS, issuing reclaims to one small segment at a time until UNMAP has been issued to all of the free space. The block count dictates how big each segment is. By default, ESXi will use 200 blocks (which is 200 MB).
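
For reference, issuing a manual UNMAP with an explicit block count looks like this through PowerCLI’s esxcli interface (placeholder names again):

# Reclaim free space 16,000 blocks (~16 GB) at a time instead of the default 200
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local") -V2
$esxcli.storage.vmfs.unmap.Invoke(@{volumelabel = "Datastore1"; reclaimunit = "16000"})

Continue reading “Issue with Manual VMFS-6 UNMAP and Block Count”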

Detecting what FlashArray VMFS Volumes Have Dead Space

Another UNMAP post; are you shocked? A common question that came up was: which volumes have dead space? Which datastores should I run UNMAP on?

My usual response was, well, it is hard to say. Dead space is introduced when you move or delete a VM. The array will not release the space until you either delete the physical volume, overwrite it, or issue UNMAP. Until vSphere 6.5, UNMAP for VMFS was not automatic; you had to run a CLI command to do it. So that leads back to the question: I have 100 datastores, which ones should I run it on?

So to find out, you need to know two things (a quick sketch follows the list):

  1. How much space the file system reports as currently being used.
  2. How much space the array is physically storing for the volume hosting that file system.
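
Number one is easy to pull with PowerCLI; number two has to come from your array’s own tooling. A sketch of the VMFS side (the FlashArray cmdlet named in the comment is from the Pure Storage PowerShell SDK and is illustrative only):

# 1. What the file system reports as currently used
$ds = Get-Datastore "Datastore1"
$fsUsedGB = [math]::Round($ds.CapacityGB - $ds.FreeSpaceGB, 2)
"VMFS reports $fsUsedGB GB in use"

# 2. Compare against what the array is physically storing for the backing
# volume, e.g. via Get-PfaVolumeSpaceMetrics on a FlashArray. When the
# array-side number is much larger than $fsUsedGB, that datastore is a
# good UNMAP candidate.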

Continue reading “Detecting what FlashArray VMFS Volumes Have Dead Space”

Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware

A while back, shortly after ESXi 6.0 came out, I posted about how to do in-guest UNMAP with Windows. See the original post here:

Direct Guest OS UNMAP in vSphere 6.0

The high-level workflow, if you don’t want to read the post, is:

  1. You delete a file in Windows
  2. Run Disk Optimizer to reclaim the space
  3. Windows issues UNMAP to the filesystem
  4. ESXi shrinks the virtual disk
  5. If EnableBlockDelete is enabled on the ESXi hosts, ESXi will issue UNMAP to reclaim the space on the array

This had a few requirements (a quick in-guest example follows the list):

  • ESXi 6.0+
  • VM hardware version 11+
  • Thin virtual disk
  • CBT cannot be enabled (though this restriction is removed in ESXi 6.5; see this post)
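
Both halves of the in-guest workflow can be driven from PowerShell inside the VM. A sketch, with the drive letter as a placeholder:

# Check the NTFS allocation unit size (reported as "Bytes Per Cluster")
fsutil fsinfo ntfsinfo D: | Select-String "Bytes Per Cluster"

# Kick off a retrim, which is what Disk Optimizer does for thin virtual disks;
# Windows then issues UNMAP for the free space on the volume
Optimize-Volume -DriveLetter D -ReTrim -Verbose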

Continue reading “Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware”