What’s new in ESXi 6.5 Storage Part I: UNMAP

So as you might be aware, vSphere 6.5 just went GA.

There is quite a bit of new stuff in this release and there have certainly been quite a few blogs concerning the flagship features. I want to take some time to dive into some new core storage features that might be somewhat less heralded. Let’s start with my favorite topic. UNMAP.

There are a couple of enhancements in space reclamation in ESXi 6.5:

  1. Automatic UNMAP. This is the big one.
  2. Linux-based In-Guest UNMAP support

Automatic UNMAP

This is something that was once in existence back in ESXi 5.0, but was withdrawn for a variety of reasons. It is finally back! Yay!

So let’s talk about what it requires:

  • ESXi 6.5+
  • vCenter 6.5+
  • VMFS 6
  • An array that supports UNMAP
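
If you are not sure which VMFS version a datastore is using, a quick check from any host that mounts it:

esxcli storage filesystem list

The Type column should report VMFS-6 for datastores formatted with the new filesystem.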

So first, let’s take a look at turning it on. The feature can be seen on the “Configure” tab on the datastore object in vCenter:

[Screenshot: Space Reclamation settings on the datastore's Configure tab]

When you click the “Edit” button, you can turn UNMAP on or off. The default is on.

[Screenshot: the Edit Space Reclamation dialog]

You can also configure it with esxcli:

[Screenshot: setting the space reclamation priority with esxcli]
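
For reference, the esxcli side of this looks roughly like the following; the datastore name is just an example, so verify the exact syntax on your build:

esxcli storage vmfs reclaim config get --volume-label FlashArrayVMFSUNMAP
esxcli storage vmfs reclaim config set --volume-label FlashArrayVMFSUNMAP --reclaim-priority low

Setting the priority to none disables automatic reclamation for that volume.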

Now there is a bit of confusion with the settings here. The almost-GA build I have still allows you to set UNMAP to low, medium, or high. My sources at VMware tell me that only low is currently implemented in the kernel, so setting this to anything other than none or low will not change anything at this time. Keep an eye on this; I still need to officially confirm it.

Now the important thing to note is that this is asynchronous UNMAP, meaning space is not reclaimed as soon as you issue the delete. If you want immediate results, your handy esxcli storage vmfs unmap command will still do the trick (see the example below). Instead, ESXi 6.5 introduces a crawler that runs on each ESXi host and, at certain intervals, runs reclamation against its datastores to get space back. This is not immediate in any sense; expect to see results within a day or so. The nice thing is that dead space will eventually be taken care of, so you no longer need to worry about it from a VMFS perspective.
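
As a reminder, the manual reclaim command referenced above looks like this (the datastore name is just an example):

esxcli storage vmfs unmap --volume-label FlashArrayVMFSUNMAP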

So how can I tell if it is working? Well, since reclamation could happen at any time, this is a bit tougher. Of course, watching your array is an option: is space going down? If yes, it is likely working.

From an ESXi perspective, you can run this on an ESXi server that hosts the volume:

vsish -e get /vmkModules/vmfs3/auto_unmap/volumes/FlashArrayVMFSUNMAP/properties

Volume specific unmap information {
 Volume Name :FlashArrayVMFSUNMAP
 FS Major Version :24
 Metadata Alignment :4096
 Allocation Unit/Blocksize :1048576
 Unmap granularity in File :1048576
 Volume: Unmap IOs :2
 Volume: Unmapped blocks :48
 Volume: Num wait cycles :0
 Volume: Num from scanning :2088
 Volume: Num from heap pool :2
 Volume: Total num cycles :20225
}

The "Volume: Unmap IOs" and "Volume: Unmapped blocks" counters will tell you whether anything has been issued to that VMFS. Vsish is not technically supported by VMware, so I am looking for a better option here.
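
If you are not sure of the exact volume name to plug into that path, listing the auto_unmap node should show what is registered there (again, vsish is unsupported, so treat this as exploratory only):

vsish -e ls /vmkModules/vmfs3/auto_unmap/volumes/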

Verifying In-Guest UNMAP Support with Linux

In vSphere 6.0, the ability to reclaim in-guest dead space with native OS UNMAP was introduced. Due to the SCSI version support in virtual disks, only Windows Server 2012 R2 was able to take advantage of this behavior.

See this from VMware KB 2112333:

“…Some guest OSes that support unmapping of blocks, such as Linux-based systems, do not generate UNMAP commands on virtual disks in vSphere 6.0. This occurs because the level of SCSI support for ESXi 6.0 virtual disks is SCSI-2, while Linux expects 5 or higher for SPC-4 standard. This limitation prevents generation of UNMAP commands until the virtual disks are able to claim support for at least SPC-4 SCSI commands…”

In vSphere 6.5, SPC-4 support was added enabling in-guest UNMAP with Linux-based virtual machines. So what are the requirements to get this to work?

  • ESXi 6.5
  • Thin virtual disks
  • Guest that supports UNMAP (as well as its filesystem)

First, how do we know it is working? There are a couple of ways. You can of course check the above requirements, but how about from the guest?

Two things to verify. First, is it the right type of virtual disk (thin)? This can be checked with the sg_vpd utility by reading the 0xB2 (Logical Block Provisioning) VPD page of the virtual disk. The simplest way is a command like so:

sg_vpd /dev/sdc -p lbpv

Of course replace the device identifier (/dev/sdx).

The property you will look at is “Unmap command supported (LBPU):”.

If it is set to 0, the virtual disk is thick (lazy or eager zeroed):

pureuser@ubuntu:/mnt$ sudo sg_vpd /dev/sdc -p lbpv
Logical block provisioning VPD page (SBC):
 Unmap command supported (LBPU): 0
 Write same (16) with unmap bit supported (LBWS): 0
 Write same (10) with unmap bit supported (LBWS10): 0
 Logical block provisioning read zeros (LBPRZ): 0
 Anchored LBAs supported (ANC_SUP): 0
 Threshold exponent: 1
 Descriptor present (DP): 0
 Provisioning type: 0

If it is set to 1 it is thin:

pureuser@ubuntu:/mnt/unmap$ sudo sg_vpd /dev/sdc -p lbpv
Logical block provisioning VPD page (SBC):
 Unmap command supported (LBPU): 1
 Write same (16) with unmap bit supported (LBWS): 0
 Write same (10) with unmap bit supported (LBWS10): 0
 Logical block provisioning read zeros (LBPRZ): 1
 Anchored LBAs supported (ANC_SUP): 0
 Threshold exponent: 1
 Descriptor present (DP): 0
 Provisioning type: 2

Also provisioning type seems to be set to 2 when it is thin, but I haven’t done enough testing or investigation to confirm that is true in all cases.

Note that LBPU will still report as 1 on a thin virtual disk in ESXi 6.0 and earlier; UNMAP operations simply do not work due to the SCSI version. So how do I know the SCSI version? sg_inq is your friend here. Run the following, once again replacing your device:

sg_inq /dev/sdc -d

On an ESXi 6.0 virtual disk, we see this:

pureuser@ubuntu:/mnt/unmap$ sudo sg_inq /dev/sdc -d
standard INQUIRY:
 PQual=0 Device_type=0 RMB=0 version=0x02 [SCSI-2]
 [AERC=0] [TrmTsk=0] NormACA=0 HiSUP=0 Resp_data_format=2
 SCCS=0 ACC=0 TPGS=0 3PC=0 Protect=0 [BQue=0]
 EncServ=0 MultiP=0 [MChngr=0] [ACKREQQ=0] Addr16=0
 [RelAdr=0] WBus16=1 Sync=1 Linked=0 [TranDis=0] CmdQue=1
 length=36 (0x24) Peripheral device type: disk
 Vendor identification: VMware
 Product identification: Virtual disk
 Product revision level: 1.0

Note the SCSI version at the top (SCSI-2) as well as the product revision level (1.0). Now for a virtual disk in 6.5:

pureuser@ubuntu:/mnt$ sudo sg_inq -d /dev/sdb
standard INQUIRY:
 PQual=0 Device_type=0 RMB=0 version=0x06 [SPC-4]
 [AERC=0] [TrmTsk=0] NormACA=0 HiSUP=0 Resp_data_format=2
 SCCS=0 ACC=0 TPGS=0 3PC=0 Protect=0 [BQue=0]
 EncServ=0 MultiP=0 [MChngr=0] [ACKREQQ=0] Addr16=0
 [RelAdr=0] WBus16=1 Sync=1 Linked=0 [TranDis=0] CmdQue=1
 length=36 (0x24) Peripheral device type: disk
 Vendor identification: VMware
 Product identification: Virtual disk
 Product revision level: 2.0

SCSI version 0x06 and SPC-4 compliance! Sweeeet. Also note the product revision level of 2.0; every virtual disk I had seen before reported 1.0, and the bump in 6.5 goes along with the SCSI version increase.

Okay. We have identified that we are supported. How does it work?

Executing In-Guest UNMAP with Linux

So there are a few options for reclaiming space in Linux:

  1. Mounting the filesystem with the discard option. This reclaims space automatically when files are deleted
  2. Running sg_unmap. This allows you to run UNMAP on specific LBAs.
  3. Running fstrim. This issues trim commands which ESXi converts to UNMAP operations at the vSCSI layer

The discard mount option is by far the best choice. sg_unmap requires quite a bit of manual work and decent know-how of logical block address placement. fstrim still runs into some alignment issues; I am still working on that, but a quick example is shown below for reference.
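
For completeness, a manual fstrim pass would look something like this; the mount point is the one used in the walkthrough below:

sudo fstrim -v /mnt/unmaptest

The -v flag reports how many bytes were trimmed.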

Let’s walk through the discard version.

In the following scenario an EXT4 filesystem was created and mounted in an Ubuntu guest with the discard option.

pureuser@ubuntu:/mnt$ sudo mount /dev/sdd /mnt/unmaptest -o discard
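
If you want the discard behavior to persist across reboots, the equivalent /etc/fstab entry would look something like this (device, mount point, and filesystem type are taken from the example above):

/dev/sdd  /mnt/unmaptest  ext4  defaults,discard  0  2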

This virtual disk was thinly provisioned. Four files, about 13.5 GB in aggregate size, were added to the filesystem.
[Screenshot: four files totaling about 13.5 GB on the filesystem]

The thin virtual disk grew to 13.5 GB after the file placement.

[Screenshot: the thin virtual disk grown to 13.5 GB]
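
If you would rather check this from the ESXi host than from the vSphere client, du against the flat file reports the space actually allocated to a thin virtual disk; the path here is just an illustration:

du -h /vmfs/volumes/FlashArrayVMFSUNMAP/ubuntu/ubuntu-flat.vmdk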

The files were then deleted with the rm command:

[Screenshot: deleting the files with rm]

Due to the discard option being used, the space was reclaimed. The virtual disk shrank by 13 GB:

[Screenshot: the thin virtual disk shrunk by 13 GB]

ESXi then issued its own UNMAP command because EnableBlockDelete was enabled on the host that was running the Ubuntu virtual machine. ESXTOP shows UNMAP being issued in the DELETE column:

[Screenshot: ESXTOP showing UNMAPs in the DELETE column]
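
For reference, EnableBlockDelete is a host-level advanced setting and can be checked or enabled with esxcli:

esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete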

The space is then reclaimed on the FlashArray volume hosting the VMFS datastore that contains the thin virtual disk:

[Screenshot: space reclaimed on the FlashArray volume in the GUI]

One thing you might note is that this is not exactly 100% efficient; I have noticed that too. The thin virtual disk does not always shrink down by the exact size of what was deleted. It gets close, but not perfect. Stay tuned on this; I have made progress on some additional details surrounding it.

This is just the start; a lot more 6.5 blogging is coming up!

17 thoughts on “What’s new in ESXi 6.5 Storage Part I: UNMAP”

    1. Thanks!! It should work with local disks as long as those have the proper trim/unmap support on them. Though I cannot guarantee that, as I do not have any local disk with the proper support to test it. But in theory the vSCSI layer doesn't treat them differently from SAN-attached storage conceptually. I am not sure about the RPC truncate either (I don't have any NFS), though let me ask around!

  1. Do I need to enable EnableBlockDelete on VMFS6 with the following command?

    esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete

      1. Cody thanks for your reply!

        Since I'm not a native speaker, I want to ensure I have understood you correctly, so I will re-phrase my question: do I need to enable UNMAP somehow for VMFS6 in vSphere 6.0/6.5?

        Thanks!

          1. To use in-guest UNMAP the virtual disk needs to be thin. The underlying physical LUN does not have to be thin, but most are these days. To use VMFS UNMAP–the physical LUN does have to be thin (for the most part, though there are some exceptions out there)

          1. This is still required in VMFS 5 (vSphere 6.0 or 6.5). If you are using VMFS-6 in vSphere 6.5 you do not need to use that command for that volume because ESXi will automatically unmap in the background. Though it can take some time (up to a day). If you want to reclaim space immediately though you can still run the command

        1. You are very welcome! VMFS 6 is only in vSphere 6.5 so it is not in 6.0. If you format a volume in vSphere 6.5 as VMFS-6 UNMAP is enabled by default–you do not need to do anything to enable it.
