VMFS UNMAP switches block count

A recent question I received about my UNMAP PowerCLI script: the script says it is using a certain block count, but the hostd log shows it using 200. Why?

Well, I have blogged before about why a given UNMAP process might revert to the default block count of 200 here. Essentially, if you specify a block count larger than 1% of the free space on the VMFS, ESXi will revert it to 200. Also, if the VMFS is more than 75% full, ESXi will always override the block count back down to 200.

The question was: does my script calculate the optimal block count? (The larger the block count, the faster the reclaim process runs on the FlashArray.) It does. It finds the 1% number, and if it detects that the volume is more than 75% full it uses 200, because it has no choice.
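
For reference, the core of that calculation is simple. Here is a minimal PowerCLI sketch of the idea (not my full script), assuming you are already connected to vCenter with Connect-VIServer and that the block count is expressed in 1 MB VMFS blocks:

# Minimal sketch of the block count calculation, not the full script
$ds = Get-Datastore -Name "InfrastructureCody"      # datastore name from the example below

$capacityMB = $ds.CapacityGB * 1024
$freeMB     = $ds.FreeSpaceGB * 1024
$usedPct    = (($capacityMB - $freeMB) / $capacityMB) * 100

if ($usedPct -gt 75) {
    # More than 75% full: ESXi will force 200 anyway, so just use it
    $blockCount = 200
}
else {
    # Otherwise use (just under) 1% of the free space, counted in 1 MB blocks
    $blockCount = [math]::Floor($freeMB * 0.01)
}

Write-Host "Block count to pass to esxcli storage vmfs unmap -n: $blockCount"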

Great, so what’s the problem here?

Well, the situation was that the datastore was not 75% full, so the script calculated the 1% value and reported it in the script log file as follows:

7/27/2016 11:17:02 AM
The datastore named InfrastructureCody is being examined
The UUID for this volume is:
naa.624a93705bf1c41fe35945c4000110ca
The volume is on the FlashArray csg-fa420-1.purecloud.local
This datastore is a Pure Storage volume named InfrastructureCody

The ESXi named esxi-03.purecloud.local will run the UNMAP/reclaim operation

The current data reduction for this volume before UNMAP is 4.609 to 1
The physical space consumption in MB of this device before UNMAP is 46,525.651

The maximum allowed block count for this datastore is 364285

The UNMAP for this volume was taking a very long time, so the person looked at the hostd log on the ESXi host running the UNMAP and saw it was using a block count of 200, not the 364,285 that the script reported:

2016-07-27T16:13:31.815Z info hostd[25A81B70] [Originator@6876 sub=Libs opID=c67376dc user=root] Unmap: Async Unmapped 200 blocks from volume 5798cfbb-b2fb7f3a-ba13-0050cc692d49

ESXi writes an entry like this for each iteration of an UNMAP operation, along with the block count used for that iteration, until UNMAP has been issued to all of the free blocks on the VMFS. So why was it 200?

There are only two reasons ESXi reverts to 200, and the important point is that this check is performed at EVERY iteration of the process. So while a calculated non-default block count can be valid when the process begins, the allocation on the VMFS can change mid-run. If the volume suddenly becomes more than 75% full, for instance due to a new VM or a VM that grows, the original calculation is no longer valid and ESXi reverts to 200.
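
To make that concrete, here is a rough, purely illustrative PowerShell model of the check. This is not ESXi's actual code, just the two rules above expressed as they apply at each iteration (the numbers mirror the 4 TB example that follows):

# Illustrative model only, NOT ESXi source code: the two rules applied per iteration
function Get-EffectiveBlockCount {
    param(
        [long]$RequestedBlocks,   # the -n value passed to esxcli
        [long]$FreeMB,            # VMFS free space (in MB) at this iteration
        [long]$CapacityMB         # total VMFS capacity in MB
    )
    $usedPct = (($CapacityMB - $FreeMB) / $CapacityMB) * 100
    $onePct  = [math]::Floor($FreeMB * 0.01)

    if ($usedPct -gt 75 -or $RequestedBlocks -gt $onePct) {
        return 200               # revert to the default block count
    }
    return $RequestedBlocks      # the requested value is honored this iteration
}

# Datastore nearly empty: the requested count is honored
Get-EffectiveBlockCount -RequestedBlocks 41900 -FreeMB 4194304 -CapacityMB 4194304    # returns 41900

# Same request after a large disk fills the datastore past 75%: reverted to 200
Get-EffectiveBlockCount -RequestedBlocks 41900 -FreeMB 614400 -CapacityMB 4194304     # returns 200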

So let's take an example. I have a 4 TB VMFS that is completely empty, so the block count equal to 1% of the free space would be about 41,943 blocks: 4 TB is 4,194,304 MB, and 1% of that is roughly 41,943 one-MB blocks (~40 GB). So I will use 41,900, which is just slightly below 1% of the free space. My UNMAP command looks like this:

esxcli storage vmfs unmap -l unmaptestvol -n 41900

Then I immediately deploy a 3,400 GB (roughly 3.4 TB) zeroedthick virtual disk to consume a bunch of space on that VMFS:

vmkfstools --createvirtualdisk 3400G --diskformat zeroedthick testdisk1.vmdk

So now the free space is only about 600 GB, and the appropriate block count would be roughly 6,000 (1% of 600 GB, or about 6 GB), which is far smaller than the 41,900 issued with the command. You cannot change the block count once the command has been sent, so as soon as that value is no longer valid, ESXi overrides it for the remaining steps of the UNMAP. So look at my hostd log:

2016-07-27T16:13:31.070Z info hostd[25A81B70] [Originator@6876 sub=Libs opID=c67376dc user=root] Unmap: Async Unmapped 41900 blocks from volume 5798cfbb-b2fb7f3a-ba13-0050cc692d49
2016-07-27T16:13:31.270Z info hostd[25A81B70] [Originator@6876 sub=Libs opID=c67376dc user=root] Unmap: Async Unmapped 41900 blocks from volume 5798cfbb-b2fb7f3a-ba13-0050cc692d49
2016-07-27T16:13:31.419Z info hostd[25A81B70] [Originator@6876 sub=Libs opID=c67376dc user=root] Unmap: Async Unmapped 41900 blocks from volume 5798cfbb-b2fb7f3a-ba13-0050cc692d49
2016-07-27T16:13:31.750Z info hostd[25A81B70] [Originator@6876 sub=Libs opID=c67376dc user=root] Unmap: Async Unmapped 41900 blocks from volume 5798cfbb-b2fb7f3a-ba13-0050cc692d49
2016-07-27T16:13:31.815Z info hostd[25A81B70] [Originator@6876 sub=Libs opID=c67376dc user=root] Unmap: Async Unmapped 200 blocks from volume 5798cfbb-b2fb7f3a-ba13-0050cc692d49

It was fine for the first four iterations, but by the fifth, the VMFS free capacity had changed, so the value was overridden for the remainder of that UNMAP process.

So if you see your UNMAP process slow down, even though your specified block count was honored at first, it might be because the VMFS free capacity changed enough during the UNMAP run.
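
By the way, if you want to check which block count is actually being used without SSH-ing into the host, PowerCLI can pull the hostd log for you. A quick sketch, using the host name from the example above:

# Pull the hostd log from the host running the UNMAP and look at the per-iteration entries
$esx = Get-VMHost -Name "esxi-03.purecloud.local"
$hostdLog = Get-Log -Key "hostd" -VMHost $esx

# Show the most recent Unmap iterations and the block count each one used
$hostdLog.Entries | Where-Object { $_ -match "Unmap: Async Unmapped" } | Select-Object -Last 20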
