Issue with Manual VMFS-6 UNMAP and Block Count

So vSphere 6.5 introduced VMFS-6, which came with the highly-desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP on it. A couple of immediate reasons come to mind:

  • They disabled automatic UNMAP on the VMFS for some reason
  • They need to get space back quickly and don’t have time to wait

When you run manual UNMAP, one of the options you can specify is the block count. Since ESXi 5.5, the UNMAP process iterates through the free space of the VMFS, issuing reclaim to one small segment at a time until UNMAP has been issued to all of the free space. The block count dictates how big each segment is. By default, ESXi uses 200 blocks (which is 200 MB).
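
For reference, here is what that looks like from the ESXi shell. This is just a minimal sketch; the datastore name is a placeholder, and 200 is the same reclaim unit ESXi would use if you left the option off entirely:

    # Manual space reclamation on a VMFS datastore with an explicit
    # reclaim unit (block count); "MyDatastore" is a placeholder name
    esxcli storage vmfs unmap -l MyDatastore -n 200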

Many arrays, the FlashArray included, prefer a higher block count, which makes the process finish much faster, as I showed in this earlier post:

Deeper dive on vSphere UNMAP block count with Pure Storage

But since ESXi 5.5 Patch 3 there are some limits on what this block count can be:

UNMAP Block Count Behavior Change in ESXi 5.5 P3+

Anyways, I was updating my UNMAP script the other day (which can be found here) and noticed that some UNMAP operations were taking a long time. So to the hostd log I went! Even though I didn’t violate the rules described in the 5.5 P3 post above, ESXi was still using 200 blocks no matter what I entered for some volumes. In this case I entered a block count of 100:
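
If you want to check this on your own host, the hostd log is where the reclaim unit actually being used shows up. A rough way to watch it from the ESXi shell while the UNMAP runs (the exact log wording varies by build):

    # Watch hostd.log during the manual UNMAP to see the reclaim unit
    # ESXi actually uses; exact message wording varies by build
    tail -f /var/log/hostd.log | grep -i unmap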

As you can see, it does not break the 1% rule or the 75% full rule:
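
As a quick refresher on those rules from the 5.5 P3 post: the block count you specify gets reverted to 200 if it is larger than roughly 1% of the free space on the VMFS, or if the VMFS is more than 75% full. A back-of-the-envelope check with made-up numbers:

    # Hypothetical example: 10 TB VMFS-6 datastore with 4 TB free (made-up numbers)
    # 1% rule:  4 TB free = 4,194,304 one-MB blocks; 1% of that is ~41,943 blocks,
    #           so a block count of 100 is well under the limit and should be honored
    # 75% rule: 6 TB used out of 10 TB = 60% full, so this rule is not triggered either
    esxcli storage vmfs unmap -l MyDatastore -n 100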

So what’s going on? Well, I pinged my contact in VMware engineering, and it turns out this is a bug that will be fixed. So basically, this post is an FYI: if your array likes higher block counts and your manual UNMAP is now slow, it is probably due to this.

That being said, come on people! Use the automatic UNMAP and move away from this manual stuff!
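
If you are not sure whether automatic UNMAP is on for a given VMFS-6 datastore, it only takes a second to check (and re-enable) from the ESXi shell; the datastore name below is a placeholder:

    # Check the automatic UNMAP (space reclamation) setting on a VMFS-6 datastore
    esxcli storage vmfs reclaim config get --volume-label=MyDatastore
    # Re-enable it at the default priority of "low"
    esxcli storage vmfs reclaim config set --volume-label=MyDatastore --reclaim-priority=low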

 
