Issue with Manual VMFS-6 UNMAP and Block Count

So vSphere 6.5 introduced VMFS-6, which came with the highly desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP on it. Immediate reasons that come to mind are:

  • They disabled automatic UNMAP on the VMFS for some reason
  • They need to get space back quickly and don’t have time to wait

When you run manual UNMAP, one of the options you can specify is the block count. Since ESXi 5.5, the UNMAP process iterates through the VMFS, issuing reclaim to one small segment of the free space at a time until UNMAP has been issued to all of it. The block count dictates how big each segment is. By default, ESXi uses 200 blocks (which is 200 MB, since each block is 1 MB). Continue reading “Issue with Manual VMFS-6 UNMAP and Block Count”
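For reference, here is a minimal sketch of kicking off manual UNMAP with a specific block count through PowerCLI's esxcli interface. The vCenter, host, and datastore names are placeholders, and the reclaim unit of 200 simply mirrors the default described above:

```powershell
# Connect and get the esxcli v2 interface for a host that sees the datastore
Connect-VIServer -Server "vcenter.example.com"
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# Build arguments for "esxcli storage vmfs unmap"
$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = "MyDatastore"  # VMFS datastore to reclaim
$unmapArgs.reclaimunit = "200"          # blocks per iteration (200 x 1 MB = 200 MB)

# Iteratively issues UNMAP to all free space; can take a while on large datastores
$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)
```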

Managing In-Guest UNMAP and Automatic VMFS-6 UNMAP with PowerCLI

Another UNMAP post. I was working on updating my best practices script the other day and I realized that a lot of UNMAP configuration from a PowerCLI standpoint was not well documented, especially the vSphere 6.5 stuff, which introduces automatic UNMAP to VMFS. Automatic UNMAP is great. But what if someone turns it off? Or what if, for some reason, I want to disable it? Or I want to make sure it is on? Well, there are a lot of ways to do this, so let's look at PowerCLI.

Continue reading “Managing In-Guest UNMAP and Automatic VMFS-6 UNMAP with PowerCLI”
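As a taste of what the post covers, here is a sketch of checking and re-enabling automatic UNMAP on a VMFS-6 datastore via the esxcli v2 interface. The host and datastore names are placeholders; in 6.5 the reclaim priority is either "low" (the default) or "none":

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# Report the current automatic UNMAP (reclaim) settings for the datastore
$getArgs = $esxcli.storage.vmfs.reclaimconfig.get.CreateArgs()
$getArgs.volumelabel = "MyDatastore"
$esxcli.storage.vmfs.reclaimconfig.get.Invoke($getArgs)

# Set the priority back to "low" to turn automatic UNMAP on ("none" disables it)
$setArgs = $esxcli.storage.vmfs.reclaimconfig.set.CreateArgs()
$setArgs.volumelabel = "MyDatastore"
$setArgs.reclaimpriority = "low"
$esxcli.storage.vmfs.reclaimconfig.set.Invoke($setArgs)
```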

Detecting what FlashArray VMFS Volumes Have Dead Space

Another UNMAP post. Are you shocked? A common question that comes up is: which volumes have dead space? Which datastores should I run UNMAP on?

My usual response was, well, it is hard to say. Dead space is introduced when you move or delete a VM. The array will not release the space until you either delete the physical volume, overwrite it, or issue UNMAP. Until vSphere 6.5, UNMAP for VMFS was not automatic; you had to run a CLI command to do it. So that leads back to the question: I have 100 datastores, which ones should I run it on?

So to find out, you need to know two things (a sketch of the comparison follows the excerpt):

  1. How much space the file system reports as currently being used.
  2. How much space the array is physically storing for the volume hosting that file system.

Continue reading “Detecting what FlashArray VMFS Volumes Have Dead Space”
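Here is a rough sketch of that comparison. It assumes the Pure Storage PowerShell SDK (New-PfaArray, Get-PfaVolumeSpaceMetrics), that each datastore name matches its FlashArray volume name, and that multiplying a volume's unique space by its data reduction ratio approximates what was actually written to it; all names and credentials are placeholders:

```powershell
# Connect to the FlashArray (placeholder endpoint and credentials)
$flashArray = New-PfaArray -EndPoint "flasharray.example.com" -Credentials (Get-Credential) -IgnoreCertificateError

foreach ($datastore in Get-Datastore | Where-Object { $_.Type -eq "VMFS" }) {
    # 1. How much space the file system reports as currently used
    $vmfsUsedGB = $datastore.CapacityGB - $datastore.FreeSpaceGB

    # 2. How much the array has written for the backing volume (undoing data reduction)
    $metrics = Get-PfaVolumeSpaceMetrics -Array $flashArray -VolumeName $datastore.Name
    $arrayWrittenGB = ($metrics.volumes * $metrics.data_reduction) / 1GB

    # A large positive gap suggests dead space that UNMAP could reclaim
    [pscustomobject]@{
        Datastore       = $datastore.Name
        VMFSUsedGB      = [math]::Round($vmfsUsedGB, 2)
        ArrayWrittenGB  = [math]::Round($arrayWrittenGB, 2)
        PotentialDeadGB = [math]::Round($arrayWrittenGB - $vmfsUsedGB, 2)
    }
}
```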

VMFS Capacity Monitoring in a Data Reducing World

A question about VMFS capacity alerts recently came up on the Pure Storage Community Forum; to paraphrase:

“I am constantly getting capacity threshold (75%) alerts on my VMFS volumes but when I look at my FlashArray volume used capacity it is nowhere near that in used space. What can I do to make the VMware number closer to the FlashArray one so I don’t get these alerts?”

This comment really boils down to two questions: what is the difference between these numbers, and how do I handle it? So let's dig into this. Continue reading “VMFS Capacity Monitoring in a Data Reducing World”
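For context, the VMware side of those numbers is easy to pull. Here is a minimal sketch that reports VMFS used capacity per datastore against that 75% threshold (the threshold value just mirrors the question):

```powershell
Get-Datastore | Where-Object { $_.Type -eq "VMFS" } | ForEach-Object {
    $usedGB  = $_.CapacityGB - $_.FreeSpaceGB
    $usedPct = [math]::Round(($usedGB / $_.CapacityGB) * 100, 1)
    [pscustomobject]@{
        Datastore     = $_.Name
        CapacityGB    = [math]::Round($_.CapacityGB, 0)
        UsedGB        = [math]::Round($usedGB, 0)
        UsedPct       = $usedPct
        OverThreshold = $usedPct -ge 75  # the alert level from the question
    }
} | Sort-Object UsedPct -Descending
```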

What’s New in vSphere 6.5 Storage vSpeaking Podcast

Quick post. Recently I had the pleasure, alongside Cormac Hogan, of being invited onto the Virtually Speaking Podcast, hosted by Pete Flecha and John Nicholson (both in Technical Marketing for VMware's Storage and Availability products, Pete with a focus on VVols and John on vSAN).

I had a lot of fun. We spoke about the new features of vSphere 6.5 from a core storage standpoint, much of what I have been posting about in recent days: UNMAP, VMFS-6, etc. This invitation was due to our work writing the “What’s New in Core Storage in vSphere 6.5” white paper.

https://blogs.vmware.com/virtualblocks/2016/11/28/announcing-vsphere-6-5-core-storage-white-paper/

Check out the podcast here:

https://soundcloud.com/virtuallyspeakingpodcast/episode-34-vsphere-65-core-storage

Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware

A while back, shortly after ESXi 6.0 came out, I posted about how to do in-guest UNMAP with Windows. See the original post here:

Direct Guest OS UNMAP in vSphere 6.0

The high-level workflow, if you don't want to read the post, is as follows (a sketch of the key steps appears after the requirements list):

  1. You delete a file in Windows
  2. Run Disk Optimizer to reclaim the space
  3. Windows issues UNMAP to the filesystem
  4. ESXi shrinks the virtual disk
  5. If EnableBlockDelete is enabled on the ESXi hosts, ESXi will issue UNMAP to reclaim the space on the array

This had a few requirements:

  • ESXi 6.0+
  • VM hardware version 11+
  • Thin virtual disk
  • CBT cannot be enabled (though this restriction is removed in ESXi 6.5; see this post)
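
As promised, here is a minimal sketch of the two steps you actually have to drive: step 2 inside the guest, and the EnableBlockDelete prerequisite for step 5 from PowerCLI. The drive letter and host selection are placeholders:

```powershell
# Step 2, run inside the Windows guest: re-trim the volume so Windows issues UNMAP
Optimize-Volume -DriveLetter E -ReTrim -Verbose

# Step 5 prerequisite, run from PowerCLI: enable EnableBlockDelete on the hosts
Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name "VMFS3.EnableBlockDelete" |
        Set-AdvancedSetting -Value 1 -Confirm:$false
}
```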

Continue reading “Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware”

What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support

This is the fourth in my series of what’s new in ESXi 6.5 storage. Here are the previous posts:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Here is another post about vSphere 6.5 UNMAP! So many improvements, and this is a big one for many users. It certainly makes me happy. Previously, in vSphere 6.0.x, when in-guest space reclamation was introduced, enabling Changed Block Tracking (CBT) on a given virtual disk blocked the guest OS from issuing UNMAP to that disk, and therefore prevented it from leveraging the goodness UNMAP provides. Rumor has it that this undesirable behavior continued in vSphere 6.5…
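If you want to check where a VM stands, here is a quick sketch (the VM name is a placeholder) of reading the CBT flag behind the 6.0.x restriction:

```powershell
# True means CBT is on for the VM, which in 6.0.x blocked in-guest UNMAP
$vm = Get-VM -Name "MyWindowsVM"
$vm.ExtensionData.Config.ChangeTrackingEnabled
```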

[GIF: Lee Corso]

Continue reading “What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support”

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Let me start this post off by saying that the “What’s new in vSphere 6.5 Storage” white paper has been officially published and can be read here:

https://storagehub.vmware.com/#!/vsphere-core-storage/

I had the distinct pleasure of helping Cormac and Paudie with this paper. Thanks to both of them for including me and providing me with access to the engineers who wrote these features/enhancements!

So anyways, read that document for a high-level overview of all of the new features and enhancements. Previously, I have written two posts in this series:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

This is a short post; I mainly wanted to share the white paper. But it is important to note that VMware is still marching forward with improving VMFS and virtual disk flexibility, so I wanted to highlight a new enhancement: thin virtual disk hot extension.

Prior to vSphere 6.5, thin virtual disks could be hot extended, but there were limits. The main one was that if the extend operation brought the VMDK size to larger than 2 TB (or the VMDK was already 2 TB), the operation was not permitted:

[Screenshot: the extend attempt on an ESXi 6.0 U2 host]

If the VM is turned on and I try to apply this configuration change, I get an error:

[Screenshot: the hot extend fails]

So this is fixed in vSphere 6.5! And the nice thing is that it does not require either VMFS 6 or the latest version of virtual machine hardware. Just hosting the VM on a 6.5 host will provide this functionality:

[Screenshots: the same hot extend on an ESXi 6.5 host completes successfully]
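
If you prefer PowerCLI to the Web Client, here is a minimal sketch of the same operation (the VM name, disk name, and 3 TB target are placeholders):

```powershell
# Hot extend a thin virtual disk past 2 TB while the VM is powered on (6.5+)
$disk = Get-HardDisk -VM (Get-VM -Name "MyVM") -Name "Hard disk 1"
$disk | Set-HardDisk -CapacityGB 3072 -Confirm:$false
```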

Sweet! But this really just reinforces my view that there are few remaining reasons not to use thin virtual disks with the latest releases of vSphere. They are so much more flexible, and a lot of engineering is going into making them better. Not much work is being done on thick-type virtual disks. Look for an upcoming blog on some performance enhancements as well.


What’s new in ESXi 6.5 Storage Part II: Resignaturing

My second post in my vSphere 6.5 series, the first being:

What’s new in ESXi 6.5 Storage Part I: UNMAP

One of the new features from a core storage perspective is a new version of VMFS. In vSphere 6.5, VMware has released VMFS 6, the first major update to VMFS in years (VMFS 5 came out in 2011). The changes are not earth-shattering, but a lot of pain points have been removed, and A LOT of work has been put into VMFS 6 to improve concurrency of operations and speed up certain procedures. The first thing I want to mention is unresolved volume handling. Continue reading “What’s new in ESXi 6.5 Storage Part II: Resignaturing”
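For context on what “unresolved volume handling” refers to, here is a quick sketch of listing and resignaturing an unresolved (snapshot) VMFS volume via the esxcli v2 interface; the host and volume label are placeholders:

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# List VMFS volume copies (snapshots/replicas) the host can see
$esxcli.storage.vmfs.snapshot.list.Invoke()

# Resignature a specific unresolved volume by its original label
$resigArgs = $esxcli.storage.vmfs.snapshot.resignature.CreateArgs()
$resigArgs.volumelabel = "MyDatastore"
$esxcli.storage.vmfs.snapshot.resignature.Invoke($resigArgs)
```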