Another UNMAP post. I was working on updating my best practices script the other day and I realized that a lot of UNMAP configuration from a PowerCLI standpoint was not well documented, especially for vSphere 6.5, which introduces automatic UNMAP for VMFS. Automatic UNMAP is great. But what if someone turns it off? Or what if, for some reason, I want to disable it? Or I want to make sure it is on? Well, there are a lot of ways to do this, so let’s look at PowerCLI.
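For example, here is one way to check and flip that setting from PowerCLI through its esxcli interface. This is a minimal sketch, assuming a connected vCenter session, a 6.5 host, and a VMFS-6 datastore; the datastore name is a placeholder:

```powershell
# Grab an esxcli (V2) handle from any 6.5 host that sees the datastore
$esxcli = Get-EsxCli -VMHost (Get-VMHost | Select-Object -First 1) -V2

# Check the current automatic UNMAP (space reclamation) configuration
$getArgs = $esxcli.storage.vmfs.reclaim.config.get.CreateArgs()
$getArgs.volumelabel = "myDatastore"   # placeholder datastore name
$esxcli.storage.vmfs.reclaim.config.get.Invoke($getArgs)

# Turn automatic UNMAP on by setting the reclaim priority to "low"
# (setting it to "none" disables it)
$setArgs = $esxcli.storage.vmfs.reclaim.config.set.CreateArgs()
$setArgs.volumelabel = "myDatastore"
$setArgs.reclaimpriority = "low"
$esxcli.storage.vmfs.reclaim.config.set.Invoke($setArgs)
```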
Another UNMAP post, are you shocked? A common question that came up was: what volumes have dead space? Which datastores should I run UNMAP on?
My usual response was, well, it is hard to say. Dead space is introduced when you move or delete a VM. The array will not release the space until you either delete the physical volume, overwrite it, or issue UNMAP. Until vSphere 6.5, UNMAP for VMFS was not automatic; you had to run a CLI command to do it. So that leads back to the question: I have 100 datastores, which ones should I run it on?
So to find out, you need to know two things (a PowerCLI sketch follows the list):
- How much space the file system reports as currently being used.
- How much space the array is physically storing for the volume hosting that file system.
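Both numbers are scriptable. Here is a rough sketch of the comparison: the VMFS side uses standard PowerCLI cmdlets, while the array side is shown with the Pure Storage PowerShell SDK (the SDK cmdlet and property names here are assumptions, so adjust for your array):

```powershell
# File system view: how much space VMFS reports as used
$ds = Get-Datastore -Name "myDatastore"        # placeholder datastore name
$vmfsUsedGB = $ds.CapacityGB - $ds.FreeSpaceGB

# Array view (assumed Pure Storage PowerShell SDK calls; names are placeholders):
# $fa  = New-PfaArray -EndPoint "array.example.com" -ApiToken $token -IgnoreCertificateError
# $vol = Get-PfaVolumeSpaceMetrics -Array $fa -VolumeName "myDatastore-vol"
# $arrayUsedGB = $vol.volumes / 1GB             # unique physical space for the volume

# Dead space is roughly the gap between the two (data reduction will skew
# this, since the array number is post-reduction):
# $deadSpaceGB = $vmfsUsedGB - $arrayUsedGB
```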
A question recently came up on the Pure Storage Community Forum about VMFS capacity alerts that said, to paraphrase:
“I am constantly getting capacity threshold (75%) alerts on my VMFS volumes but when I look at my FlashArray volume used capacity it is nowhere near that in used space. What can I do to make the VMware number closer to the FlashArray one so I don’t get these alerts?”
This question really boils down to: what is the difference between these numbers, and how do I handle it? So, let’s dig into this. Continue reading VMFS Capacity Monitoring in a Data Reducing World
Quick post. Recently I had the pleasure, alongside Cormac Hogan, of being invited onto the Virtually Speaking Podcast, hosted by Pete Flecha (Technical Marketing for VMware’s Storage and Availability products, with a focus on VVols) and John Nicholson (Technical Marketing for VMware’s Storage and Availability products, with a focus on vSAN).
I had a lot of fun; we spoke about the new features of vSphere 6.5 from a core storage standpoint, much of which I have been posting about in recent days: UNMAP, VMFS-6, etc. This invitation came out of our work writing the “What’s New in Core Storage in vSphere 6.5” white paper.
Check out the podcast here:
I posted shortly after ESXi 6.0 came out explaining how to do in-guest UNMAP with Windows. See the original post here:
The high-level workflow if you don’t want to read the post is:
- You delete a file in Windows
- Run Disk Optimizer to reclaim the space (see the example after this list)
- Windows issues UNMAP down to the underlying virtual disk
- ESXi shrinks the virtual disk
- If EnableBlockDelete is enabled on the ESXi hosts, ESXi will issue UNMAP to reclaim the space on the array (a PowerCLI check for this setting follows the requirements list below)
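Step two does not require the GUI; inside the guest, the built-in Optimize-Volume cmdlet does the same thing (the drive letter here is just an example):

```powershell
# Run inside the Windows guest after deleting files; -ReTrim makes Windows
# send UNMAP/TRIM for the freed blocks
Optimize-Volume -DriveLetter E -ReTrim -Verbose
```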
This had a few requirements:
- ESXi 6.0+
- VM hardware version 11+
- Thin virtual disk
- CBT cannot be enabled (though this restriction is removed in ESXi 6.5; see this post)
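On the host side, the EnableBlockDelete setting from the last workflow step is easy to verify in bulk. A minimal PowerCLI sketch, assuming a connected vCenter session:

```powershell
# VMFS3.EnableBlockDelete must be 1 for ESXi to translate guest UNMAPs
# into UNMAPs against the array
foreach ($esx in Get-VMHost) {
    $setting = Get-AdvancedSetting -Entity $esx -Name "VMFS3.EnableBlockDelete"
    if ($setting.Value -ne 1) {
        $setting | Set-AdvancedSetting -Value 1 -Confirm:$false
    }
}
```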
This is the fourth in my series of what’s new in ESXi 6.5 storage. Here are the previous posts:
Here is another post on vSphere 6.5 UNMAP! So many improvements, and this is a big one for many users. It certainly makes me happy. Previously, in vSphere 6.0.x, where in-guest space reclamation was introduced, enabling Changed Block Tracking (CBT) on a virtual disk blocked the guest OS from issuing UNMAP to that disk, and therefore prevented it from leveraging the goodness UNMAP provides. Rumor has it that this undesirable behavior is gone in vSphere 6.5…
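If you want to see which of your VMs have CBT turned on before testing this, here is a quick way to check (this reads the vSphere API’s ChangeTrackingEnabled config property through PowerCLI):

```powershell
# List VMs with Changed Block Tracking enabled
Get-VM | Where-Object { $_.ExtensionData.Config.ChangeTrackingEnabled } |
    Select-Object Name
```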
Let me start this post off with saying that the “What’s new in vSphere 6.5 Storage” white paper has been officially published and can be read here:
So anyway, read that document for a high-level overview of all of the new features and enhancements. Previously, I have written two posts in this series:
This is a short post; mainly I wanted to share the white paper. But it is important to note that VMware is still marching forward with improving VMFS and virtual disk flexibility, so I wanted to highlight a new enhancement: thin virtual disk hot extension.
Prior to vSphere 6.5, thin virtual disks could be hot-extended, but there were limits. The main one: if the extend operation brought the VMDK size to larger than 2 TB (or the VMDK was already 2 TB), the operation was not permitted:
So this is fixed in vSphere 6.5! And the nice thing is that it requires neither VMFS 6 nor the latest version of virtual machine hardware. Just hosting the VM on a 6.5 host provides this functionality:
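For instance, a hot extend past the old 2 TB wall is now just a normal resize. A sketch in PowerCLI, with placeholder VM and disk names, run while the VM is powered on:

```powershell
# Grow a thin virtual disk to 3 TB with the VM running
Get-HardDisk -VM "myVM" |
    Where-Object { $_.Name -eq "Hard disk 1" } |
    Set-HardDisk -CapacityGB 3072 -Confirm:$false
```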
Sweet! But this really just reinforces my view that there are few remaining reasons not to use thin virtual disks with the latest releases of vSphere. They are so much more flexible, and a lot of engineering is going into making them better. Not much work is being done on thick-type virtual disks. Look for an upcoming blog on some performance enhancements as well.
My second post in my vSphere 6.5 series, the first being:
One of the new features from a core storage perspective is a new version of VMFS. In vSphere 6.5, VMware has released VMFS 6, the first major update of VMFS in years (VMFS 5 shipped in 2011). The changes are not earth-shattering, but a lot of pain points have been removed and A LOT of work has been put into VMFS 6 to improve concurrency of operations and speed up certain procedures. The first thing I want to mention is unresolved volume handling. Continue reading What’s new in ESXi 6.5 Storage Part II: Resignaturing
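As a teaser for that unresolved-volume discussion: you can list the VMFS copies (snapshots or replicas) a host currently sees as unresolved through esxcli. Here is a sketch via PowerCLI, with a placeholder hostname:

```powershell
# Show unresolved VMFS volumes (copies awaiting mount or resignature)
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi-01.example.com") -V2
$esxcli.storage.vmfs.snapshot.list.Invoke()
```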
So as you might be aware, vSphere 6.5 just went GA.
There is quite a bit of new stuff in this release and there have certainly been quite a few blogs concerning the flagship features. I want to take some time to dive into some new core storage features that might be somewhat less heralded. Let’s start with my favorite topic. UNMAP. Continue reading What’s new in ESXi 6.5 Storage Part I: UNMAP
First off, there already is a built-in workflow for adding a new virtual disk, so this isn’t exactly groundbreaking knowledge, but I think it is helpful to understand how it is constructed. Furthermore, most of the existing posts and community articles out there assume way too much about one’s knowledge of reading the API guide and understanding what is needed.
So let’s boil it down to only what you need to know to create default, commonly used virtual disks. If you want more advanced configurations, this should give you a good starting point. Knowing the basics makes it much easier to edit and change things.
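As a point of reference for what “default” means here, this is the whole operation in PowerCLI; the vRO workflow assembles the equivalent VirtualDeviceConfigSpec under the covers (the VM name is a placeholder):

```powershell
# Add a 100 GB thin virtual disk to an existing VM
New-HardDisk -VM (Get-VM -Name "myVM") -CapacityGB 100 -StorageFormat Thin
```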
I will write this post on adding a new virtual disk and next I will write one on removing one. Continue reading Creating a new Virtual/Hard Disk with vRealize Orchestrator