Purity 5.1 introduced a variety of new features on the FlashArray, such as CloudSnap to NFS and volume throughput limits, along with a number of internal enhancements. I’d like to start this series with one of them.
VAAI (vSphere Storage APIs for Array Integration) includes a variety of offloads that allow the underlying array to do certain storage-related tasks better (faster, more efficiently, or both) than ESXi can do them itself. One of these offloads is called Block Zero, which leverages the SCSI command WRITE SAME. WRITE SAME is a SCSI operation that tells the storage to write a given pattern, in this case zeros. So instead of issuing possibly terabytes of zeros itself, ESXi issues just a few hundred or thousand small WRITE SAME I/Os and the array takes care of the zeroing. This greatly speeds up the process and also significantly reduces the impact on the SAN.
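If you want to confirm that Block Zero (and the other offloads) are actually in play on a host, you can query VAAI status with esxcli. A quick sketch — the device identifier below is a placeholder; substitute one from `esxcli storage core device list`:

```shell
# Per-device VAAI status; "Zero Status: supported" indicates WRITE SAME is available
# (the naa identifier here is a placeholder, not a real device):
esxcli storage core device vaai status get -d naa.XXXXXXXXXXXXXXXX

# Block Zero is toggled host-wide via this advanced setting; 1 = enabled (the default):
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
```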
WRITE SAME is used in quite a few places, but the most commonly encountered scenarios are:
vSphere 6.7 core storage “what’s new” series:
VMware has continued to improve and refine automatic UNMAP in vSphere 6.7. In vSphere 6.5, VMFS-6 introduced automatic space reclamation, so that you no longer had to run UNMAP manually to reclaim space after virtual disks or VMs had been deleted.
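As a quick illustration of what is tunable here, the per-datastore reclaim settings can be inspected and changed with esxcli. A sketch, assuming a datastore labeled "DS01" (a placeholder), and that your hosts are on 6.7, where the fixed-bandwidth reclaim method was added:

```shell
# Show the current automatic reclaim (UNMAP) configuration for a datastore:
esxcli storage vmfs reclaim config get --volume-label DS01

# In 6.7, switch from priority-based reclaim to a fixed rate (here 100 MB/s):
esxcli storage vmfs reclaim config set --volume-label DS01 --reclaim-method fixed --reclaim-bandwidth 100
```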
Continue reading What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP
Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce the concept of multiple levels of thin provisioning AND data reduction into it, all usage is not equal (does it compress well? does it dedupe well? is it zeros?).
This multi-part series will break it down in the following sections:
- VMFS and thin virtual disks
- VMFS and thick virtual disks
- Thoughts on VMFS Capacity Reporting
- VVols and capacity reporting
- VVols and UNMAP
Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.
NOTE: Examples in this series are given from a FlashArray perspective, so mileage may vary depending on the type of array you have. The VMFS layer and above, though, are the same for all arrays. This is the benefit of VMFS: it abstracts the physical layer. It is also the downside, as I will describe in these posts.
Continue reading VMware Capacity Reporting Part V: VVols and UNMAP
Pure Storage recently introduced support for active/active replication on the FlashArray in a feature called ActiveCluster. A common question that comes up for active/active solutions alongside VMware is: how is VAAI supported?
It is asked because VAAI offloads are often tricky, if not impossible, to support in an active/active scenario: the storage platform has to perform each operation not just on one array, but on both. So XCOPY, which offloads VM copying, is often not supported. Let’s take a look at VAAI with ActiveCluster, specifically these four features:
- Hardware Assisted Locking (ATOMIC TEST & SET)
- Zero Offload (WRITE SAME)
- Space Reclamation (UNMAP)
- Copy Offload (XCOPY)
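For reference, three of the four offloads above map to host-wide advanced settings you can check from esxcli (in-guest UNMAP is governed separately, per VMFS and via EnableBlockDelete). A minimal sketch:

```shell
# 1 = offload enabled (the default for all three):
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking   # ATS
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit  # WRITE SAME
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove  # XCOPY
```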
Continue reading ActiveCluster and VAAI
With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don’t have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous) you lose the instant gratification of reclamation.
So you do find yourself wondering, did it actually reclaim anything?
Besides looking at the array and seeing space reclaimed, how can I see from ESXi if my space was reclaimed?
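One option is the host's vsish interface, which tracks auto-UNMAP state per volume. This is a sketch based on the vsish nodes present on my 6.5 hosts; the volume label "DS01" is a placeholder, and vsish is an unsupported diagnostic tool, so treat the paths as an assumption:

```shell
# List VMFS volumes the kernel is tracking for automatic UNMAP:
vsish -e ls /vmkModules/vmfs3/auto_unmap/volumes/

# Dump the reclaim statistics for one volume:
vsish -e get /vmkModules/vmfs3/auto_unmap/volumes/DS01/properties
```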
Continue reading Monitoring Automatic VMFS-6 UNMAP in ESXi
EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature that debuted in 5.0 and was then pulled in 5.0 U1.
The description of the setting back in 5.0 was “Enable VMFS block delete”. The setting was then hidden and made defunct (it did nothing when you turned it off or on) until ESXi 6.0. The description then changed to “Enable VMFS block delete when UNMAP is issued from guest OS”. Continue reading In-Guest UNMAP, EnableBlockDelete and VMFS-6
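For the curious, the setting is easy to inspect and flip from esxcli (it lives under the /VMFS3/ advanced option tree):

```shell
# Check the current value (0 = disabled, which is the default; 1 = enabled):
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete

# Enable processing of in-guest UNMAP on VMFS:
esxcli system settings advanced set -i 1 -o /VMFS3/EnableBlockDelete
```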
I updated my UNMAP PowerCLI script a month or so ago and improved quite a few things, but I also removed the hard-coded variables and replaced them with interactive input. That works for some, but for many it did not.
Note: Move to VMFS-6 in vSphere 6.5 and you don’t have to worry about this UNMAP business anymore 🙂
Essentially, quite a few people want to run it as a scheduled task in Windows, and if it requires input that just isn’t going to work out of the box. So I have created an unattended version of the script. For details read on.
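To give a sense of how the unattended version would be wired up, a Windows scheduled task can invoke it non-interactively. A sketch only — the task name, schedule, and script path below are placeholders, not the script's actual location:

```shell
:: Run the unattended UNMAP script every Sunday at 2 AM
:: (C:\scripts\unmap-unattended.ps1 is a hypothetical path):
schtasks /Create /TN "FlashArray UNMAP" /SC WEEKLY /D SUN /ST 02:00 ^
  /TR "powershell.exe -NonInteractive -ExecutionPolicy Bypass -File C:\scripts\unmap-unattended.ps1"
```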
Note: I will continue to update the script (bugs, features, etc.) but will note them on my other blog post about the script here:
Pure Storage FlashArray UNMAP PowerCLI Script for VMware ESXi
I will only update this post if the unattended version changes in a way that makes these instructions wrong. Continue reading Unattended VMFS UNMAP Script
This is the second part of this post. In the first post, I explained the fix and how it affected Windows. In this post, we will overview how the change affects Linux-based virtual machines. See the original post here:
In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows
I posted about In-Guest UNMAP with Linux VMs in this post:
What’s new in ESXi 6.5 Storage Part I: UNMAP
One thing you may note there is that automatic UNMAP worked quite well, but manual UNMAP, like fstrim, did not. So let’s revisit fstrim now that this patch is out. Continue reading In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux
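For reference, this is what a manual in-guest trim looks like on a Linux VM; the mount point is just an example:

```shell
# Show which filesystems/devices advertise discard (TRIM/UNMAP) support:
lsblk --discard

# Trim the free space on a mounted filesystem; -v reports how many bytes
# were trimmed (requires root; /mnt/data is an example mount point):
sudo fstrim -v /mnt/data
```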
As you might’ve seen, Cormac Hogan just posted about an UNMAP fix that was recently released. This is a fix I have been eagerly awaiting for some time, so I am very happy to see it out. And thankfully it does not disappoint.
First off, some official information:
Manual patch download:
Or you can run esxcli, if your ESXi host has internet access, to download and install it automatically:
esxcli software profile update -p ESXi-6.5.0-20170304001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Continue reading In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows
vSphere 6.5 introduced VMFS-6, which came with the highly-desired automatic UNMAP. Yay! But some users still might need to run manual UNMAP on it for some reason. Immediate reasons that come to mind are:
- They disabled automatic UNMAP on the VMFS
- They need to get space back quickly and don’t have time to wait
When you run manual UNMAP, one of the options you can specify is the block count. Since ESXi 5.5, the UNMAP process iterates through the VMFS, issuing reclaim to one small segment of free space at a time until UNMAP has been issued to all of it. The block count dictates how big that segment is. By default ESXi will use 200 blocks (which is 200 MB). Continue reading Issue with Manual VMFS-6 UNMAP and Block Count
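For context, this is the command in question; the datastore label "DS01" is a placeholder:

```shell
# Manually reclaim free space on a VMFS datastore. --reclaim-unit (-n) is the
# block count reclaimed per iteration; 200 is the default if omitted:
esxcli storage vmfs unmap --volume-label DS01 --reclaim-unit 200
```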