Tag Archives: esxi

Monitoring Automatic VMFS-6 UNMAP in ESXi

With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don’t have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous), you lose the instant gratification of reclamation.

So you do find yourself wondering, did it actually reclaim anything?

Besides looking at the array and seeing the space come back, how can I tell from ESXi whether my space was actually reclaimed?
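
If you just want to confirm the datastore is set up for automatic reclamation, and then watch for UNMAP activity from the host side, something like this is a reasonable start (the datastore name is just an example):

# Check the automatic space reclamation settings on a VMFS-6 datastore
esxcli storage vmfs reclaim config get -l MyVMFS6Datastore

# Watch UNMAP being issued live: in esxtop's device view (u), add the VAAI
# stats fields and keep an eye on the DELETE counters
esxtop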

Continue reading Monitoring Automatic VMFS-6 UNMAP in ESXi

In-Guest UNMAP, EnableBlockDelete and VMFS-6

EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature introduced in 5.0 (and then eventually canned in 5.0 U1).

The description of the setting back in 5.0 was “Enable VMFS block delete”. The setting was then hidden and made defunct (it did nothing whether you turned it on or off) until ESXi 6.0. The description then changed to “Enable VMFS block delete when UNMAP is issued from guest OS”.

Continue reading In-Guest UNMAP, EnableBlockDelete and VMFS-6
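
If you want to check or flip EnableBlockDelete yourself, the esxcli form looks like this (it lives under the VMFS3 advanced options):

# Show the current value of EnableBlockDelete
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete

# Enable it (use -i 0 to disable)
esxcli system settings advanced set -i 1 -o /VMFS3/EnableBlockDelete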

NMP Multipathing rules for the FlashArray are now default

As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017), and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one that I wanted to call out specifically and immediately.

Round Robin with an IO Operations Limit of 1 is now the default in ESXi for the Pure Storage FlashArray! This means that you no longer need to create a custom SATP rule when provisioning a new host or adding your first FlashArray into an existing environment.

Continue reading NMP Multipathing rules for the FlashArray are now default
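
For reference, the custom rule that used to be required on earlier releases looks roughly like this, along with a quick way to confirm what a host already has (treat it as an example, not something you need to run on 6.5 U1 and later):

# The old custom SATP rule: Round Robin with an IO Operations Limit of 1
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

# Verify which FlashArray rules exist on the host
esxcli storage nmp satp rule list | grep -i PURE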

Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

Sorry, the title is a bit of a mouthful.

I have written some posts on iSCSI setup in the past:

Setting up iSCSI with VMware ESXi and the FlashArray

Configuring iSCSI CHAP in VMware with the FlashArray

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

These have covered various parts of the process, but primarily setup with standard vSwitches, which, at least in larger environments, is generally not the norm. Distributed vSwitches are. I have seen a few posts on doing this with the old C# client, but not with the vSphere Web Client. Reference those posts here:

http://everything-virtual.com/installing-the-home-lab/installing-the-home-lab-creating-and-configuring-an-iscsi-distributed-switch-for-vmware-multipathing/

https://www.yelof.com/2011/07/13/dr-iscsi-or-how-i-learned-to-stop-worrying-and-love-virtual-distributed-switches-on-vsphere-v5/

So given the number of questions I have received on it, it is probably worth putting pen to paper. Nothing profound here, basically a walkthrough.

This is of course assuming you are doing port binding. If you are not, then just the standard software iSCSI setup (as described in the first post above) is needed.
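
If you prefer the command line for the port binding step itself, the esxcli equivalent looks something like this (the vmkernel ports and adapter name are just examples from a lab):

# Enable the software iSCSI adapter if it is not already enabled
esxcli iscsi software set --enabled=true

# Bind the iSCSI vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2

# Confirm the bindings
esxcli iscsi networkportal list -A vmhba64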

Continue reading Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

ESXi and the Missing LUNs: 256 or Higher

A customer pinged me the other day and said they could not see a volume on their ESXi host, which was running ESXi 6.5. All of the normal stuff checked out, but the volume was nowhere to be seen. What gives? Well, it turned out the LUN ID was over 255 and ESXi couldn’t see it. Let me explain.

The TL;DR is that ESXi does not support LUN IDs above 255 for your average device.

*It’s not actually aliens, it is perfectly normal SCSI you silly man.
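
If you are troubleshooting a volume like this, a quick way to see which LUN IDs the host actually has on its paths is something like the following (the device identifier is a placeholder):

# List the paths for a specific device; the LUN field shows the LUN ID
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx

# Or scan every path and eyeball the LUN IDs in use
esxcli storage core path list | grep "LUN:"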

Continue reading ESXi and the Missing LUNs: 256 or Higher

Documentation Update, Best Practices and vRealize

So a few updates. I just updated my vSphere Best Practices guide and it can be found here:

Download Best Practices Guide PDF

I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. Furthermore, there are a few other documents I have written and published that I want to call out:

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

Continue reading Documentation Update, Best Practices and vRealize

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

This is the second part of this post. In the first post, I explained the fix and how it affected Windows. In this post, we will go over how the change affects Linux-based virtual machines. See the original post here:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

I posted about In-Guest UNMAP with Linux VMs in this post:

What’s new in ESXi 6.5 Storage Part I: UNMAP

One thing you can note is that automatic UNMAP works quite well, but manual UNMAP, like fstrim, did not. So let’s revisit fstrim now that this patch is out.

Continue reading In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux
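
For anyone following along in a Linux guest, the manual reclaim in question is just fstrim against a mounted filesystem, for example (the mount point is made up):

# Confirm the virtual disk reports discard (UNMAP) support
lsblk -D

# Manually trim free space on the filesystem and report how much was trimmed
fstrim -v /mnt/unmaptest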

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

As you might’ve seen, Cormac Hogan just posted about an UNMAP fix that has been released. This is a fix I have been eagerly awaiting for some time, so I am very happy to see it. And thankfully it does not disappoint.

First off, some official information:

Release notes:

https://kb.vmware.com/kb/2148989

Manual patch download:

https://my.vmware.com/group/vmware/patch#search

Or, if your ESXi host has internet access, you can run esxcli to download and install it automatically:

esxcli software profile update -p ESXi-6.5.0-20170304001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
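
Once the update completes and the host has been rebooted, a quick sanity check of the running build is:

esxcli system version get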

Continue reading In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

Understanding VMware ESXi Queuing and the FlashArray

So I am in the middle of updating my best practices guide for vSphere on FlashArray, and one of the topics I want to provide better guidance around is ESXi queue management. This breaks down into a few things:

  • Array volume queue depth limit
  • Datastore queue depth limit
  • Virtual Machine vSCSI Adapter queue depth limit
  • Virtual Disk queue depth limit

I have had more than a few questions lately about handling this, either general queries or performance escalations. Generally, from what I have found, it comes down to a fundamental understanding of how ESXi queuing works, and how the FlashArray plays with it. So I put together a blog post that takes a use case and walks through solving a performance problem, explaining concepts along the way.
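
To ground the terminology before we dive in, this is how I usually look at the per-device queue limits ESXi is enforcing (the device identifier and the value below are purely illustrative, not a recommendation):

# Shows "Device Max Queue Depth" and "No of outstanding IOs with competing worlds" (DSNRO)
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx

# DSNRO can be changed per device (example only; see the conclusion before touching this)
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 32

# esxtop's device view (u) shows DQLEN, ACTV, QUED and KAVG/cmd in real time
esxtop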

Please note:

  • This is a simple example to explain how queuing works in ESXi
  • Mileage will vary depending on your workload and configuration
  • This workload is targeted specifically to make relationships easier to understand
  • PLEASE do not make changes in your environment, at least not until you have read my conclusion at the end. And frankly, not without direct guidance from VMware support.

I am sorry, this is a long one. But hopefully informative!
Continue reading Understanding VMware ESXi Queuing and the FlashArray

Issue with Manual VMFS-6 UNMAP and Block Count

So vSphere 6.5 introduced VMFS-6, which came with the highly-desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP on it. Immediate reasons that come to mind are:

  • They disabled automatic UNMAP on the VMFS for some reason
  • They need to get space back quickly and don’t have time to wait

When you run manual UNMAP, one of the options you can specify is the block count. Since 5.5, the UNMAP process iterates through the VMFS by issuing reclaim to one small segment of free space at a time, until UNMAP has been issued to all of the free space. The block count dictates how big each segment is. By default, ESXi will use 200 blocks (which is 200 MB).

Continue reading Issue with Manual VMFS-6 UNMAP and Block Count
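
For context, the manual UNMAP being discussed is the esxcli form, where -n is that block count (the datastore name is just an example):

# Reclaim free space on a VMFS, 200 blocks (200 MB) per iteration
esxcli storage vmfs unmap -l MyVMFS6Datastore -n 200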