As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017) and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one that I wanted to call out specifically and immediately.
Round Robin and an IO Operations Limit of 1 are now the default in ESXi for the Pure Storage FlashArray! This means that you no longer need to create a custom SATP rule when provisioning a new host or adding your first FlashArray into an existing environment.
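For hosts older than 6.5 Update 1, the rule we have long recommended creating by hand looks like the following. This is a sketch per the documented FlashArray best practices, so verify against the current guide before applying it:

# Custom SATP rule for pre-6.5 U1 hosts: Round Robin with an IO Operations
# Limit of 1 for FlashArray devices (applies to newly discovered devices):
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

Continue reading NMP Multipathing rules for the FlashArray are now default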
Sorry, the title is a bit of a mouthful.
I have written some posts on iSCSI in the past, around setup:
Setting up iSCSI with VMware ESXi and the FlashArray
Configuring iSCSI CHAP in VMware with the FlashArray
Another look at ESXi iSCSI Multipathing (or a Lack Thereof)
These have covered various aspects, but primarily setup with standard vSwitches, which, at least in larger environments, is generally not the norm; distributed vSwitches are. I have seen a few posts on doing this with the old C# client, but not with the vSphere Web Client.
So, given the number of questions I have received on it, it is probably worth putting pen to paper. Nothing profound here; basically a walkthrough.
This of course assumes you are using port binding. If you are not, then only the standard software iSCSI setup (as described in the first post above) is needed.
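For reference, once the vmkernel ports exist, the port binding step itself reduces to a couple of esxcli calls; the adapter and port names here are examples, so check yours first with "esxcli iscsi adapter list" and "esxcli network ip interface list":

# Enable the software iSCSI adapter if it is not already enabled
esxcli iscsi software set --enabled=true
# Bind each compliant vmkernel port (one active uplink each) to the
# software iSCSI adapter; vmhba64, vmk1 and vmk2 are example names:
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2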
Continue reading Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client
A customer pinged me the other day and said they could not see a volume on their ESXi host, which was running ESXi 6.5. All of the normal stuff checked out, but the volume was nowhere to be seen. What gives? Well, it turned out the LUN ID was over 255 and ESXi could not see it. Let me explain.
The TLDR is that ESXi does not support LUN IDs above 255 for your average device.
UPDATE (8/15/2017): I have been meaning to update this post for a while. Here are the rules:
- ESXi 6.5 does support LUN IDs higher than 255, but only if those addresses are configured using peripheral LUN addressing. If your array uses flat addressing, which is common for higher LUN IDs, it will not work.
- ESXi 6.7 now supports flat LUN addressing, so this problem goes away entirely.
See this post for more information on ESXi 6.7 flat support.
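To see why 255 is the cutoff: peripheral LUN addressing carries the LUN in the low byte of the two-byte first-level LUN field, so it tops out at 255, while flat addressing sets the top two bits to 01 and uses the remaining 14 bits for the LUN. A quick bit-math illustration (plain shell arithmetic, nothing ESXi-specific):

# Flat addressing encodes LUN 300 as 0x412C: address-method bits 01,
# then a 14-bit LUN value:
printf '0x%04X\n' $(( (1 << 14) | 300 ))   # prints 0x412C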
*It’s not actually aliens, it is perfectly normal SCSI you silly man.
Continue reading ESXi and the Missing LUNs: 256 or Higher
So a few updates. I just updated my vSphere Best Practices guide and it can be found here:
Download Best Practices Guide PDF
I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth calling out. Furthermore, there are a few other documents I have written and published that I want to mention.
- FlashArray Plugin for vRealize Orchestrator User Guide
- Implementing FlashArray in a vRealize Private Cloud
Continue reading Documentation Update, Best Practices and vRealize
This is the second part of this post. In the first part, I explained the fix and how it affected Windows. In this post, we will look at how the change affects Linux-based virtual machines. See the original post here:
In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows
I posted about In-Guest UNMAP with Linux VMs in this post:
What’s new in ESXi 6.5 Storage Part I: UNMAP
One thing you may note there is that automatic UNMAP works quite well, but manual UNMAP, like fstrim, did not. So let's revisit fstrim now that this patch is out.
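As a refresher, this is the kind of guest-level check and manual reclaim I will be retesting; the device and mount point below are examples:

# Confirm the guest sees the virtual disk as discard-capable (non-zero
# DISC-GRAN/DISC-MAX values); /dev/sdb is an example device:
lsblk -D /dev/sdb
# Manually reclaim free space on a mounted filesystem:
sudo fstrim -v /mnt/data

Continue reading In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux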
As you might’ve seen, Cormac Hogan just posted about an UNMAP fix that was just released. This is a fix I have been eagerly awaiting for some time, so I am very happy to see it released. And thankfully it does not disappoint.
First off, some official information:
Manual patch download:
Or you can run esxcli, if your ESXi host has internet access, to download and install it automatically:
esxcli software profile update -p ESXi-6.5.0-20170304001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Continue reading In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows
So I am in the middle of updating my best practices guide for vSphere on FlashArray, and one of the topics I am looking to provide better guidance around is ESXi queue management. This breaks down to a few things (a quick way to inspect some of them from the command line follows this list):
- Array volume queue depth limit
- Datastore queue depth limit
- Virtual Machine vSCSI Adapter queue depth limit
- Virtual Disk queue depth limit
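As an aside, two of these limits are easy to eyeball per device with esxcli. The NAA identifier below is a placeholder (FlashArray volumes start with naa.624a9370), and per the caveats further down, do not change anything casually:

# Show a device's queue settings; look for "Device Max Queue Depth" and
# "No of outstanding IOs with competing worlds" (DSNRO) in the output:
esxcli storage core device list -d naa.624a9370xxxxxxxxxxxxxxxx
# DSNRO can be changed per device (again: read the caveats first):
esxcli storage core device set -d naa.624a9370xxxxxxxxxxxxxxxx -O 64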
I have had more than a few questions lately about handling this, either just general queries or performance escalations. Generally, from what I have found, it comes down to a fundamental understanding of how ESXi queuing works and how the FlashArray plays with it. So I put together a blog post with a use case, walking through solving a performance problem and explaining concepts along the way. A few caveats first:
- This is a simple example to explain how queuing works in ESXi
- Mileage will vary depending on your workload and configuration
- This workload is targeted specifically to make relationships easier to understand
- PLEASE do not make changes in your environment until you have at least read my conclusion at the end, and frankly not without direct guidance from VMware support.
I am sorry, this is a long one. But hopefully informative!
If you prefer a video, here is my 1 hr VMworld session that goes into depth on what I write below:
Continue reading Understanding VMware ESXi Queuing and the FlashArray
So vSphere 6.5 introduced VMFS-6, which came with the highly desired automatic UNMAP. Yay! But some users still might need to run manual UNMAP on it for some reason. Immediate reasons that come to mind are:
- They disabled automatic UNMAP on the VMFS for some reason
- They need to get space back quickly and don’t have time to wait
When you run manual UNMAP, one of the options you can specify is the block count. Since 5.5, the UNMAP process iterates through the VMFS, issuing reclaims to one small portion of the free space at a time, until UNMAP has been issued to all of it. The block count dictates how big each iteration is; by default ESXi uses 200 blocks (which is 200 MB).
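For reference, both the datastore and the block count are specified right on the esxcli command; the datastore name here is an example:

# Manual space reclamation on a VMFS-6 datastore; -n overrides the
# default reclaim unit of 200 blocks (200 MB) per iteration:
esxcli storage vmfs unmap -l MyVMFS6Datastore -n 2000

Continue reading Issue with Manual VMFS-6 UNMAP and Block Count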
Quick post. Recently I had the pleasure, alongside Cormac Hogan, of being invited onto the Virtually Speaking Podcast, hosted by Pete Flecha (Technical Marketing for VMware's Storage and Availability products, with a focus on VVols) and John Nicholson (Technical Marketing for VMware's Storage and Availability products, with a focus on vSAN).
Had a lot of fun; we spoke about the new features of vSphere 6.5 from a core storage standpoint, much of which I have been posting about in recent days: UNMAP, VMFS-6, etc. This invitation was due to our work writing the "What's New in Core Storage in vSphere 6.5" white paper.
Announcing vSphere 6.5 Core Storage white paper
Check out the podcast here:
This is the fourth in my series of what’s new in ESXi 6.5 storage. Here are the previous posts:
What’s new in ESXi 6.5 Storage Part I: UNMAP
What’s new in ESXi 6.5 Storage Part II: Resignaturing
What’s new in ESXi 6.5 Storage Part III: Thin hot extend
Here is another post about vSphere 6.5 UNMAP! So many improvements, and this is a big one for many users; it certainly makes me happy. Previously, in vSphere 6.0.x, when in-guest space reclamation was introduced, enabling Changed Block Tracking (CBT) on a given virtual disk blocked the guest OS from issuing UNMAP to that disk, and therefore prevented it from leveraging the goodness UNMAP provides. Rumor has it that this undesirable behavior continued in vSphere 6.5…
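If you want to check whether CBT is in play for a given VM and its disks, the flags live in the .vmx file; the datastore and VM names below are examples:

# From the ESXi shell: ctkEnabled controls CBT at the VM level and
# scsiX:Y.ctkEnabled per virtual disk:
grep -i ctkEnabled /vmfs/volumes/MyDatastore/MyVM/MyVM.vmx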
Continue reading What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support