Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

Sorry the title is a bit of a mouthful.

I have written some posts on iSCSI in the past, around setup:

Setting up iSCSI with VMware ESXi and the FlashArray

Configuring iSCSI CHAP in VMware with the FlashArray

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

These have covered various parts of the process, but primarily setup with standard vSwitches, which, at least in larger environments, is generally not the norm. Distributed vSwitches are. I have seen a few posts on doing this with the old C# client, but not with the vSphere Web Client. Reference those posts here:

http://everything-virtual.com/installing-the-home-lab/installing-the-home-lab-creating-and-configuring-an-iscsi-distributed-switch-for-vmware-multipathing/

https://www.yelof.com/2011/07/13/dr-iscsi-or-how-i-learned-to-stop-worrying-and-love-virtual-distributed-switches-on-vsphere-v5/

So given the number of questions I have received on the topic, it is probably worth putting pen to paper. Nothing profound here, basically a walkthrough.

This of course assumes you are doing port binding. If you are not, then only the standard software iSCSI setup (as described in the first post above) is needed.
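If you prefer to handle the binding step from the command line, the vmkernel ports can also be bound to the software iSCSI adapter with esxcli. This is just a minimal sketch, assuming the software iSCSI adapter is vmhba64 and the two vmkernel ports on the distributed switch are vmk1 and vmk2 (your adapter and port names will differ):

# Bind each iSCSI vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify the bindings took effect
esxcli iscsi networkportal list --adapter=vmhba64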

Continue reading “Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client”

ESXi and the Missing LUNs: 256 or Higher

A customer pinged me the other day and said they could not see a volume on their ESXi host, which was running ESXi version 6.5. All of the normal stuff checked out, but the volume was nowhere to be seen. What gives? Well, it turned out that the LUN ID was over 255 and ESXi could not see it. Let me explain.

The TL;DR is that ESXi does not support LUN IDs above 255 for your average device.

UPDATE (8/15/2017): I have been meaning to update this post for a while. Here are the rules:

ESXi 6.5 does support LUN IDs higher than 255, but only if those addresses are configured using peripheral LUN addressing. If your array uses flat addressing (which is common for higher LUN IDs), it will not work.

ESXi 6.7 now supports flat LUN addressing, so this problem goes away entirely.

See this post for more information on ESXi 6.7 flat support.
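If you are trying to determine whether this is the problem you are hitting, a couple of read-only checks from the host side can help. This is just a sketch; the device identifier below is a hypothetical placeholder:

# Disk.MaxLUN sets how high ESXi scans for LUN IDs on each target
# (a separate but related ceiling worth ruling out)
esxcli system settings advanced list -o /Disk/MaxLUN

# List the LUN number ESXi reports for each path to a device; a LUN the host
# cannot address will typically not show up here at all
esxcli storage core path list -d naa.xxxxxxxx | grep "LUN:"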

*It’s not actually aliens, it is perfectly normal SCSI you silly man.

Continue reading “ESXi and the Missing LUNs: 256 or Higher”

Documentation Update, Best Practices and vRealize

So a few updates. I just updated my vSphere Best Practices guide and it can be found here:

Download Best Practices Guide PDF

I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. Furthermore, there are a few documents I have written and published that I want to mention:

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

Continue reading “Documentation Update, Best Practices and vRealize”

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

This is the second part of this post. In the first post, I explained the fix and how it affected Windows. In this post, we will look at how the change affects Linux-based virtual machines. See the original post here:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

I posted about In-Guest UNMAP with Linux VMs in this post:

What’s new in ESXi 6.5 Storage Part I: UNMAP

One thing you may note there is that automatic UNMAP worked quite well, but manual UNMAP, like fstrim, did not. So let’s revisit fstrim now that this patch is out.

Continue reading “In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux”
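As a quick refresher, the manual reclamation in question is just the standard Linux trim pass run from inside the guest. A minimal sketch, assuming a filesystem mounted at a hypothetical /mnt/data:

# Check whether the virtual disk advertises discard support (non-zero DISC-GRAN/DISC-MAX)
lsblk -D

# Trim eligible free space on the filesystem and report how much was discarded
sudo fstrim -v /mnt/data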

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

As you might have seen, Cormac Hogan just posted about an UNMAP fix that was recently released. This is a fix I have been eagerly awaiting for some time, so I am very happy to see it released. And thankfully it does not disappoint.

First off, some official information:

Release notes:

https://kb.vmware.com/kb/2148989

Manual patch download:

https://my.vmware.com/group/vmware/patch#search

Or, if your ESXi host has internet access, you can run esxcli to download and install it automatically:

esxcli software profile update -p ESXi-6.5.0-20170304001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
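One aside, in case the update cannot reach the depot: outbound HTTP from the host may be blocked by its firewall. Assuming that is the issue, the httpClient ruleset can be enabled for the download and disabled again afterwards:

# Allow the host to make outbound HTTP/HTTPS requests
esxcli network firewall ruleset set -e true -r httpClient

# (run the profile update above)

# Close it back up when done
esxcli network firewall ruleset set -e false -r httpClient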

Continue reading “In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows”

Understanding VMware ESXi Queuing and the FlashArray

So I am in the middle of updating my best practices guide for vSphere on the FlashArray, and one of the topics I want to provide better guidance on is ESXi queue management. This breaks down to a few things:

  • Array volume queue depth limit
  • Datastore queue depth limit
  • Virtual Machine vSCSI Adapter queue depth limit
  • Virtual Disk queue depth limit

I have had more than a few questions lately about handling this, either as general queries or as performance escalations. Generally, from what I have found, it comes down to a fundamental understanding of how ESXi queuing works, and how the FlashArray plays with it. So I put together a blog post that takes a use case and walks through solving a performance problem, explaining concepts along the way.
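As a small preview of what is covered below, two of those limits can be inspected (read-only) on the host with esxcli. A minimal sketch; the device identifier is a hypothetical placeholder:

# Shows "Device Max Queue Depth" and "No of outstanding IOs with competing worlds"
# (the latter is the per-device limit applied when multiple VMs share the datastore)
esxcli storage core device list -d naa.xxxxxxxx | grep -iE "queue depth|outstanding"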

Please note:

  • This is a simple example to explain how queuing works in ESXi
  • Mileage will vary depending on your workload and configuration
  • This workload is targeted specifically to make relationships easier to understand
  • PLEASE do not make changes in your environment until you have at least read my conclusion at the end, and frankly not without direct guidance from VMware support.

I am sorry, this is a long one. But hopefully informative!

If you prefer a video, here is my one-hour VMworld session that goes into depth on what I write about below:

Continue reading “Understanding VMware ESXi Queuing and the FlashArray”

Issue with Manual VMFS-6 UNMAP and Block Count

So vSphere 6.5 introduced VMFS-6, which came with the highly desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP on it for some reason. Immediate reasons that come to mind are:

  • They disabled automatic UNMAP on the VMFS for some reason
  • They need to get space back quickly and don’t have time to wait

When you run manual UNMAP, one of the options you can specify is the block count. Since ESXi 5.5, the UNMAP process has iterated through the VMFS, issuing reclaim to one small segment of the volume at a time until UNMAP has been issued to all of the free space. The block count dictates how big each segment is. By default, ESXi will use 200 blocks (which is 200 MB).

Continue reading “Issue with Manual VMFS-6 UNMAP and Block Count”
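For context, manual UNMAP on VMFS-6 uses the same esxcli command as on VMFS-5, with the block count supplied via the reclaim-unit parameter. A minimal sketch with a hypothetical datastore name:

# Reclaim free space on the datastore, issuing UNMAP 200 blocks (200 MB) at a time
esxcli storage vmfs unmap -l MyVMFS6Datastore -n 200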

What’s New in vSphere 6.5 Storage vSpeaking Podcast

Quick post. Recently I had the pleasure of being invited, alongside Cormac Hogan, onto the Virtually Speaking Podcast, hosted by Pete Flecha (Technical Marketing for VMware’s Storage and Availability products, with a focus on VVols) and John Nicholson (Technical Marketing for VMware’s Storage and Availability products, with a focus on vSAN).

I had a lot of fun; we spoke about the new features of vSphere 6.5 from a core storage standpoint, much of which I have been posting about in recent days: UNMAP, VMFS-6, etc. This invitation came from our work writing the “What’s New in Core Storage in vSphere 6.5” white paper.

https://blogs.vmware.com/virtualblocks/2016/11/28/announcing-vsphere-6-5-core-storage-white-paper/

Check out the podcast here:

https://soundcloud.com/virtuallyspeakingpodcast/episode-34-vsphere-65-core-storage

What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support

This is the fourth in my series of what’s new in ESXi 6.5 storage. Here are the previous posts:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Here is another post on vSphere 6.5 UNMAP! So many improvements, and this is a big one for many users. It certainly makes me happy. Previously, in vSphere 6.0.x, when in-guest space reclamation was introduced, enabling Changed Block Tracking (CBT) on a given virtual disk blocked the guest OS from being able to issue UNMAP to that disk, and therefore prevented it from leveraging the goodness it provides. Rumor has it that this undesirable behavior continued in vSphere 6.5…

[Image: Lee Corso’s “Not so fast, my friend!”]
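If you want to check whether CBT is enabled on a VM before testing this yourself, one quick way is to look for the ctkEnabled flags in its .vmx file. A minimal sketch with a hypothetical datastore path:

# ctkEnabled / scsiX:Y.ctkEnabled entries set to "TRUE" mean CBT is on for the VM/disk
grep -i ctkenabled /vmfs/volumes/datastore1/myvm/myvm.vmx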

Continue reading “What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support”

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Let me start this post off by saying that the “What’s new in vSphere 6.5 Storage” white paper has been officially published and can be read here:

https://storagehub.vmware.com/#!/vsphere-core-storage/

I had the distinct pleasure of helping Cormac and Paudie with this paper. Thanks to both of them for including me and providing me with access to the engineers who wrote these features/enhancements!

So anyway, read that document for a high-level overview of all of the new features and enhancements. Previously, I have written two posts in this series:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

This is a short post; I mainly wanted to share the white paper, but it is important to note that VMware is still marching forward with improving VMFS and virtual disk flexibility. So I wanted to highlight a new enhancement: thin virtual disk hot extension.

Prior to vSphere 6.5, thin virtual disks could be hot extended, but there were limits. The main one was that if the extend operation brought the VMDK size to larger than 2 TB (or the VMDK was already 2 TB), the operation was not permitted:

[Screenshot: attempting the extend past 2 TB in vSphere 6.0 U2]

If the VM is turned on and I try to apply this configuration change, I get an error:

[Screenshot: the hot extend operation fails]

So this is fixed in vSphere 6.5! And the nice thing is that it does not require either VMFS 6 or the latest version of virtual machine hardware. Just hosting the VM on a 6.5 host will provide this functionality:

[Screenshot: hot extending a thin VMDK past 2 TB on a vSphere 6.5 host]

[Screenshot: the operation succeeds]

Sweet! But this really just reinforces my opinion that there are few remaining reasons not to use thin virtual disks with the latest releases of vSphere. They are so much more flexible, and a lot of engineering is going into making them better. Not much work is being done on thick-type virtual disks. Look for an upcoming blog post on some performance enhancements as well.