Category Archives: ESXi

ESXi and the Missing LUNs: 256 or Higher

A customer pinged me the other day and said they could not see a volume on their ESXi host, which was running ESXi 6.5. All of the normal stuff checked out, but the volume was nowhere to be seen. What gives? Well, it turned out that the LUN ID was over 255 and ESXi couldn’t see it. Let me explain.

The TL;DR is that ESXi does not support LUN IDs above 255 for your average device.

*It’s not actually aliens, it is perfectly normal SCSI, you silly man.
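
If you want to quickly check the LUN IDs your host is actually presenting, a rough PowerCLI sketch like the following works (it assumes an active Connect-VIServer session; the host name is just an example). The LUN ID is the number after the "L" in the runtime name:

  # Connect-VIServer must have been run first; "esxi-host-01" is an example host name
  Get-VMHost "esxi-host-01" | Get-ScsiLun -LunType disk |
      Select-Object CanonicalName, RuntimeName, CapacityGB
  # RuntimeName looks like vmhba1:C0:T2:L254; a volume given a LUN ID above 255
  # will not appear in this list at all on these ESXi releases, so check the LUN ID
  # assigned on the array side.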

Continue reading ESXi and the Missing LUNs: 256 or Higher

Documentation Update, Best Practices and vRealize

So a few updates. I just updated my vSphere Best Practices guide and it can be found here:

Download Best Practices Guide PDF

I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. Furthermore, there are a few other documents I have written and published that I want to call out:

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

Continue reading Documentation Update, Best Practices and vRealize

Understanding VMware ESXi Queuing and the FlashArray

So I am in the middle of updating my best practices guide for vSphere on FlashArray, and one of the topics I am looking into providing better guidance around is ESXi queue management. This breaks down into a few things (a quick read-only peek at a couple of them follows the list):

  • Array volume queue depth limit
  • Datastore queue depth limit
  • Virtual Machine vSCSI Adapter queue depth limit
  • Virtual Disk queue depth limit
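
As a quick taste of where some of these limits live, here is a rough PowerCLI sketch that lists the per-device queue settings via esxcli. It is purely read-only; the host name is illustrative and the exact property names can vary slightly by PowerCLI version:

  # Read-only peek at the per-device queue limits on one host ("esxi-host-01" is an example)
  $esxcli = Get-EsxCli -VMHost "esxi-host-01" -V2
  $esxcli.storage.core.device.list.Invoke() |
      Format-List Device, DeviceMaxQueueDepth, NoofoutstandingIOswithcompetingworlds
  # DeviceMaxQueueDepth is the HBA/device queue depth; the second property is DSNRO.
  # Property names above are as returned by recent PowerCLI releases and may differ slightly.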

I have had more than a few questions lately about handling this, either as general queries or as performance escalations. Generally, from what I have found, it comes down to a fundamental understanding of how ESXi queuing works and how the FlashArray plays with it. So I put together a blog post that walks through a use case and solves a performance problem, explaining the concepts along the way.

Please note:

  • This is a simple example to explain how queuing works in ESXi
  • Mileage will vary depending on your workload and configuration
  • This workload is targeted specifically to make relationships easier to understand
  • PLEASE do not make changes in your environment, at least not until you read my conclusion at the end, and frankly not without direct guidance from VMware support.

I am sorry, this is a long one. But hopefully informative!
Continue reading Understanding VMware ESXi Queuing and the FlashArray

Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware

A while back, shortly after ESXi 6.0 came out, I posted about how to do in-guest UNMAP with Windows. See the original post here:

Direct Guest OS UNMAP in vSphere 6.0

The high-level workflow if you don’t want to read the post is:

  1. You delete a file in Windows
  2. Run Disk Optimizer to reclaim the space
  3. Windows issues UNMAP to the filesystem
  4. ESXi shrinks the virtual disk
  5. If EnableBlockDelete is enabled on the ESXi hosts, ESXi will issue UNMAP to reclaim the space on the array

This had a few requirements:

  • ESXi 6.0+
  • VM hardware version 11+
  • Thin virtual disk
  • CBT cannot be enabled (though this restriction is removed in ESXi 6.5; see this post)
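
To make steps 2 and 5 of the workflow concrete, here is a minimal sketch of the two knobs involved, one inside the guest and one on the host. The drive letter and host name are just examples; treat this as illustrative only:

  # Step 2, inside the Windows guest: retrim the volume so Windows issues UNMAP ("E" is an example drive)
  Optimize-Volume -DriveLetter E -ReTrim -Verbose

  # Step 5, from PowerCLI: confirm EnableBlockDelete is on so ESXi passes the UNMAP down to the array
  Get-VMHost "esxi-host-01" | Get-AdvancedSetting -Name "VMFS3.EnableBlockDelete"
  # To enable it, append: | Set-AdvancedSetting -Value 1 -Confirm:$false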

Continue reading Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware

What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support

This is the fourth in my series of what’s new in ESXi 6.5 storage. Here are the previous posts:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Here is another post about vSphere 6.5 UNMAP! So many improvements, and this is a big one for many users. It certainly makes me happy. Previously, in vSphere 6.0.x, when in-guest space reclamation was introduced, enabling Changed Block Tracking (CBT) on a given virtual disk blocked the guest OS from issuing UNMAP to that disk, and therefore prevented it from leveraging the goodness UNMAP provides. Rumor has it that this undesirable behavior continued in vSphere 6.5…
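
If you want to check whether CBT is turned on for a given VM (and therefore whether, on 6.0.x, in-guest UNMAP would be blocked), a quick PowerCLI sketch along these lines works; the VM name is just an example:

  # Returns $true if Changed Block Tracking is enabled on the VM ("SQL-VM-01" is an example name)
  (Get-VM "SQL-VM-01").ExtensionData.Config.ChangeTrackingEnabled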

Continue reading What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Let me start this post off by saying that the “What’s new in vSphere 6.5 Storage” white paper has been officially published and can be read here:

https://storagehub.vmware.com/#!/vsphere-core-storage/

I had the distinct pleasure of helping Cormac and Paudie with this paper. Thanks to both of them for including me and providing me with access to the engineers who wrote these features/enhancements!

So anyway, read that document for a high-level overview of all of the new features and enhancements. Previously, I have written two posts in this series:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

This is a short post; I mainly wanted to share the white paper, but it is important to note that VMware is still marching forward with improving VMFS and virtual disk flexibility. So I wanted to highlight a new enhancement: thin virtual disk hot extension.

Prior to vSphere 6.5, thin virtual disks could be hot extended, but there were limits. The main one was that if the extend operation brought the VMDK size to larger than 2 TB (or the VMDK was already 2 TB), the operation was not permitted:

If the VM is turned on and I try to apply this configuration change, I get an error:

[Screenshots: attempting the hot extend past 2 TB on ESXi 6.0 U2 and the resulting error]

So this is fixed in vSphere 6.5! And the nice thing is that it does not require either VMFS 6 or the latest version of virtual machine hardware. Just hosting the VM on a 6.5 host will provide this functionality:

[Screenshots: the same hot extend attempt succeeding on an ESXi 6.5 host]
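
For reference, here is a rough PowerCLI sketch of doing the same hot extend from the command line, taking a thin virtual disk on a powered-on VM past the old 2 TB boundary. The VM name and sizes are examples; on a pre-6.5 host this is where the error above would appear:

  # Find the first thin-provisioned disk on the VM and hot extend it to 3 TB ("BigDataVM" is an example)
  $disk = Get-VM "BigDataVM" | Get-HardDisk |
      Where-Object { $_.StorageFormat -eq "Thin" } | Select-Object -First 1
  $disk | Set-HardDisk -CapacityGB 3072 -Confirm:$false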

Sweet! But this really just reinforces my thought that there are few remaining reasons not to use thin virtual disks with the latest releases of vSphere. They are so much more flexible, and a lot of engineering is going into making them better, while not much work is being done on thick-type virtual disks. Look for an upcoming blog post on some performance enhancements as well.


Automatic VMFS expansion with vCenter SNMP and vRealize Orchestrator

Virtual disk oversubscription is becoming increasingly common, and so is allowing people to provision their own VMs, so increasing datastore capacity is also an increasingly common operation. Thanks to the performance of flash, combined with ESXi features like VAAI ATS, expanding a VMFS is easy, and expanding a storage volume these days is easy too. But you still have to actually do it. What if I want to automate the process to respond to datastore capacity thresholds? There are a variety of ways to achieve this. Let’s look at doing it with vCenter SNMP alerts and vRealize Orchestrator workflows.

Continue reading Automatic VMFS expansion with vCenter SNMP and vRealize Orchestrator
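
As a quick taste of the trigger condition before you dive into the full post, here is a hedged PowerCLI illustration of finding datastores that have crossed a capacity threshold; the 20% free-space value is just an example:

  # Report datastores with less than 20% free space (the threshold is an example value)
  Get-Datastore | Where-Object { ($_.FreeSpaceGB / $_.CapacityGB) -lt 0.2 } |
      Select-Object Name, CapacityGB, FreeSpaceGB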

Updated FlashArray VMware Best Practices PowerCLI Scripts

I wrote a post recently on the updates made to the PowerCLI 6.3 R1 esxcli implementation, so the logical next step was to implement this new behavior in my PowerCLI scripts that use esxcli. I still have a few scripts to update, but my two best practice-related scripts are ready to go. The two scripts are:

  1. Script to check and set best practices. Download here.
  2. Script to just check best practices and list issues in a report. Download here.

While I was updating them for the esxcli changes, I figured I might as well improve them too, so there are quite a few changes for both. Let’s take a look.
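
For context, the esxcli change in question is the new "-V2" interface, where arguments are passed to Invoke() as a hashtable instead of positionally. Here is a minimal sketch of the pattern; the host name and device identifier are examples and the namespaces shown are simply illustrations of the calling convention:

  # New-style (V2) esxcli object: arguments are passed to Invoke() as a hashtable
  $esxcli = Get-EsxCli -VMHost "esxi-host-01" -V2
  $esxcli.storage.nmp.device.list.Invoke()

  # For commands that take parameters, CreateArgs() builds the hashtable for you
  $devArgs = $esxcli.storage.core.device.list.CreateArgs()
  $devArgs.device = "naa.624a9370..."   # example/truncated device identifier
  $esxcli.storage.core.device.list.Invoke($devArgs)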

Continue reading Updated FlashArray VMware Best Practices PowerCLI Scripts

Recent ESXi 6 Storage Bugs and the FlashArray

As you might be aware, there have been a few storage-related issues with ESXi 6.0 as of late:

Accidental PDL during dropped paths:

Storage PDL responses may not trigger path failover in vSphere 6.0 (2144657)

Host issues during smartd inquiries:

Issuing a 0x85 SCSI Command from a VMware ESXi 6.0 host results in a PDL error (2133286)

The question that comes up for the Pure Storage FlashArray is: are we susceptible? The short answer is no. Let’s explain why. Continue reading Recent ESXi 6 Storage Bugs and the FlashArray

Semi-transparent failover with VMFS and Active/Passive Replication

So in a blog series that I started a few weeks back (still working on finishing it), I wrote about managing snapshots and resignaturing of VMFS volumes. One of the posts was dedicated to why I would choose resignaturing over force mounting almost all of the time.

An obvious question after that post is, well, when would I want to force mount? There is one situation where I think it is a decent option: a failover scenario where the recovery site is the same site as the production site in terms of compute and vCenter, and the storage is what fails over to another array. This is a situation I see becoming increasingly common as network pipes get bigger.
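
For reference, both operations are exposed through esxcli, so a hedged PowerCLI sketch of the two paths looks roughly like this (the host name and volume label are examples; in that same-site failover case, the force mount keeps the original datastore identity):

  # List unresolved snapshot/replica VMFS volumes the host can see ("esxi-host-01" is an example)
  $esxcli = Get-EsxCli -VMHost "esxi-host-01" -V2
  $esxcli.storage.vmfs.snapshot.list.Invoke()

  # Option 1: force mount, keeping the original signature and datastore name ("Prod-DS01" is an example label)
  $esxcli.storage.vmfs.snapshot.mount.Invoke(@{volumelabel="Prod-DS01"})

  # Option 2: resignature, which mounts it as a new "snap-xxxxxxxx-<name>" datastore
  $esxcli.storage.vmfs.snapshot.resignature.Invoke(@{volumelabel="Prod-DS01"})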

Continue reading Semi-transparent failover with VMFS and Active/Passive Replication