Understanding VMware ESXi Queuing and the FlashArray

So I am in the middle of updating my best practices guide for vSphere on FlashArray, and one of the topics I want to provide better guidance on is ESXi queue management. This breaks down to a few things:

  • Array volume queue depth limit
  • Datastore queue depth limit
  • Virtual Machine vSCSI Adapter queue depth limit
  • Virtual Disk queue depth limit

I have had more than a few questions lately about handling this, either general queries or performance escalations. Generally, what I have found is that it comes down to a fundamental understanding of how ESXi queuing works and how the FlashArray plays with it. So I put together a blog post that walks through a use case, solves a performance problem, and explains the concepts along the way.
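
To make these terms concrete, here is a minimal, read-only PowerCLI sketch for checking the device-level limits. It assumes an existing Connect-VIServer session, the host name is a hypothetical placeholder, and the exact esxcli property names can vary slightly between ESXi builds:

    # Minimal read-only sketch with PowerCLI: per-device queue limits on one host
    # Assumes an existing Connect-VIServer session; "esx01.lab.local" is a hypothetical host name
    $vmhost = Get-VMHost -Name "esx01.lab.local"
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Mirrors "esxcli storage core device list". DeviceMaxQueueDepth is the device (HBA) queue
    # depth limit; "No of outstanding IOs with competing worlds" is DSNRO, the per-device limit
    # that applies when more than one VM shares the datastore. Property names may vary by build.
    $esxcli.storage.core.device.list.Invoke() |
        Where-Object { $_.Vendor -match "PURE" } |
        Select-Object Device, DeviceMaxQueueDepth, NoofoutstandingIOswithcompetingworlds |
        Format-Table -AutoSize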

Please note:

  • This is a simple example to explain how queuing works in ESXi
  • Mileage will vary depending on your workload and configuration
  • This workload is targeted specifically to make relationships easier to understand
  • PLEASE do not make changes in your environment until you have at least read my conclusion at the end, and frankly not without direct guidance from VMware support.

I am sorry, this is a long one. But hopefully informative!
Continue reading Understanding VMware ESXi Queuing and the FlashArray

Introducing the FlashArray Plugin for vRealize Orchestrator v1.0

This is a blog I have been waiting a long time to write. The past year and a half of my work has been heavily focused on building and improving our VMware vRealize integration at Pure Storage. Log Insight and Operations Manager integration already existed (analytics, etc.), so the next logical step was provisioning and orchestration: vRealize Orchestrator and Automation. The first step I took was using the built-in REST plugin in vRO to build a workflow package that customers could use to manage the FlashArray inside of vRO without much work on their own part.

I started to realize that a workflow package was not enough, especially when it comes to vRA Anything-As-A-Service integration. A big part of what a workflow package is missing is custom objects and inventory management, something a plugin can easily provide. So, without further ado, please meet the FlashArray vRO plugin! It is downloadable at the VMware Solution Exchange and fully certified by VMware and Pure Storage:

FlashArray vRO Download

My vRO plugin white paper
Continue reading Introducing the FlashArray Plugin for vRealize Orchestrator v1.0

VMFS Snapshots and the FlashArray Part VII: Restoring a VM

This is part 7 of this 8-part series. Questions around managing VMFS snapshots have been cropping up a lot lately, and I realized I didn't have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing so. So I put a long series out here about how to do all of it (a quick PowerCLI registration sketch for this part follows the series list below).

The series being:

  1. Mounting an unresolved VMFS
  2. Why not force mount?
  3. Why might a VMFS resignature operation fail?
  4. How to correlate a VMFS and a FlashArray volume
  5. How to snapshot a VMFS on the FlashArray
  6. How to mount a VMFS FlashArray snapshot
  7. Restoring a single VM from a FlashArray snapshot
  8. Restoring a single virtual disk from a FlashArray snapshot
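
To give a taste of the scripting side of part 7, below is a minimal PowerCLI sketch. It assumes the FlashArray snapshot copy of the datastore has already been resignatured and mounted (covered in part 6); the host, datastore, and VM names are hypothetical placeholders:

    # Minimal sketch: register a VM from a mounted, resignatured snapshot datastore,
    # then Storage vMotion it back to the original datastore
    # Host, datastore, and VM names are hypothetical placeholders
    $vmhost = Get-VMHost -Name "esx01.lab.local"
    $restoredVM = New-VM -VMFilePath "[snap-0a1b2c3d-VMFS-01] SQLVM01/SQLVM01.vmx" -VMHost $vmhost
    Move-VM -VM $restoredVM -Datastore (Get-Datastore -Name "VMFS-01")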

Continue reading VMFS Snapshots and the FlashArray Part VII: Restoring a VM

PowerShell GUI VMware and FlashArray Snapshot Management Tool

There are a variety of methods for managing VMware objects (VMFS volumes, VMs, VMDKs and RDMs) and the underlying snapshots used to recover or clone them. But often I get asked if I have a PowerShell (PowerCLI) script to do one or all of them. I have a bunch on my GitHub, but I decided a week or so ago to put something a bit more robust together. At first I was making it a standard interactive script, but it morphed into a GUI, using combo-boxes and so on:
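
To give a sense of the approach (a bare-bones sketch only, not the actual tool; the datastore names are placeholders), a Windows Forms combo-box in PowerShell looks roughly like this:

    # Bare-bones Windows Forms sketch: a form with a combo-box of datastore names
    Add-Type -AssemblyName System.Windows.Forms
    $form = New-Object System.Windows.Forms.Form
    $form.Text = "FlashArray Snapshot Recovery"
    $combo = New-Object System.Windows.Forms.ComboBox
    $combo.Dock = [System.Windows.Forms.DockStyle]::Top
    # In the real tool this list would come from Get-Datastore; these are placeholder names
    @("VMFS-01", "VMFS-02") | ForEach-Object { [void]$combo.Items.Add($_) }
    $form.Controls.Add($combo)
    [void]$form.ShowDialog()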

Continue reading PowerShell GUI VMware and FlashArray Snapshot Management Tool

VMFS Snapshots and the FlashArray Part VI: Mounting a FlashArray VMFS Snapshot

This is part 6 of this 8-part series. Questions around managing VMFS snapshots have been cropping up a lot lately, and I realized I didn't have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing so. So I put a long series out here about how to do all of it.

The series being:

  1. Mounting an unresolved VMFS
  2. Why not force mount?
  3. Why might a VMFS resignature operation fail?
  4. How to correlate a VMFS and a FlashArray volume
  5. How to snapshot a VMFS on the FlashArray
  6. How to mount a VMFS FlashArray snapshot
  7. Restoring a single VM from a FlashArray snapshot
  8. Restoring a single virtual disk from a FlashArray snapshot

Using vCenter and our Web Client plugin, recovering a snapshot is a pretty straightforward process. The prerequisite here is having our Web Client plugin installed and configured; info on that here. If you want the manual steps instead, scroll down further: the whole process is described in detail without the plugin, using just our GUI and vCenter.
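
If you would rather script the manual path, here is a minimal PowerCLI/esxcli sketch. It assumes the FlashArray snapshot has already been copied to a new volume and connected to the host; the host and datastore names are hypothetical, and the esxcli v2 argument keys (volumelabel, etc.) may differ slightly by build:

    # Minimal sketch: find the unresolved VMFS copy and resignature it
    $vmhost = Get-VMHost -Name "esx01.lab.local"
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Equivalent of "esxcli storage vmfs snapshot list": shows VMFS copies seen as snapshots
    $esxcli.storage.vmfs.snapshot.list.Invoke()

    # Resignature the copy by its original volume label (argument key assumed to be "volumelabel")
    $resigArgs = $esxcli.storage.vmfs.snapshot.resignature.CreateArgs()
    $resigArgs.volumelabel = "VMFS-01"
    $esxcli.storage.vmfs.snapshot.resignature.Invoke($resigArgs)

    # Rescan so the resignatured datastore (named "snap-xxxxxxxx-VMFS-01") appears in vCenter
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba -RescanVmfs | Out-Null

Continue reading VMFS Snapshots and the FlashArray Part VI: Mounting a FlashArray VMFS Snapshot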

Issue with Manual VMFS-6 UNMAP and Block Count

So vSphere 6.5 introduced VMFS-6, which came with the highly desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP on it. Immediate reasons that come to mind are:

  • They disabled automatic UNMAP on the VMFS for some reason
  • They need to get space back quickly and don’t have time to wait

When you run manual UNMAP, one of the options you can specify is the block count. Since vSphere 5.5, the UNMAP process iterates through the VMFS, issuing reclaim to one small segment of free space at a time until UNMAP has been issued to all of it. The block count dictates how big each segment is; by default ESXi uses 200 blocks (which is 200 MB).
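
For reference, this is roughly what a manual UNMAP with an explicit block count looks like from PowerCLI. This is a sketch only; the host and datastore names are hypothetical, and the esxcli v2 argument keys mirror the options of "esxcli storage vmfs unmap" and may vary by build:

    # Minimal sketch: manual UNMAP of a VMFS with an explicit reclaim block count
    $vmhost = Get-VMHost -Name "esx01.lab.local"
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    $unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
    $unmapArgs.volumelabel = "VMFS-01"   # datastore (volume) label
    $unmapArgs.reclaimunit = "2000"      # blocks reclaimed per iteration; default is 200 (200 MB)
    $esxcli.storage.vmfs.unmap.Invoke($unmapArgs)

Continue reading Issue with Manual VMFS-6 UNMAP and Block Count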

Managing In-Guest UNMAP and Automatic VMFS-6 UNMAP with PowerCLI

Another UNMAP post. I was working on updating my best practices script the other day and I realized that a lot of UNMAP configuration from a PowerCLI standpoint was not well documented, especially for vSphere 6.5, which introduces automatic UNMAP to VMFS. Automatic UNMAP is great. But what if someone turns it off? Or what if, for some reason, I want to disable it? Or I want to make sure it is on? Well, there are a lot of ways to do this, so let's look at PowerCLI.
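
As a preview, here is a minimal PowerCLI/esxcli sketch that checks the automatic UNMAP (space reclamation) setting on a VMFS-6 datastore and turns it back on. The host and datastore names are hypothetical, and the argument keys follow the esxcli v2 naming for "esxcli storage vmfs reclaim config", which may vary by build:

    # Minimal sketch: check and set automatic UNMAP priority on a VMFS-6 datastore
    $vmhost = Get-VMHost -Name "esx01.lab.local"
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Current reclaim configuration (equivalent of "esxcli storage vmfs reclaim config get")
    $getArgs = $esxcli.storage.vmfs.reclaim.config.get.CreateArgs()
    $getArgs.volumelabel = "VMFS6-01"
    $esxcli.storage.vmfs.reclaim.config.get.Invoke($getArgs)

    # "low" is the default (automatic UNMAP on); "none" disables it
    $setArgs = $esxcli.storage.vmfs.reclaim.config.set.CreateArgs()
    $setArgs.volumelabel = "VMFS6-01"
    $setArgs.reclaimpriority = "low"
    $esxcli.storage.vmfs.reclaim.config.set.Invoke($setArgs)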

Continue reading Managing In-Guest UNMAP and Automatic VMFS-6 UNMAP with PowerCLI

Detecting what FlashArray VMFS Volumes Have Dead Space

Another UNMAP post, are you shocked? A common question that comes up is: which volumes have dead space? Which datastores should I run UNMAP on?

My usual response was, well, it is hard to say. Dead space is introduced when you move or delete a VM. The array will not release the space until you either delete the volume, overwrite it, or issue UNMAP. Until vSphere 6.5, UNMAP for VMFS was not automatic; you had to run a CLI command to do it. So that leads back to the question: I have 100 datastores, which ones should I run it on?

So to find out, you need to know two things:

  1. How much space the file system reports as currently being used.
  2. How much space the array is physically storing for the volume hosting that file system.
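
A minimal PowerCLI sketch of the first number is below (the datastore name is hypothetical). The second number, the physical space the FlashArray is storing for the backing volume, comes from the array itself (REST API or the Pure Storage PowerShell SDK) and is left as a comment here:

    # Minimal sketch: item 1, how much space the file system reports as used
    $ds = Get-Datastore -Name "VMFS-01"
    $vmfsUsedGB = [math]::Round($ds.CapacityGB - $ds.FreeSpaceGB, 2)
    Write-Host "VMFS reports $vmfsUsedGB GB in use on $($ds.Name)"

    # Item 2, the physical space the array is storing for the backing volume, would come from
    # the FlashArray (REST API or Pure Storage PowerShell SDK). A large difference between the
    # VMFS used space and the volume's footprint on the array (allowing for data reduction)
    # points to dead space worth reclaiming with UNMAP.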

Continue reading Detecting what FlashArray VMFS Volumes Have Dead Space

VMFS Capacity Monitoring in a Data Reducing World

A question recently came up on the Pure Storage Community Forum about VMFS capacity alerts that said, to paraphrase:

“I am constantly getting capacity threshold (75%) alerts on my VMFS volumes but when I look at my FlashArray volume used capacity it is nowhere near that in used space. What can I do to make the VMware number closer to the FlashArray one so I don’t get these alerts?”

This question really boils down to: what is the difference between these numbers, and how do I handle them? So, let's dig into this.
Continue reading VMFS Capacity Monitoring in a Data Reducing World

What’s New in vSphere 6.5 Storage vSpeaking Podcast

Quick post. Recently I had the pleasure of being invited, alongside Cormac Hogan, onto the Virtually Speaking Podcast hosted by Pete Flecha (Technical Marketing for VMware's Storage and Availability products, with a focus on VVols) and John Nicholson (Technical Marketing for VMware's Storage and Availability products, with a focus on vSAN).

We had a lot of fun and spoke about the new features of vSphere 6.5 from a core storage standpoint, much of which is what I have been posting about in recent days: UNMAP, VMFS-6, etc. The invitation was due to our work writing the "What's New in Core Storage in vSphere 6.5" white paper.

Announcing vSphere 6.5 Core Storage white paper

Check out the podcast here:

"Remember kids, the only difference between Science and screwing around is writing it down"
