Tag Archives: VMware

What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support

vSphere 6.7 core storage “what’s new” series:

Another feature added in vSphere 6.7 is support for a guest issuing UNMAP to a virtual disk presented through the NVMe controller.
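To make that concrete: in a Linux guest, UNMAP (TRIM/deallocate) requests are typically generated by running fstrim against a mounted filesystem. Here is a minimal sketch, assuming a Linux guest and a hypothetical mount point /mnt/data on a virtual disk behind the NVMe controller:

```python
import subprocess

# Hypothetical mount point on a virtual disk presented via the NVMe controller.
MOUNT_POINT = "/mnt/data"

# fstrim asks the filesystem to issue discard (UNMAP/deallocate) requests
# for blocks it no longer uses; -v reports how much space was trimmed.
result = subprocess.run(
    ["fstrim", "-v", MOUNT_POINT],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```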

Continue reading What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support

What is the latency stat QAVG?

A year or so ago I wrote a blog post about ESXi and storage queues that received a lot of wonderful feedback (thank you!), and I eventually turned it into a VMworld session and other engagements:

So in the past year I have had quite a few discussions around this, and one part has always bothered me a bit.

In ESXi, there are a variety of latency metrics:

  • GAVG. Guest average. Sometimes called “VM observed latency”. This is the amount of time it takes for an I/O to be completed after it leaves the VM: through ESXi, through the SAN (or iSCSI network), committed to the array, and acknowledged back.
  • KAVG. Kernel average. This is how long an I/O spends in the ESXi kernel. If this is anything but near zero, there is some kind of bottleneck (often a maxed-out queue).
  • DAVG. Device average. This is how long it takes for the I/O to be sent from the host, through the SAN to the array, and acknowledged back. (The three relate in a simple way, as sketched below.)
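In other words, what the guest observes (GAVG) is roughly the kernel time plus the device time. A small worked example in Python, using made-up values rather than real esxtop output:

```python
# Hypothetical esxtop-style readings in milliseconds (example values only).
davg_ms = 1.8   # device time: host adapter -> SAN -> array -> acknowledgement
kavg_ms = 0.2   # time the I/O spends inside the ESXi kernel

# GAVG is what the VM observes: roughly DAVG plus KAVG.
gavg_ms = davg_ms + kavg_ms
print(f"GAVG ~ {gavg_ms:.1f} ms (DAVG {davg_ms} ms + KAVG {kavg_ms} ms)")

# A healthy host keeps KAVG near zero; a sustained non-zero KAVG usually
# points at a saturated queue somewhere in the stack.
if kavg_ms > 0.5:
    print("Kernel latency is not near zero; check queue depths.")
```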

Continue reading What is the latency stat QAVG?

FlashArray Plugin 2.0 for vRealize Orchestrator

We have published the FlashArray plugin 2.0 for vRealize Orchestrator on the VMware Solutions Exchange! Download it here:

https://marketplace.vmware.com/vsx/solutions/pure-storage-flasharray-plugin-for-vmware-vrealize-orchestrator-2-0-0

We put a lot of work into this one and I am quite excited for customers and partners to start using it.

There are three primary enhancements:

  1. New workflows
  2. New actions
  3. New scriptable objects

Continue reading FlashArray Plugin 2.0 for vRealize Orchestrator

VMware Capacity Reporting Part IV: VVol Capacity Reporting

Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce multiple levels of thin provisioning AND data reduction, not all usage is equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.

NOTE: Examples in this series are given from a FlashArray perspective, so your mileage may vary depending on the type of array you have. The VMFS layer and above, though, is the same for all arrays. This is the benefit of VMFS: it abstracts the physical layer. It is also the downside, as I will describe in these posts. Continue reading VMware Capacity Reporting Part IV: VVol Capacity Reporting

VMware Storage Capacity Reporting Part II: VMFS and Thick Virtual Disks

Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce multiple levels of thin provisioning AND data reduction, not all usage is equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better. Continue reading VMware Storage Capacity Reporting Part II: VMFS and Thick Virtual Disks

ActiveCluster and vMSC Implementation Guide

Quick post. I have published my ActiveCluster implementation guide for vSphere Metro Storage Cluster (vMSC). You can find it here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/ActiveCluster_with_VMware/Implementing_vSphere_Metro_Storage_Cluster_With_ActiveCluster

ActiveCluster, if you are unfamiliar, is the Pure Storage FlashArray implementation of fully active/active replication, meaning a volume can exist and take writes simultaneously on two arrays. No additional hardware or licenses are required; it comes with Purity 5.0.0. A main focus, like everything else at Pure, is simplicity, and that goal has definitely been achieved in my view.

Continue reading ActiveCluster and vMSC Implementation Guide

VMware Storage Capacity Reporting Part I: VMFS and Thin Virtual Disks

Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce multiple levels of thin provisioning AND data reduction, not all usage is equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.

Continue reading VMware Storage Capacity Reporting Part I: VMFS and Thin Virtual Disks

Announcing Pure Storage FlashArray VVol GA

This is a blog post I have been waiting to write for a long time. We at Pure Storage are pleased to announce that vSphere Virtual Volume support on the FlashArray is officially GA!

The FlashArray now supports running VVols in Purity 5.0.0 and later. The cool thing about the FlashArray is the flexibility of the Purity Operating Environment: VVols are supported on all FA 400 models (405, 420, and 450), //M models (m10, m20, m50, m70), and FlashArray//X. Continue reading Announcing Pure Storage FlashArray VVol GA

VVol Lightboard Videos

Quick post. I put together some lightboard videos on vSphere Virtual Volumes. Lightboard videos are pretty fun to do; the unfortunate part is that I have horrible handwriting, so I apologize for that in advance.

A common question I get with these videos is: how do you write backwards? I don’t. I am nowhere near that skilled; as you can see, I can barely write forwards. I write normally, which appears backwards on camera, and the video team mirrors the video.

This is a three part series, the entire playlist can be found here:

Continue reading VVol Lightboard Videos

Moving from an RDM to a VVol

Migrating VMDKs or virtual mode RDMs to VVols is easy: Storage vMotion. No downtime, no pre-creating of volumes. Simple and fast. But physical mode RDMs are a bit different.
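As a rough illustration of the easy case, here is a minimal pyVmomi sketch that drives a Storage vMotion of a VM onto a VVol datastore. The vCenter address, credentials, VM name, and datastore name are hypothetical placeholders, and this is just one way to script it:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Hypothetical connection details and object names (placeholders only).
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vim_type, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "example-vm")
vvol_ds = find_by_name(vim.Datastore, "example-vvol-datastore")

# Storage vMotion: relocate the VM's storage to the VVol datastore while it runs.
spec = vim.vm.RelocateSpec(datastore=vvol_ds)
WaitForTask(vm.RelocateVM_Task(spec=spec))

Disconnect(si)
```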

As we all begrudgingly admit there are still more than a few Raw Device Mappings out there in VMware environments. Two primary use cases:

  • Microsoft Clustering. Virtual disks can only be used for Failover Clustering if all of the VMs are on the same ESXi host, which feels a bit like defeating the purpose. So most opt for RDMs so they can split the VMs across hosts.
  • Physical to virtual. Sharing copies of data between physical servers and virtual machines (or another hypervisor) is the most common reason I see these days, mostly around database dev/test scenarios. The VMDK format can keep that data from being easily shared, so RDMs provide a workaround.

Continue reading Moving from an RDM to a VVol