ActiveCluster and VAAI

Pure Storage recently introduced support for active/active replication on the FlashArray with a feature called ActiveCluster. A common question that comes up for active/active solutions used with VMware is: how is VAAI supported?

The reason it gets asked is that VAAI is often tricky, if not impossible, to support in an active/active scenario: the storage platform has to perform the offloaded operation not just on one array but on both. So XCOPY, which offloads VM copying, is often not supported. Let's take a look at VAAI with ActiveCluster, specifically these four features (a quick way to check per-device support for them follows the list):

  1. Hardware Assisted Locking (ATOMIC TEST & SET)
  2. Zero Offload (WRITE SAME)
  3. Space Reclamation (UNMAP)
  4. Copy Offload (XCOPY)
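
If you want to see what a given host currently reports for these primitives on a device, esxcli exposes it. Here is a minimal PowerCLI sketch; the host name is a placeholder and the output property names can vary slightly between PowerCLI releases:

  # Connect-VIServer to your vCenter first
  $esxcli = Get-EsxCli -VMHost "esxi-01.example.com" -V2   # example host name
  # ATS, Clone (XCOPY), Zero (WRITE SAME) and Delete (UNMAP) status per device
  $esxcli.storage.core.device.vaai.status.get.Invoke() |
      Select-Object Device, ATSStatus, CloneStatus, ZeroStatus, DeleteStatus |
      Format-Table -AutoSize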

Continue reading “ActiveCluster and VAAI”

Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware

A while back, shortly after ESXi 6.0 came out, I posted about how to do in-guest UNMAP with Windows. See the original post here:

Direct Guest OS UNMAP in vSphere 6.0

The high-level workflow, if you don't want to read the post, is (a command-level sketch of steps 2 and 5 follows the list):

  1. You delete a file in Windows
  2. Run Disk Optimizer to reclaim the space
  3. Windows issues UNMAP to the filesystem
  4. ESXi shrinks the virtual disk
  5. If EnableBlockDelete is enabled on the ESXi hosts, ESXi will issue UNMAP to reclaim the space on the array
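
To put steps 2 and 5 into concrete commands, here is a rough sketch; the drive letter and host name are just examples:

  # Inside the Windows guest (step 2): retrim the volume so Windows issues UNMAP
  Optimize-Volume -DriveLetter E -ReTrim -Verbose

  # From PowerCLI (step 5): make sure EnableBlockDelete is on so ESXi passes UNMAP to the array
  Get-VMHost "esxi-01.example.com" |
      Get-AdvancedSetting -Name "VMFS3.EnableBlockDelete" |
      Set-AdvancedSetting -Value 1 -Confirm:$false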

The original process had a few requirements (a quick PowerCLI check for them is sketched after the list):

  • ESXi 6.0+
  • VM hardware version 11+
  • Thin virtual disk
  • CBT cannot be enabled (though this restriction is removed in ESXi 6.5; see this post)
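
To sanity-check the VM-side requirements, something like this works from PowerCLI; the VM name is a placeholder and the HardwareVersion property name differs slightly across PowerCLI versions:

  $vm = Get-VM -Name "windows-vm-01"   # example VM name
  # Hardware version should be 11 or later
  $vm | Select-Object Name, HardwareVersion
  # Any virtual disk you want reclaimed needs to be thin
  Get-HardDisk -VM $vm | Select-Object Name, StorageFormat
  # CBT should be off (prior to ESXi 6.5)
  ($vm | Get-View).Config.ChangeTrackingEnabled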

Continue reading “Allocation Unit Size and Automatic Windows In-Guest UNMAP on VMware”

VAAI XCOPY not being used with Powered-On Windows VM

This is an issue I discovered along with my good friend and former colleague Drew Tonnesen a few years back, and it has cropped up a few times in recent days. I noticed there wasn't really any information about it online, so it made sense to put a quick post together.

In short, clone or Storage vMotion operations for a Windows 2012 R2 virtual machine complete much more slowly when the VM is powered on than when it is powered off. The common explanation is that VAAI XCOPY does not work when the VM is powered on. This is not exactly true. Let me explain. Continue reading “VAAI XCOPY not being used with Powered-On Windows VM”

FlashArray and VMware documentation update for vSphere 6

I have completed updates to two of my main VMware vSphere documents for the Pure Storage FlashArray: the standard best practices document and the white paper explaining VAAI in detail and how it works on the FlashArray.

Best Practices Document Link

VAAI White Paper Link

The best practices document has mainly been updated with information this blog has covered over the past couple of months. Notably:

  • vSphere 6 updates, supported Web Client Plugin versions, changes in virtual disk recommendations, in-guest UNMAP support, etc.
  • VMFS UNMAP changes when it comes to best practice recommendations
  • vRealize Operations Management Pack
  • EFI-enabled VMs and Disk.DiskMaxIOSize (a quick way to check the current host value is sketched below)
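
For that last item, you can pull the current value per host with PowerCLI; check the best practices document itself for the value recommended for EFI-enabled VMs:

  # Current Disk.DiskMaxIOSize (in KB) on each host; see the BP doc for the recommended value
  Get-VMHost | Get-AdvancedSetting -Name "Disk.DiskMaxIOSize" |
      Select-Object Entity, Value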

The VAAI document received a similar set of updates:

  • vSphere 6 changes, mainly focused on the thin virtual disk XCOPY enhancements
  • UNMAP changes, block counts, performance and in-guest support (EnableBlockDelete)

Both documents are also updated for the FlashArray//m, but that is mainly a cosmetic change: nothing really changes for the VMware environment and no recommendations are affected. Of course, the documents have also been cleaned up and re-arranged into a semi-new, more reader-friendly format.

Important! If you have old versions of these documents, delete them! These get updated frequently (a few times a year at least) and the changes can be important. Whenever you need to refer to the guides, please check back on the Pure Storage community for the latest versions.

Enjoy! As always, feedback on these documents is welcome.

UNMAP Block Count Behavior Change in ESXi 5.5 P3+

I was recently doing some troubleshooting for a customer who was using my UNMAP PowerCLI script and discovered a change in UNMAP behavior in ESXi 5.5. The issue was that the script was taking quite a while to complete. After some logic optimizations and increased timeouts, the script sped up a bit and fewer timeout errors occurred, but a bunch of the UNMAP operations were still taking a lot longer than expected. Eventually we threw our hands up and said it was good enough. More recently, I was testing a 3rd party UNMAP tool and ran into similar behavior, so I dug into it a bit more and found some semi-unexpected changes in how UNMAP works, specifically the behavior when leveraging non-default block iteration counts. Continue reading “UNMAP Block Count Behavior Change in ESXi 5.5 P3+”
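
For context, the block count in question is the reclaim unit passed to esxcli when running a VMFS UNMAP. Here is a minimal PowerCLI sketch of kicking one off with a non-default count; the host name, datastore name and count are just examples:

  $esxcli = Get-EsxCli -VMHost "esxi-01.example.com" -V2   # example host
  # Equivalent to: esxcli storage vmfs unmap -l MyDatastore -n 10000
  $unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
  $unmapArgs.volumelabel = "MyDatastore"   # example datastore name
  $unmapArgs.reclaimunit = 10000           # non-default block count (the default is 200)
  $esxcli.storage.vmfs.unmap.Invoke($unmapArgs)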

FlashArray XCOPY Data Reduction Reporting Enhancement

Recently the Purity Operating Environment 4.1.1 release came out with quite a few enhancements. Many of these are for replication, there are some new GUI features, and the new vSphere Web Client Plugin is included. What I want to talk about here is a space reporting enhancement that was made concerning VAAI XCOPY (Full Copy). First, some history…

First, a quick refresher on XCOPY. XCOPY is a VAAI feature that offloads virtual disk copying and migration within a single array, for operations like Storage vMotion, cloning, or Deploy from Template. Telling the array to copy or move something from location A to location B is much faster than having ESXi issue tons of reads and writes over the SAN; it also reduces CPU overhead on the ESXi host and traffic on the SAN. Faster cloning/migration and less overhead–yay! This lets ESXi focus on what it does best, managing and running virtual machines, while letting the array do what it does best, managing and moving data. Continue reading “FlashArray XCOPY Data Reduction Reporting Enhancement”
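
If you want to confirm XCOPY is in play on your hosts, the relevant knobs are just advanced settings. A rough PowerCLI sketch; refer to the FlashArray best practices document for the transfer size Pure recommends:

  # 1 = XCOPY (Full Copy) offload enabled on the host
  Get-VMHost | Get-AdvancedSetting -Name "DataMover.HardwareAcceleratedMove" |
      Select-Object Entity, Value
  # Maximum XCOPY transfer size in KB
  Get-VMHost | Get-AdvancedSetting -Name "DataMover.MaxHWTransferSize" |
      Select-Object Entity, Value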

Pure Storage and VMware VAAI

Today I posted a new document to our repository on purestorage.com: Pure Storage and VMware Storage APIs for Array Integration—VAAI. This is a new white paper that describes in detail the VAAI block primitives that VMware offers and that we support. It also sets performance expectations, comparing before and after, and showing how the operations behave at scale. Some best practices are listed as well; the why and how of those recommendations are also described within.

I have to say, especially when it comes to XCOPY, I have never seen a storage array do so well with it. It is really quite impressive how fast XCOPY sessions complete and how scaling it up (in terms of the number of VMs or the size of the VMDKs) doesn't weaken the process at all. The main purpose of this post is to alert you to the new document, but I will go over some high-level performance information as well. Read the document for the details and more.



Continue reading “Pure Storage and VMware VAAI”

Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types

A bit of a long, rambling one here. Here goes…

The choice of virtual disk allocation mechanism was somewhat of a hot topic a few years back, with the “thin on thin” question almost becoming a religious debate at times. That debate has essentially cooled down: vendors have made their recommendations, end users have their preferences, and that's that. With the advent of the true all-flash array, such as the Pure Storage FlashArray with deduplication, compression, pattern removal and so on, I feel this somewhat basic topic is worth revisiting now.

To review, there are three main virtual disk types (there are others, namely SESparse, but I am going to stick with the most common for this discussion); a short PowerCLI sketch for creating each type follows the list:

  • Eagerzeroedthick–This type is fully allocated upon creation. This means that it reserves the entire indicated capacity on the VMFS volume and zeroes the entire encompassed region on the underlying storage prior to allowing writes from a guest OS. Consequently, it takes longer to provision, as it has to write GBs or TBs of zeroes before the virtual disk creation is complete and ready to use. I will refer to this as EZT from now on.
  • Zeroedthick–This type fully reserves the space on VMFS but does not pre-zero the underlying storage. Zeroing is done on an as-needed basis: when a guest OS writes to a new segment of the virtual disk, the encompassing block is zeroed first, then the new write is committed.
  • Thin–This type neither reserves space on the VMFS nor pre-zeroes. Zeroing is done on an as-needed basis like zeroedthick. The virtual disk's physical capacity grows in segments defined by the block size of the VMFS, usually 1 MB.
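
If you want to play with the three types side by side, PowerCLI exposes the allocation choice as StorageFormat. A quick sketch; the VM name and sizes are placeholders:

  $vm = Get-VM -Name "test-vm"   # example VM
  # Thin: no space reserved on VMFS, zeroed on demand
  New-HardDisk -VM $vm -CapacityGB 50 -StorageFormat Thin
  # Zeroedthick (just "Thick" in PowerCLI): space reserved, zeroed on demand
  New-HardDisk -VM $vm -CapacityGB 50 -StorageFormat Thick
  # Eagerzeroedthick (EZT): space reserved and fully zeroed up front
  New-HardDisk -VM $vm -CapacityGB 50 -StorageFormat EagerZeroedThick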

Continue reading “Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types”

VMware Dead Space Reclamation (UNMAP) and Pure Storage

One of the main things I have been doing in my first few weeks at Pure Storage (which has been nothing but awesome so far, by the way) is going through all of our VMware best practices and integration points: testing them, seeing how they work, and looking at whether they can be improved. The latest thing I looked into was Dead Space Reclamation (which from here on out I will just refer to as UNMAP) with the Pure Storage FlashArray, specifically on ESXi 5.5. This is a pretty straightforward process, but I did find something interesting that is worth noting.


Continue reading “VMware Dead Space Reclamation (UNMAP) and Pure Storage”