ESXi IO Operations Limit Parameter and IO Balance

Quick post here. I am working on updating some documentation and I wanted to add a bit more color to a section on changing the IO Operations limit for ESXi NMP Round Robin devices. The Pure Storage recommendation is to change this value from the default of 1,000 down to 1, so that ESXi switches logical paths after every single I/O instead of after every 1,000. There are some performance benefits to this, and some evidence for improved failover time (in the case of a path failure) with this setting. I am not going to get into the veracity of those benefits right now. What I wanted to share here is that there is no doubt changing this to 1 makes a big difference to I/O balance on the array itself. Continue reading “ESXi IO Operations Limit Parameter and IO Balance”
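
For reference, here is a minimal PowerCLI sketch of one way to apply that setting to existing Pure Storage devices. The NAA prefix check and the assumption that you are already connected to vCenter are mine, and an esxcli SATP claim rule is the usual way to make the value stick for newly added devices:

```powershell
# Minimal PowerCLI sketch: set Round Robin with an IO Operations limit of 1 on
# existing Pure Storage devices (identified here by the Pure NAA prefix).
# Assumes Connect-VIServer has already been run against the target vCenter.
Get-VMHost | Get-ScsiLun -LunType disk |
    Where-Object { $_.CanonicalName -like "naa.624a9370*" } |
    Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1
```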

FlashStack VMware vSphere Reference Architecture

Pure Storage announced last week our very first converged architecture offering appropriately named FlashStack. The initial release of FlashStack is built off of Cisco hardware (UCS of course) and the FlashArray. We have two reference architectures presently, one for VMware Horizon View and one for general purpose VMware vSphere environments (choose your own guest OSes). My colleague Ravi Venkat (@ravivenk) architected the View ref arch, while I focused on the general vSphere one. In this blog post I am going to overview what we did with the vSphere ref arch. For more information on either, refer to the respective reference architecture white papers at the usual place:

pure_storage_whitepaper_flashstack_horizonview

pure_storage_whitepaper_flashstack_vsphere

Continue reading “FlashStack VMware vSphere Reference Architecture”

Enhanced UNMAP script using PowerCLI and the RESTful API

*** UPDATE: Please refer to the post at this link for updated information on this script. ***

The most common request I get for scripts here at Pure Storage is an UNMAP script using PowerCLI. I have a basic one here that does the trick: it UNMAPs Pure Storage volumes in a vCenter. That being said, it is pretty dumb; it doesn’t tell you much about what happened other than which volumes it is reclaiming (or not reclaiming), and then it moves on. A few requests have come in recently for something a little more in-depth, most notably the ability to see how much space has been reclaimed. This information cannot be gathered from the VMware side of things; it has to come from the FlashArray.

There are two options here: either use our REST API directly or use our PowerShell toolkit (which just wraps the REST calls) to get this information. For this script I chose to use the REST API directly from within PowerShell. Here is what the script does (a condensed sketch follows the list):

  1. Connects to the vCenter and FlashArray
  2. Finds all of the datastores and counts how many are actually Pure Storage volumes (NAA comparison)
  3. Iterates through all of the datastores
  4. Skips it if it is not Pure
  5. If it is, the current data reduction ratio is reported, as is the current physical written capacity on the FlashArray.
  6. Runs UNMAP on the datastore
  7. Reports the new data reduction and physical space after UNMAP completes and how much was reclaimed.
  8. Repeats for the rest of the volumes.
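
To make the flow concrete, here is a condensed sketch of that loop. It is not the actual script: the array address, API version, credentials, volume name, NAA prefix, and block count are placeholders, the REST field names should be checked against the API version you are running, and the real script handles the NAA-to-volume mapping, logging, and error handling.

```powershell
# Condensed sketch of the per-datastore flow described above (placeholders throughout).
# Authenticate to the FlashArray REST API with an API token obtained beforehand.
# (The array's self-signed certificate may need to be trusted or validation relaxed first.)
$flashArray = "flasharray.example.com"
$apiVersion = "1.4"
$authBody   = @{ api_token = "your-api-token" } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://$flashArray/api/$apiVersion/auth/session" `
    -Body $authBody -ContentType "application/json" -SessionVariable faSession | Out-Null

foreach ($datastore in Get-Datastore | Where-Object { $_.Type -eq "VMFS" }) {
    # Skip datastores that are not on Pure Storage volumes (NAA prefix comparison).
    $naa = $datastore.ExtensionData.Info.Vmfs.Extent[0].DiskName
    if ($naa -notlike "naa.624a9370*") { continue }

    # Space and data reduction before UNMAP. In the real script the volume name is
    # looked up from the NAA; here it is a placeholder.
    $volName = "MyPureVolume"
    $before  = Invoke-RestMethod -Method Get -WebSession $faSession `
        -Uri "https://$flashArray/api/$apiVersion/volume/${volName}?space=true"

    # Run UNMAP on the datastore via esxcli on one of its hosts.
    $esxcli = Get-EsxCli -VMHost ($datastore | Get-VMHost | Select-Object -First 1) -V2
    $unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
    $unmapArgs.volumelabel = $datastore.Name
    $unmapArgs.reclaimunit = "60000"          # UNMAP block count
    $esxcli.storage.vmfs.unmap.Invoke($unmapArgs) | Out-Null

    # Give the FlashArray time to update its space numbers, then report the difference.
    Start-Sleep -Seconds 60
    $after = Invoke-RestMethod -Method Get -WebSession $faSession `
        -Uri "https://$flashArray/api/$apiVersion/volume/${volName}?space=true"
    Write-Host ("{0}: data reduction {1}, physical space reclaimed {2} bytes" -f `
        $datastore.Name, $after.data_reduction, ($before.volumes - $after.volumes))
}
```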

The script reports all of this to the console window, but it also always writes it to a log file through Add-Content. If you don’t want it to return the info to the console, simply delete the Write-Host lines. If you don’t want it to log, delete the Add-Content lines.

There are a few required parameters: vCenter information (IP, username, password), FlashArray information (IP, username, password), the UNMAP block count, and a log file location. These are hard-coded parameters, but that can easily be changed by switching them to Read-Host prompts.
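
For example, swapping the hard-coded values for prompts is as simple as assignments like these (the variable names are illustrative, not the script’s actual names):

```powershell
# Prompt for the values instead of hard-coding them.
$vcenterServer = Read-Host "Enter the vCenter IP or FQDN"
$vcCredentials = Get-Credential -Message "vCenter credentials"
$logFilePath   = Read-Host "Enter the log file path"
$blockCount    = Read-Host "Enter the UNMAP block count"
```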

You may also note that after each UNMAP the script sleeps for 60 seconds. I do this to make sure the FlashArray has time to update its space information right after the UNMAP. 60 seconds is VERY conservative; probably 10 or so is fine, so feel free to mess with that number if you don’t like waiting. I also have another sleep at the end of each datastore operation to give you a quick chance to review the latest results before the script starts spewing the next datastore’s information on the screen (note that this update didn’t make it into the video demo below, which doesn’t wait after each datastore).

See the script in action below. Essentially I am deleting a bunch of VMs across 4 datastores and then running the UNMAP. You can see the space get reclaimed on the FlashArray.

Note: You need particular access to vCenter to run UNMAP (see a blog post about that here). For the FlashArray, only Read Only access is needed (higher, of course, is fine too).

Get the script here:

https://github.com/codyhosterman/powercli/blob/master/unmapsdk.ps1

Reclaiming in-guest capacity with VMware and Pure Storage

UPDATE: In-guest UNMAP is now supported in a VM, and SDelete and the like are no longer required. Please refer to these posts:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

Direct Guest OS UNMAP in vSphere 6.0

Reclaiming “dirty” or “dead” space is a topic that comes across my desk quite often these days. Since the FlashArray is a data reduction array, it is especially important that space is not wasted on the array; wasted space throws off the economics. Therefore, UNMAP is an important VAAI feature to leverage in any AFA environment. Supporting UNMAP is definitely table stakes for AFAs.

Note: I am going to use the terms “dead,” “dirty,” and “stranded” interchangeably to describe space that needs to be reclaimed. So anyways…

Unfortunately, UNMAP in its current form does not satisfy all of the reclamation use cases. UNMAP will only reclaim space on any array when capacity is cleared from the VMFS volume, that is, when a VM (or virtual disk) is deleted or migrated elsewhere. It cannot reclaim space when data is “deleted” inside a virtual machine by the guest OS when virtual disks are in use. VMware does not know this capacity has been cleared, and therefore neither can the array. So until that virtual disk is deleted or moved, the capacity cannot be reclaimed with UNMAP. To be clear: UNMAP with vmkfstools (in ESXi 5.0/5.1) or esxcli (in ESXi 5.5) does not allow you to reclaim space that remains stranded inside of virtual disks.
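
As the update at the top of this post notes, the workaround at the time was to zero the dead space from inside the guest so that the array’s data reduction could take it back. A rough sketch for a Windows guest follows; the SDelete path, switch, and drive letter are placeholder assumptions, so check your SDelete version for the exact zero-free-space option:

```powershell
# Rough sketch of the old in-guest workaround (run inside the Windows guest, not on ESXi).
# SDelete overwrites free space with zeroes; a data reduction array like the FlashArray
# then removes the zero pattern, effectively recovering the dead space on the array even
# though VMFS-level UNMAP cannot reach it.
& "C:\Tools\sdelete.exe" -z C:
```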

Continue reading “Reclaiming in-guest capacity with VMware and Pure Storage”

Pure Storage Web Client Plugin version 1.1.13

I have done a few posts on here that involve the Pure Storage Plugin for the vSphere Web Client (here and here) since I joined. Well, here is another. We just released a new version of the Web Client Plugin (I am going to refer to it as the WCP for the rest of this post because I am a lazy typist). We bundle the WCP with Purity, so the WCP is installed into, updated in, and uninstalled from vCenter via our GUI/CLI (yes, we do also offer a mechanism to update it outside of upgrading Purity itself). Our latest release of Purity, 4.0.12, includes WCP version 1.1.13; while there is no new functionality, there are two important fixes.

webclient_intro

Continue reading “Pure Storage Web Client Plugin version 1.1.13”

The Pure Storage Content Pack 1.0 for VMware vCenter Log Insight

The Pure Storage Content Pack for VMware vCenter Log Insight is now live on the VMware Solution Exchange! Download it today for free. As past posts have shown, I have done a decent amount of work with Log Insight here at Pure and in my previous job. It is a product from VMware I have really liked for a variety of reasons, a big one being that it is so very easy to use. We really improved our syslog feature on the FlashArray in the Purity 4.0 release, so it was the perfect time to create our first content pack!

purestoragecontentpack

Continue reading “The Pure Storage Content Pack 1.0 for VMware vCenter Log Insight”

Pure Storage Plugin for the vSphere Web Client Firewall Requirements

This is a question that has come up quite often, and I have blogged about it for several different products in the past. What firewall rules do I need to create to install and use the Pure Storage Plugin for the vSphere Web Client? Luckily, this is fairly simple. For instructions on installing and using the Web Client plugin, check out these posts here and here.

When you go to install the plugin from the array GUI and you see the following error, it could very well be a network issue:

firewall error

Continue reading “Pure Storage Plugin for the vSphere Web Client Firewall Requirements”

Pure Storage and VMware VAAI

Today I posted a new document to our repository on purestorage.com: Pure Storage and VMware Storage APIs for Array Integration—VAAI. This is a new white paper that describes in detail the VAAI block primitives that VMware offers and that we support. Furthermore, performance expectations are described, comparing behavior before and after and showing how the operations do at scale. Some best practices are listed as well; the why and how of those recommendations are also described within.

I have to say, especially when it comes to XCOPY, I have never seen a storage array do so well with it. It is really quite impressive how fast XCOPY sessions complete and how scaling up (in terms of the number of VMs or the size of the VMDKs) doesn’t weaken the process at all. The main purpose of this post is to alert you to the new document, but I will go over some high-level performance information as well. Read the document for the details and more.
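
As a quick aside, if you want to verify that the block primitives are enabled on your hosts, a small PowerCLI check like the sketch below works. These are the standard ESXi advanced settings for ATS, XCOPY, and WRITE SAME; nothing here is Pure-specific:

```powershell
# Minimal PowerCLI sketch: check that the VAAI block primitives are enabled on each host
# (a value of 1 means enabled).
$vaaiSettings = "VMFS3.HardwareAcceleratedLocking",
                "DataMover.HardwareAcceleratedMove",
                "DataMover.HardwareAcceleratedInit"
Get-VMHost | ForEach-Object {
    $vmhost = $_
    Get-AdvancedSetting -Entity $vmhost -Name $vaaiSettings |
        Select-Object @{Name="Host";Expression={$vmhost.Name}}, Name, Value
}
```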


vaai_pdf_cover

Continue reading “Pure Storage and VMware VAAI”

Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types

A bit of a long rambling one here. Here it goes…

The choice of virtual disk allocation mechanism was somewhat of a hot topic a few years back, with the “thin on thin” question almost becoming a religious debate at times. That discussion has essentially cooled down: vendors have made their recommendations, end users have their preferences, and that’s that. With the advent of the true all-flash array, such as the Pure Storage FlashArray with deduplication, compression, pattern removal, etc., I feel this somewhat basic topic is worth revisiting now.

To review, there are three main virtual disk types (there are others, namely SESparse, but I am going to stick with the most common for this discussion); a quick example of creating each follows the list:

  • Eagerzeroedthick–This type is fully allocated upon creation. This means that it reserves the entire indicated capacity on the VMFS volume and zeroes the entire encompassed region on the underlying storage prior to allowing writes from a guest OS. This means that it takes longer to provision, as it has to write GBs or TBs of zeroes before the virtual disk creation is complete and ready to use. I will refer to this as EZT from now on.
  • Zeroedthick–This type fully reserves the space on the VMFS but does not pre-zero the underlying storage. Zeroing is done on an as-needed basis: when a guest OS writes to a new segment of the virtual disk, the encompassing block is zeroed first, then the new write is committed.
  • Thin–This type neither reserves space on the VMFS nor pre-zeroes. Zeroing is done on an as-needed basis like zeroedthick. The virtual disk’s physical capacity grows in segments defined by the block size of the VMFS, usually 1 MB.
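
For reference, here is a minimal PowerCLI sketch of creating a virtual disk of each type; the VM name and the 10 GB size are placeholders:

```powershell
# Minimal PowerCLI sketch: add a 10 GB virtual disk of each allocation type to an
# existing VM. -StorageFormat maps to the three types described above.
$vm = Get-VM -Name "TestVM"
New-HardDisk -VM $vm -CapacityGB 10 -StorageFormat EagerZeroedThick   # eagerzeroedthick (EZT)
New-HardDisk -VM $vm -CapacityGB 10 -StorageFormat Thick              # zeroedthick (lazy zeroed)
New-HardDisk -VM $vm -CapacityGB 10 -StorageFormat Thin               # thin
```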

Continue reading “Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types”

PowerShell and the Pure Storage FlashArray CLI

Scripting is a wonderful thing; it saves me tons of time. PowerShell is no exception. VMware offers a very robust set of PowerShell cmdlets (called PowerCLI) that allows you to do essentially anything you can think of in vSphere. Of course, this is all specific to VMware or Windows. What about including Pure Storage commands in PowerShell (PowerCLI) scripts? It is actually pretty simple using a readily available SSH plugin for PowerShell.
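
As a taste of what that looks like, here is a minimal sketch using the Posh-SSH module (one readily available option; the array address, credentials, and the purevol command are illustrative):

```powershell
# Minimal sketch: run a FlashArray CLI command over SSH from PowerShell.
# Assumes the Posh-SSH module is installed (Install-Module Posh-SSH) and that the
# array address and credentials below are replaced with real ones.
Import-Module Posh-SSH

$faCreds = Get-Credential -Message "FlashArray credentials"
$session = New-SSHSession -ComputerName "flasharray.example.com" -Credential $faCreds -AcceptKey

# List the volumes on the array with the purevol CLI command.
$result = Invoke-SSHCommand -SSHSession $session -Command "purevol list"
$result.Output

Remove-SSHSession -SSHSession $session | Out-Null
```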

flasharray

Continue reading “PowerShell and the Pure Storage FlashArray CLI”