Tag Archives: flasharray

Reclaiming in-guest capacity with VMware and Pure Storage

UPDATE: In-guest UNMAP is now supported inside a VM, so tools like sDelete are no longer required. Please refer to these posts:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

Direct Guest OS UNMAP in vSphere 6.0

Reclaiming “dirty” or “dead” space is a topic that comes across my desk quite often these days. Since the FlashArray is a data reduction array, it is especially important that space is not wasted on the array, as wasted space throws off the economics. Therefore UNMAP is an important VAAI feature to leverage in any AFA environment; supporting UNMAP is definitely table stakes for AFAs.

Note: I am going to use the terms “dead”, “dirty”, and “stranded” interchangeably to refer to space that needs to be reclaimed. So anyways…

Unfortunately, UNMAP in its current form does not satisfy all of the reclamation use cases. UNMAP only reclaims space, on any array, when capacity is cleared from the VMFS volume, such as when a VM (or a virtual disk) is deleted or migrated elsewhere. It cannot reclaim space when data is “deleted” inside a virtual machine by the guest OS, because VMware does not know that capacity has been cleared and neither does the array. Until the virtual disk is deleted or moved, that capacity cannot be reclaimed. So to be clear, UNMAP with vmkfstools (in ESXi 5.0/5.1) or esxcli (in ESXi 5.5) does not allow you to reclaim space that remains stranded inside of virtual disks.
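To make the distinction concrete, here is a rough sketch of the two levels of reclamation. The datastore name, drive letter, and mount point are placeholders; the VMFS-level commands only reclaim space freed by deleted or moved VMs and virtual disks, while the guest-side commands simply zero free space so that a data reduction array like the FlashArray can discard the zeroes, which is the pre-vSphere 6.x workaround referenced in the update above.

    # VMFS-level reclaim only (space freed by deleted or moved VMs/VMDKs)
    cd /vmfs/volumes/MyDatastore && vmkfstools -y 60    # ESXi 5.0/5.1, reclaim using 60% of free space
    esxcli storage vmfs unmap -l MyDatastore            # ESXi 5.5

    # In-guest zeroing, run inside the guest OS
    sdelete.exe -z C:                                                     # Windows (SDelete 1.6+)
    dd if=/dev/zero of=/mnt/data/zerofile bs=1M; rm /mnt/data/zerofile    # Linux: fill free space with zeroes, then delete the file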

Continue reading Reclaiming in-guest capacity with VMware and Pure Storage

The Pure Storage Content Pack 1.0 for VMware vCenter Log Insight

The Pure Storage Content Pack for VMware vCenter Log Insight is now live on the VMware Solution Exchange! Download it today for free. As past posts have shown, I have done a decent amount of work with Log Insight here at Pure and in my previous job. It is a product from VMware that I have really liked for a variety of reasons, a big one being that it is so easy to use. We really improved our syslog feature on the FlashArray in the Purity 4.0 release, so it was the perfect time to create our first content pack!

Continue reading The Pure Storage Content Pack 1.0 for VMware vCenter Log Insight

Pure Storage and VMware VAAI

Today I posted a new document to our repository on purestorage.com: Pure Storage and VMware Storage APIs for Array Integration—VAAI. This new white paper describes in detail the VAAI block primitives that VMware offers and that we support. It also sets performance expectations, comparing before and after and showing how the operations behave at scale. Some best practices are listed as well, and the why and how behind those recommendations are explained within.

I have to say, especially when it comes to XCOPY, I have never seen a storage array do so well with it. It is really quite impressive how fast XCOPY sessions complete and how scaling up (in terms of the number of VMs or the size of the VMDKs) doesn’t weaken the process at all. The main purpose of this post is to alert you to the new document, but I will go over some high-level performance information as well. Read the document for the details and more.
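If you want a quick look at VAAI on a host before reading the paper, the block primitives are plain ESXi advanced settings. A minimal sketch follows; the 16 MB XCOPY transfer size shown is only an illustration of the kind of tuning the paper discusses, not a quote from it.

    # Confirm the block primitives are enabled (a value of 1 means enabled)
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove   # XCOPY
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit   # Block Zero (WRITE SAME)
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking    # ATS

    # Example only: raise the maximum XCOPY transfer size to 16 MB (value is in KB)
    esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 16384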


Continue reading Pure Storage and VMware VAAI

Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types

A bit of a long, rambling one here. Here goes…

Virtual disk allocation mechanism choice was somewhat of a hot topic a few years back, with the “thin on thin” question almost becoming a religious debate at times. That debate has essentially cooled down: vendors have made their recommendations, end users have their preferences, and that’s that. With the advent of the true all-flash array, such as the Pure Storage FlashArray with deduplication, compression, pattern removal and so on, I feel this somewhat basic topic is worth revisiting now.

To review, there are three main virtual disk types (there are others, namely SESparse, but I am going to stick with the most common for this discussion); a quick vmkfstools example for creating each type follows the list:

  • Eagerzeroedthick–This type is fully allocated upon creation. It reserves the entire indicated capacity on the VMFS volume and zeroes the entire encompassed region on the underlying storage before allowing writes from a guest OS. This means that it takes longer to provision, since GBs or TBs of zeroes have to be written before the virtual disk creation is complete and ready to use. I will refer to this as EZT from now on.
  • Zeroedthick–This type fully reserves the space on the VMFS but does not pre-zero the underlying storage. Zeroing is done on an as-needed basis: when a guest OS writes to a new segment of the virtual disk, the encompassing block is zeroed first and then the new write is committed.
  • Thin–This type neither reserves space on the VMFS nor pre-zeroes the underlying storage. Zeroing is done on an as-needed basis, like zeroedthick. The virtual disk's physical capacity grows in segments defined by the block size of the VMFS, usually 1 MB.
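For reference, each of these types can be created directly with vmkfstools; the size and paths below are placeholders.

    # Create a 40 GB virtual disk of each allocation type
    vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/MyDatastore/MyVM/ezt.vmdk
    vmkfstools -c 40G -d zeroedthick /vmfs/volumes/MyDatastore/MyVM/zt.vmdk
    vmkfstools -c 40G -d thin /vmfs/volumes/MyDatastore/MyVM/thin.vmdk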

Continue reading Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types

Deeper dive on vSphere UNMAP block count with Pure Storage

I posted a week or so ago about the ESXCLI UNMAP process with vSphere 5.5 on the Pure Storage FlashArray here and came to the conclusion that larger block counts are highly beneficial to the UNMAP process. So the recommendation was simply to use a block count sufficiently higher than the default of 200 MB to speed up the UNMAP operation. I received a few questions about a more specific recommendation (and had some myself), so I decided to dive into this a little deeper to see if I could provide some more concrete guidance. In the end, a large block count is perfectly fine. If you want to know more details, read on!
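For context, the block count is the reclaim unit passed to esxcli; the larger value below is purely illustrative, not a specific recommendation from the post.

    # Default is equivalent to a reclaim unit of 200 (1 MB) blocks per iteration
    esxcli storage vmfs unmap -l MyDatastore
    # The same operation with a much larger, illustrative reclaim unit
    esxcli storage vmfs unmap -l MyDatastore -n 60000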

Continue reading Deeper dive on vSphere UNMAP block count with Pure Storage

VMware Dead Space Reclamation (UNMAP) and Pure Storage

One of the main things I have been doing in my first few weeks at Pure Storage (which has been nothing but awesome so far, by the way) is going through all of our VMware best practices and integration points: testing them and seeing how they work or whether they can be improved. The latest thing I looked into was Dead Space Reclamation (which from here on out I will just refer to as UNMAP) with the Pure Storage FlashArray, specifically on ESXi 5.5. This is a pretty straightforward process, but I did find something interesting that is worth noting.
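The process itself boils down to confirming the device supports UNMAP and then reclaiming against the datastore. A minimal sketch with placeholder identifiers (naa.624a9370… is just the Pure NAA prefix with a made-up suffix):

    # Verify Delete Status (UNMAP support) on the FlashArray device
    esxcli storage core device vaai status get -d naa.624a9370xxxxxxxxxxxxxxxx
    # Reclaim dead space on the VMFS datastore with the ESXi 5.5 native command
    esxcli storage vmfs unmap -l MyDatastore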

Continue reading VMware Dead Space Reclamation (UNMAP) and Pure Storage

Purity 4.0 Release: New hardware models, replication and more!

Ah, my first official post during my tenure at Pure, and it couldn’t have happened at a better time: just in time for the Purity 4.0 release, which we announced today. While there are plenty of under-the-cover enhancements, I am going to focus on the two biggest parts of the release: new hardware and replication. There are other features, for example hardware security token locking, but I am not going to go into those in this post. So first let’s talk about the advancement in hardware!

Continue reading Purity 4.0 Release: New hardware models, replication and more!

Changing the default VMware Round Robin IO Operation Limit value for Pure Storage FlashArray devices

This is a topic I have posted about in the past, but this time I am going to speak about it with the Pure Storage FlashArray. Anyone familiar with the VMware Native Multipathing Plugin (NMP) probably knows about the Round Robin “IOPS” value, which I will also refer to interchangeably as the IO Operation Limit. This value dictates how often NMP switches paths for a device: after a configured number of I/Os, NMP moves to a different path. The default value is 1,000, but it can be set as low as 1. For the highest performance, Pure recommends changing this setting to 1 for all FlashArray devices. The tricky thing is that it has to be done for every device on every host, and doing this in a simple way isn’t immediately obvious. But here is the procedure.
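As a rough sketch of the esxcli pieces involved: the device identifier below is a placeholder, and whether the claim rule should use VMW_SATP_ALUA or another SATP depends on your Purity and ESXi versions, so treat the rule as an assumption and confirm it against Pure's current best practices.

    # Existing device: switch to Round Robin and set the IO Operation Limit to 1
    esxcli storage nmp device set -d naa.624a9370xxxxxxxxxxxxxxxx -P VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.624a9370xxxxxxxxxxxxxxxx --type=iops --iops=1

    # Future devices: a SATP claim rule so new FlashArray volumes default to Round Robin with iops=1
    esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"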

Continue reading Changing the default VMware Round Robin IO Operation Limit value for Pure Storage FlashArray devices