Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types

A bit of a long, rambling one here. Here goes…

Virtual disk allocation mechanism choice was somewhat of a hot topic a few years back, with the “thin on thin” question almost becoming a religious debate at times. That debate has essentially cooled down: vendors have made their recommendations, end users have their preferences, and that’s that. With the advent of the true all-flash array such as the Pure Storage FlashArray, with deduplication, compression, pattern removal and so on, I feel this somewhat basic topic is worth revisiting now.

To review, there are three main virtual disk types (there are others, namely SESparse, but I am going to stick with the most common ones for this discussion); a quick creation example follows the list:

  • Eagerzeroedthick–This type is fully allocated upon creation. It reserves the entire indicated capacity on the VMFS volume and zeroes the entire encompassed region on the underlying storage prior to allowing writes from a guest OS. This means that it takes longer to provision, as GBs or TBs of zeroes must be written before the virtual disk creation is complete and ready to use. I will refer to this as EZT from now on.
  • Zeroedthick–This type fully reserves the space on the VMFS but does not pre-zero the underlying storage. Zeroing is done on an as-needed basis: when a guest OS writes to a new segment of the virtual disk, the encompassing block is zeroed first and then the new write is committed.
  • Thin–This type neither reserves space on the VMFS nor pre-zeroes the underlying storage. Zeroing is done on an as-needed basis like zeroedthick. The virtual disk’s physical capacity grows in segments defined by the block size of the VMFS, usually 1 MB.
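
For reference, here is a rough sketch of how each type can be created from the ESXi shell with vmkfstools; the sizes and datastore paths below are just placeholders, not anything specific to the FlashArray:

```
# Create a 10 GB virtual disk of each allocation type.
# Paths and sizes are placeholders; adjust for your environment.
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/ezt.vmdk
vmkfstools -c 10G -d zeroedthick      /vmfs/volumes/datastore1/myvm/zt.vmdk
vmkfstools -c 10G -d thin             /vmfs/volumes/datastore1/myvm/thin.vmdk
```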

Continue reading “Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types”

Deeper dive on vSphere UNMAP block count with Pure Storage

I posted a week or so ago about the ESXCLI UNMAP process with vSphere 5.5 on the Pure Storage FlashArray here and concluded that larger block counts are highly beneficial to the UNMAP process. So the recommendation was simply to use a block count sufficiently higher than the default of 200 MB to speed up the UNMAP operation. I received a few questions about a more specific recommendation (and had some myself), so I decided to dive into this a little deeper to see if I could provide some more concrete guidance. In the end a large block count is perfectly fine–if you want to know more details, read on!
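
For context, this is the shape of the command in question; the datastore label and the 60000-block reclaim unit below are illustrative values only, not the specific recommendation (that is what the rest of the post digs into):

```
# Reclaim dead space on a VMFS volume, overriding the default
# reclaim unit of 200 blocks with a larger value.
esxcli storage vmfs unmap -l MyDatastore -n 60000
```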


Continue reading “Deeper dive on vSphere UNMAP block count with Pure Storage”

VMware Dead Space Reclamation (UNMAP) and Pure Storage

One of the main things I have been doing in my first few weeks at Pure Storage (which has been nothing but awesome so far, by the way) is going through all of our VMware best practices and integration points: testing them, seeing how they work, and looking at whether they can be improved. The latest thing I looked into was Dead Space Reclamation (which from here on out I will just refer to as UNMAP) with the Pure Storage FlashArray, specifically on ESXi 5.5. This is a pretty straightforward process, but I did find something interesting that is worth noting.
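
As a quick sanity check before reclaiming space, you can confirm that ESXi sees a device as UNMAP-capable; the naa identifier below is just a truncated placeholder:

```
# "Delete Status: supported" in the output indicates the device accepts UNMAP.
esxcli storage core device vaai status get -d naa.624a9370xxxxxxxxxxxxxxxxxxxx
```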


Continue reading “VMware Dead Space Reclamation (UNMAP) and Pure Storage”

Purity 4.0 Release: New hardware models, replication and more!

Ah, my first official post during my tenure at Pure, and it couldn’t have happened at a better time! Just in time for the Purity 4.0 release, which we announced today. While there are plenty of under-the-covers enhancements, I am going to focus on the two biggest parts of the release: new hardware and replication. There are other features, such as hardware security token locking, but I am not going to go into those in this post. So first let’s talk about the advancement in hardware!


Continue reading “Purity 4.0 Release: New hardware models, replication and more!”

Changing the default VMware Round Robin IO Operation Limit value for Pure Storage FlashArray devices

This is a topic I have posted about in the past, but this time I am going to speak about it with the Pure Storage FlashArray. Anyone familiar with the VMware Native Multipathing Plugin (NMP) probably knows about the Round Robin “IOPS” value, which I will also refer to interchangeably as the IO Operation Limit. This value dictates how often NMP switches paths to a device–after a configured number of I/Os, NMP moves to a different path. The default value is 1,000, but it can be set as low as 1. For the highest performance, Pure recommends changing this setting to 1 for all devices. The tricky thing is that it has to be done for every device on every host, and a simple way to do that isn’t immediately obvious. So here is the procedure.
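
To give a sense of what that looks like at the command line, here is a rough sketch of one way to approach it (the full write-up follows): a SATP claim rule so that newly presented FlashArray devices pick up Round Robin with an IO Operation Limit of 1, plus a loop that applies the setting to devices already present on the host. The naa.624a9370 prefix used to pick out Pure devices is an assumption about how the volumes are identified; adjust as needed.

```
# Rule so future Pure Storage FlashArray devices default to Round Robin
# with an IO Operation Limit (IOPS) of 1.
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V PURE -M FlashArray \
    -P VMW_PSP_RR -O "iops=1"

# Apply the setting to Pure devices that already exist on this host
# (assumes FlashArray volumes show up with the naa.624a9370 prefix).
for dev in $(esxcli storage nmp device list | grep -o 'naa.624a9370[0-9a-f]*' | sort -u); do
    esxcli storage nmp psp roundrobin deviceconfig set -d "$dev" --type=iops --iops=1
done
```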


Continue reading “Changing the default VMware Round Robin IO Operation Limit value for Pure Storage FlashArray devices”