Direct Guest OS UNMAP in vSphere 6.0

This is certainly not my first post about UNMAP and I am pretty sure it will not be my last, but I think this is one of the more interesting updates of late. vSphere 6.0 introduces support for UNMAP operations issued directly by the guest OS from inside a virtual machine. Importantly, this is now supported with a virtual disk instead of requiring a raw device mapping as in the past.
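As a rough sketch (not from the post itself), the host-side piece generally involves the /VMFS3/EnableBlockDelete advanced setting, which allows guest-issued UNMAPs against thin virtual disks to be passed down to the array. The commands below illustrate the idea only and assume an ESXi 6.0 host:

```
# Check and enable the ESXi advanced setting that passes guest-issued
# UNMAPs on thin virtual disks down to the array (assuming ESXi 6.0)
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1
```

In a Windows guest, running fsutil behavior query DisableDeleteNotify should return 0, meaning delete notifications (TRIM/UNMAP) are enabled.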

Continue reading “Direct Guest OS UNMAP in vSphere 6.0”

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

I jumped on a call the other day to talk about iSCSI setup for a new FlashArray, and the main reason for the discussion was coexistence with a pre-existing array from another vendor. They were following my blog post on iSCSI setup and things didn’t quite match up.

To set up multipathing for Software iSCSI (the recommended way), you configure more than one VMkernel port, each with exactly one active host adapter (physical NIC). You then add those VMkernel ports to the iSCSI software adapter, and the iSCSI adapter will use those specific NICs for I/O transmission and load-balance across those ports.
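To make that concrete, here is a minimal sketch of the binding step from the ESXi command line. The adapter and VMkernel port names (vmhba33, vmk1, vmk2) are placeholders for whatever your host actually uses:

```
# Enable the software iSCSI adapter if it is not already enabled
esxcli iscsi software set --enabled=true

# Bind each single-uplink VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the port bindings
esxcli iscsi networkportal list --adapter=vmhba33
```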

Continue reading “Another look at ESXi iSCSI Multipathing (or a Lack Thereof)”

Setting up iSCSI with VMware ESXi and the FlashArray

I’ve been with Pure Storage for about ten months (time flies!) and a noticeable trend I’ve seen in the past six or so months is the number of customers deciding to use iSCSI as their storage protocol of choice. This is increasingly common in greenfield environments where they don’t want to invest in a Fibre Channel infrastructure. I’ve helped quite a few set this up in VMware environments, so I thought I would put a post together on configuring ESXi software iSCSI with the Pure Storage FlashArray (I have yet to see a hardware iSCSI setup).

Before I begin, I highly recommend reading the following two documents from VMware:

http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

They are not long and provide very good insight into the how/what/why of iSCSI on VMware. Some of the images are a bit old, but the underlying concepts have not changed. Continue reading “Setting up iSCSI with VMware ESXi and the FlashArray”

ESXi IO Operations Limit Parameter and IO Balance

Quick post here. I am working on updating some documentation and I wanted to add a bit more color to a section on changing the IO Operations limit for ESXi NMP Round Robin devices. The Pure Storage recommendation is to change this value from the default of 1,000 down to 1, so that ESXi switches logical paths after every single I/O instead of after every 1,000. There are some performance benefits to this, and some evidence for improved failover time (in the case of a path failure) with this setting. I am not going to get into the veracity of these benefits right now. What I wanted to share here is that there is no doubt changing this to 1 makes a big difference to I/O balance on the array itself. Continue reading “ESXi IO Operations Limit Parameter and IO Balance”
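For reference, here is what that change looks like per device with esxcli; the naa identifier below is just a placeholder for an actual FlashArray device:

```
# Switch the Round Robin PSP to rotate paths after every 1 I/O
# instead of the default 1,000 (device identifier is a placeholder)
esxcli storage nmp psp roundrobin deviceconfig set \
    --device=naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx \
    --type=iops --iops=1

# Confirm the new setting
esxcli storage nmp psp roundrobin deviceconfig get \
    --device=naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx
```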

Reclaiming in-guest capacity with VMware and Pure Storage

UPDATE: In-guest UNMAP is now supported in a VM, and tools like SDelete are no longer required. Please refer to these posts:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

Direct Guest OS UNMAP in vSphere 6.0

Reclaiming “dirty” or “dead” space is a topic that comes across my desk quite often these days. Since the FlashArray is a data reduction array, it is especially important that space is not wasted on the array; wasted space throws off the economics of the solution. Therefore UNMAP is an important VAAI feature to leverage in any AFA environment, and supporting UNMAP is definitely table stakes for AFAs.

Note: I am going to use the terms “dead”, “dirty” and “stranded” interchangeably to refer to space that needs to be reclaimed. So anyways…

Unfortunately UNMAP in its current form does not satisfy all of the reclamation use cases. UNMAP will only reclaim space on the array when capacity is cleared from the VMFS volume itself, such as when a VM (or virtual disk) is deleted or migrated elsewhere. It cannot reclaim space when data is “deleted” by the guest OS inside a virtual machine that uses virtual disks: VMware does not know that capacity has been cleared, and neither does the array. So until the virtual disk is deleted or moved, that capacity cannot be reclaimed with UNMAP. To be clear, UNMAP with vmkfstools (in ESXi 5.0/5.1) or esxcli (in ESXi 5.5) does not allow you to reclaim space that remains stranded inside of virtual disks.
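To make the distinction concrete, these are the VMFS-level reclamation commands being referred to; they only return space that has been freed at the datastore level, not space stranded inside virtual disks (the datastore name is a placeholder):

```
# ESXi 5.0/5.1: run vmkfstools from within the datastore; the number is the
# percentage of free space to reclaim
cd /vmfs/volumes/MyDatastore
vmkfstools -y 60

# ESXi 5.5: issue UNMAP through esxcli by volume label
esxcli storage vmfs unmap --volume-label=MyDatastore
```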

Continue reading “Reclaiming in-guest capacity with VMware and Pure Storage”

Deeper dive on vSphere UNMAP block count with Pure Storage

I posted a week or so ago about the ESXCLI UNMAP process with vSphere 5.5 on the Pure Storage FlashArray here and came to the conclusion that larger block counts are highly beneficial to the UNMAP process. So the recommendation was simply to use a block count sufficiently higher than the default of 200 MB to speed up the UNMAP operation. I received a few questions about a more specific recommendation (and had some myself), so I decided to dive into this a little deeper to see if I could provide some guidance that was a little more concrete. In the end a large block count is perfectly fine. If you want to know more details, read on!
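As a quick illustration of what “block count” means here, the esxcli UNMAP command accepts a reclaim unit expressed in VMFS blocks (1 MB each on VMFS-5), so a larger value simply reclaims more space per iteration. The datastore name and the larger count below are only examples:

```
# Default behavior: 200 blocks (200 MB with 1 MB VMFS blocks) per iteration
esxcli storage vmfs unmap --volume-label=MyDatastore

# Same operation with a much larger block count per iteration
esxcli storage vmfs unmap --volume-label=MyDatastore --reclaim-unit=60000
```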


Continue reading “Deeper dive on vSphere UNMAP block count with Pure Storage”