Monitoring Automatic VMFS-6 UNMAP in ESXi

With VMFS-6, space reclamation is now an automatic but asynchronous process. This is great because, well, you don’t have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous), you lose the instant gratification of reclamation.

So you find yourself wondering: did it actually reclaim anything?

Besides looking at the array and seeing the space come back, how can I tell from ESXi itself whether my space was reclaimed?
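A couple of places to look (a minimal sketch from an ESXi 6.5 shell; the datastore name is just an example, and the vsish node is undocumented, so treat it as an assumption that may vary by build):

    # Confirm automatic reclamation is enabled for the datastore
    esxcli storage vmfs reclaim config get --volume-label=MyDatastore

    # Peek at the per-volume auto-UNMAP state that vsish exposes (undocumented)
    vsish -e ls /vmkModules/vmfs3/auto_unmap/volumes/

You can also watch UNMAPs in flight with esxtop: press u for the disk device view, then f to add the VAAI stats fields, and watch the DELETE counter climb as reclamation runs.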


In-Guest UNMAP, EnableBlockDelete and VMFS-6

EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature that was introduced in 5.0 and then eventually canned in 5.0 U1.

The description of the setting back in 5.0 was “Enable VMFS block delete”. The setting was then hidden and made defunct (it did nothing whether you turned it off or on) until ESXi 6.0, when the description changed to “Enable VMFS block delete when UNMAP is issued from guest OS”.
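For reference, you can check and flip the setting from the ESXi shell; a quick sketch:

    # Show the current value (Int Value: 0 = disabled, 1 = enabled)
    esxcli system settings advanced list --option /VMFS3/EnableBlockDelete

    # Turn it on
    esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete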

In-Guest UNMAP and VMware Snapshots

Here we go with another in-guest UNMAP post. See other posts here:

https://www.codyhosterman.com/pure-storage-vmware-overview/flasharray-and-vmware-best-practices/space-reclamationunmap/

I was asked the following question the other day: “Does in-guest UNMAP work when snapshots exist?” To save you a long read: it does not. But if you are interested in the details and my testing, read on.

My initial answer was “no,” but then I thought about some changes in VMFS-6 and reconsidered. If you refer to the vSphere 6.5 documentation, you can see this change for VMFS-6:

“SEsparse is a default format for all delta disks on the VMFS6 datastores. On VMFS5, SEsparse is used for virtual disks of the size 2 TB and larger.”
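If you want to reproduce the basic test yourself, it looks something like this (a rough sketch; the mount point, datastore, and VM names are made up):

    # In the guest (thin virtual disk on VMFS): delete some files,
    # then ask the filesystem to issue UNMAPs for the freed blocks
    sudo fstrim /mnt/data

    # On the ESXi host: compare how much space the VMDK actually
    # consumes before and after (du shows allocated, not logical, size)
    du -h /vmfs/volumes/MyDatastore/MyVM/MyVM-flat.vmdk

With a snapshot in place, the guest’s writes land in an SEsparse delta instead of the flat file, which is exactly where the behavior gets interesting.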

NMP Multipathing rules for the FlashArray are now default

As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017), and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one I wanted to call out specifically and immediately.

Round Robin with an IO Operations Limit of 1 is now the default in ESXi for the Pure Storage FlashArray! This means you no longer need to create a custom SATP rule when provisioning a new host or adding your first FlashArray into an existing environment.
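For the curious, this is the rule we have historically had customers add by hand; on 6.5 U1 it ships in the box, so the commands below are shown for reference only:

    # The old manual SATP rule for the FlashArray (no longer needed on 6.5 U1+)
    esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

    # Verify what rule the host is using for Pure devices
    esxcli storage nmp satp rule list | grep -i PURE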

Setting up iSCSI Port Binding with Standard vSwitches in the vSphere Web Client

Another how-to post on iSCSI. Essentially another “for the good of the order” post. iSCSI is becoming increasingly common, so I figured I would put together a post that covers the ins and outs of port binding with standard vSwitches.

For information on distributed switches (which I highly recommend using over standard vSwitches), check out this post:

Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

So, on to standard vSwitches.
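If you would rather script it than click through the Web Client, the binding itself boils down to a couple of esxcli calls (a sketch; the vmhba and vmk numbers are examples and will differ in your environment):

    # Bind each iSCSI VMkernel port (one active uplink each) to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

    # Confirm the bindings took
    esxcli iscsi networkportal list --adapter=vmhba64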

Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

Sorry, the title is a bit of a mouthful.

I have written some posts on iSCSI setup in the past:

Setting up iSCSI with VMware ESXi and the FlashArray

Configuring iSCSI CHAP in VMware with the FlashArray

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

Those covered various pieces, but primarily setup with standard vSwitches, which, at least in larger environments, is generally not the norm; distributed vSwitches are. I have seen a few posts on doing this with the old C# client, but not the vSphere Web Client. Reference those posts here:

http://everything-virtual.com/installing-the-home-lab/installing-the-home-lab-creating-and-configuring-an-iscsi-distributed-switch-for-vmware-multipathing/

https://www.yelof.com/2011/07/13/dr-iscsi-or-how-i-learned-to-stop-worrying-and-love-virtual-distributed-switches-on-vsphere-v5/

So, with the number of questions I have received on it, it is probably worth putting pen to paper. Nothing profound here; it is basically a walkthrough.

This of course assumes you are using port binding. If you are not, then only the standard software iSCSI setup (as described in the first post above) is needed.
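For reference, the CLI equivalent of the high-level flow looks roughly like this (a sketch; the adapter name and target IP are examples, and the vmk binding step is the same as in the standard-vSwitch post):

    # Enable the software iSCSI adapter if it is not already
    esxcli iscsi software set --enabled=true

    # After binding the dvSwitch-backed VMkernel ports, point dynamic
    # discovery at one of the FlashArray iSCSI ports and rescan
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.10:3260
    esxcli storage core adapter rescan --adapter=vmhba64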


Join the Pure Storage Code() Slack Team

Hey! We just launched our Pure Storage Code() Slack team at code-purestorage.slack.com, along with code.purestorage.com, our web repository pointing to our various GitHub pages.

Join to get help, or to contribute help, around PowerShell, Python, vRealize, REST, etc. We are just getting started with all of this, so hop aboard and build this community with us!

To register for the Slack team, please use this invite link:

https://codeinvite.purestorage.com/

Or check out my co-worker Barkz; he has more info in his post:

http://www.purepowershellguy.com/?p=13983