All posts by codyhosterman

Comparing VVols to VMDKs and RDMs

I have been talking a lot about Virtual Volumes (VVols) with customers lately, and when I describe what they are, a frequent response is “oh, so basically RDMs then?”…

…ugh sorry I just threw up in my mouth a bit…

The answer to that is an unequivocal “no” of course, but the question deserves a thorough response.

So first let’s look at how they are the same, and then let’s look at how they differ: not just how VVols compare to RDMs, but also how they compare to VMDKs as you traditionally know them.

Continue reading Comparing VVols to VMDKs and RDMs

My Upcoming VMUG Webinars

Hey all–in case you didn’t get a chance to go to VMworld and would like to see my sessions live (well, online live) and ask questions, I am repeating them, more or less, via the VMUG webinars.

First one is on October 9th:

This is a deep dive on the love story that is ESXi and UNMAP. How it used to work, how it works now, how it has changed, and why. Plus how VVols changes it all. This is a session I submitted to VMworld, but it didn’t make it. Register here:

https://event.on24.com/wcc/r/1498012/83A172D29B748E7581279708C1640449

The second one is a repeat from VMworld: my session on best practices for vSphere for All Flash Arrays. Continue reading My Upcoming VMUG Webinars

Virtual Volumes: VVol Bindings Explained

Virtual Volumes change quite a lot of things. One of these is how your storage volumes are actually connected. This change is necessary for two reasons:

  • Scale. Traditional ESXi SCSI limits how many SCSI devices can be seen at once: 256 in 6.0 and earlier, and 512 in 6.5. This is still not enough when every virtual disk is its own volume (a rough way to count what a host sees today is sketched after this list).
  • Performance. Virtual Volumes are provisioned, de-provisioned, moved, and accessed constantly. If every one of these operations required a SCSI rescan, we would see rescan storms unlike anything this world has ever witnessed.
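
To put a number on the scale problem, here is a rough sketch (not from the post) of counting the SCSI devices a host currently sees, the figure that runs into that 256/512 limit. It assumes it is run from the ESXi shell, where esxcli is available, and simply counts the device blocks that “esxcli storage core device list” prints.

# Minimal sketch: count the SCSI devices an ESXi host can see, the number
# that runs into the 256 (6.0) / 512 (6.5) per-host limit mentioned above.
# Assumes it runs in the ESXi shell where esxcli is on the PATH.
import subprocess

def count_scsi_devices():
    # "esxcli storage core device list" prints one block per SCSI device;
    # each block starts with the device name (e.g. naa.xxxx) at column 0,
    # with the device properties indented underneath it.
    output = subprocess.run(
        ["esxcli", "storage", "core", "device", "list"],
        capture_output=True, text=True, check=True
    ).stdout
    devices = [line for line in output.splitlines()
               if line and not line.startswith(" ")]
    return len(devices)

if __name__ == "__main__":
    seen = count_scsi_devices()
    print(f"SCSI devices visible to this host: {seen}")
    print("If every virtual disk were its own SCSI device, this count "
          "would exhaust the per-host limit very quickly.")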

So VMware changed how this is done. Continue reading Virtual Volumes: VVol Bindings Explained

VMworld 2017 Session Wrap-Up

I’m back from VMworld 2017 US and then VMworld Europe and a nice vacation. Time to get back to work and of course some (well a lot hopefully) blogging.

Had a great time catching up with friends at VMworld and talking about the new stuff that both Pure Storage and VMware have coming. I had quite a few sessions this year, which VMware was kind enough to post online–many of them publicly (no login required).

Continue reading VMworld 2017 Session Wrap-Up

Recovering a Deleted Virtual Machine with VVols

Virtual Volumes provide a great many benefits, some large, some small. Depending on the VM, recovering a deleted VM could be either of those.

With traditional VMFS, once you have selected “delete from disk”, restoring that VM could be quite a process: either restoring from backup, or hoping you had a snapshot of the VMFS on the array. Either way, you are probably going to incur data loss, as the last backup or snapshot is unlikely to be from the moment right before the deletion.

Let me be VERY clear here. Regardless of the rest of this post, I am not saying that once you move to VVols you do not need backup! You absolutely still do. VVols just give you a nice way to do an immediate recovery to the latest point-in-time without losing anything, assuming your array supports it.
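
As a very rough illustration of what that can look like on a FlashArray (and not the exact procedure from the post): deleting a VVol VM destroys its config and data volumes on the array, and on an array that keeps destroyed volumes around for a while before eradicating them, you can simply recover them. The sketch below uses the purestorage Python REST client; the array address, API token, and especially the “vvol-MyDeletedVM” name filter are placeholders and assumptions on my part, so check your own array’s VVol volume naming first.

# Rough sketch of the idea only: recover the destroyed volumes that belonged
# to a deleted VVol VM on a Pure FlashArray, using the purestorage Python
# REST client. Array address, API token, and the volume-name prefix below
# are all placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="xxxx-xxxx")

# List destroyed (pending-eradication) volumes and recover the ones that
# look like they belonged to the deleted VM. The prefix is a made-up
# convention -- verify how your VVols are actually named on the array.
for vol in array.list_volumes(pending_only=True):
    if vol["name"].startswith("vvol-MyDeletedVM"):
        array.recover_volume(vol["name"])
        print(f"Recovered {vol['name']}")

array.invalidate_cookie()  # close the REST session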

“Wait, did you say delete VM “AD” or VM “80”?”

“Um… definitely not AD that’s our active directory…”

Continue reading Recovering a Deleted Virtual Machine with VVols

Monitoring Automatic VMFS-6 UNMAP in ESXi

With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don’t have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous), you lose the instant gratification of reclamation.

So you do find yourself wondering, did it actually reclaim anything?

Besides looking at the array and seeing space reclaimed, how can I see from ESXi if my space was reclaimed?
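
In the meantime, here is a minimal sketch of poking at this from the ESXi shell. The reclaim-config command is standard esxcli in 6.5; the vsish path for the per-volume auto-unmap counters is an assumption on my part, so verify it on your build before trusting it.

# Minimal sketch of checking automatic UNMAP from the ESXi shell. The esxcli
# reclaim-config command is standard in 6.5; the vsish path for per-volume
# auto-unmap counters is an assumption -- verify it on your build.
import subprocess

DATASTORE = "MyVMFS6Datastore"  # placeholder datastore name

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# 1. Confirm automatic reclamation is actually enabled on the datastore.
print(run(["esxcli", "storage", "vmfs", "reclaim", "config", "get",
           "-l", DATASTORE]))

# 2. (Assumed path) peek at the in-kernel auto-unmap counters via vsish.
print(run(["vsish", "-e", "get",
           f"/vmkModules/vmfs3/auto_unmap/volumes/{DATASTORE}/properties"]))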

Continue reading Monitoring Automatic VMFS-6 UNMAP in ESXi

In-Guest UNMAP, EnableBlockDelete and VMFS-6

EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature that was introduced in 5.0 and then eventually canned in 5.0 U1.

The description of the setting back in 5.0 was “Enable VMFS block delete”. The setting was then hidden and made defunct (it did nothing whether you turned it on or off) until ESXi 6.0. The description then changed to “Enable VMFS block delete when UNMAP is issued from guest OS”. Continue reading In-Guest UNMAP, EnableBlockDelete and VMFS-6
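
For reference, a quick sketch of checking and flipping the setting from the ESXi shell with esxcli (the same thing can be done per host through the advanced settings UI); whether you actually want it on is, of course, what the post is about.

# Quick sketch: read and (optionally) enable the EnableBlockDelete advanced
# setting from the ESXi shell. Assumes esxcli is available locally.
import subprocess

OPTION = "/VMFS3/EnableBlockDelete"

def show_setting():
    out = subprocess.run(
        ["esxcli", "system", "settings", "advanced", "list", "-o", OPTION],
        capture_output=True, text=True, check=True).stdout
    print(out)

def enable_setting():
    # Set the integer value to 1 to turn the setting on (0 turns it off).
    subprocess.run(
        ["esxcli", "system", "settings", "advanced", "set",
         "-o", OPTION, "-i", "1"], check=True)

if __name__ == "__main__":
    show_setting()
    enable_setting()
    show_setting()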

In-Guest UNMAP and VMware Snapshots

Here we go with another in-guest UNMAP post. See other posts here:

https://www.codyhosterman.com/pure-storage-vmware-overview/flasharray-and-vmware-best-practices/space-reclamationunmap/

I was asked the following question the other day: “does in-guest UNMAP work when snapshots exist?” To save you a long read: it does not. But if you are interested in the details and my testing, read on.

My initial answer was “no,” but then I thought about some of the changes in VMFS-6 and reconsidered. If you refer to the vSphere 6.5 documentation, you can see this change for VMFS-6:

“SEsparse is a default format for all delta disks on the VMFS6 datastores. On VMFS5, SEsparse is used for virtual disks of the size 2 TB and larger” Continue reading In-Guest UNMAP and VMware Snapshots
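
If you want to reproduce this kind of test yourself, a minimal sketch from a Linux guest is below: create some dead space, then ask the filesystem to trim it with fstrim, which reports how much it tried to reclaim. Whether those UNMAPs actually make it past a VM snapshot and down to the array is the question the post digs into. The mount point and file size are placeholders.

# Sketch of exercising in-guest UNMAP from a Linux guest: write a large
# file, delete it to create dead space, then issue fstrim and see how many
# bytes the guest reports as trimmed. Requires root; /mnt/testfs is a
# placeholder mount point on a thin virtual disk.
import os
import subprocess

MOUNT = "/mnt/testfs"
TESTFILE = os.path.join(MOUNT, "unmap-test.bin")

# Write ~2 GiB of data, then delete it to create dead space in the guest FS.
chunk = os.urandom(1024 * 1024)          # 1 MiB of random data
with open(TESTFILE, "wb") as f:
    for _ in range(2048):                # ~2 GiB total
        f.write(chunk)
os.remove(TESTFILE)

# fstrim -v reports how many bytes the filesystem asked to trim/unmap.
result = subprocess.run(["fstrim", "-v", MOUNT],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())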

NMP Multipathing rules for the FlashArray are now default

As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017), and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one that I wanted to specifically and immediately call out now.

Round Robin and an IO Operations Limit of 1 are now the default in ESXi for the Pure Storage FlashArray! This means that you no longer need to create a custom SATP rule when provisioning a new host or adding your first FlashArray to an existing environment. Continue reading NMP Multipathing rules for the FlashArray are now default
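
For anyone still on an older build, here is a sketch of the custom SATP rule you previously had to add by hand, wrapped in a bit of Python for the ESXi shell; on 6.5 Update 1 and later it is unnecessary.

# Sketch of the custom SATP rule that was needed before 6.5 Update 1 made
# Round Robin / IOPS=1 the default for the FlashArray. Run once per host,
# from the ESXi shell, only on builds older than 6.5 U1.
import subprocess

rule = [
    "esxcli", "storage", "nmp", "satp", "rule", "add",
    "-s", "VMW_SATP_ALUA",              # SATP used for the FlashArray
    "-V", "PURE", "-M", "FlashArray",   # match on vendor/model strings
    "-P", "VMW_PSP_RR",                 # path selection policy: Round Robin
    "-O", "iops=1",                     # IO Operations Limit of 1
    "-e", "Pure Storage FlashArray SATP rule",
]
subprocess.run(rule, check=True)

# Verify the rule landed.
print(subprocess.run(["esxcli", "storage", "nmp", "satp", "rule", "list"],
                     capture_output=True, text=True).stdout)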

Setting up iSCSI Port Binding with Standard vSwitches in the vSphere Web Client

Another how-to post on iSCSI. Essentially another “for the good of the order” post. iSCSI is becoming increasingly common, so I figured I would put together a post that covers the ins and outs of port binding with standard vSwitches.

For information on distributed switches (which I highly recommend using over standard vSwitches) check out this post here:

Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

So on to Standard vSwitches. Continue reading Setting up iSCSI Port Binding with Standard vSwitches in the vSphere Web Client
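
For the CLI-inclined, here is a sketch of the actual binding step done with esxcli instead of the Web Client; the vmhba and vmk names are placeholders, so list your adapters and VMkernel ports first and substitute your own.

# Sketch of the final port-binding step done with esxcli instead of the Web
# Client: bind VMkernel ports to the software iSCSI adapter. The adapter
# (vmhba64) and vmknic names are placeholders -- check yours with the list
# commands first.
import subprocess

ISCSI_ADAPTER = "vmhba64"           # software iSCSI adapter name (varies)
VMKERNEL_PORTS = ["vmk1", "vmk2"]   # one vmknic per iSCSI uplink

def esxcli(*args):
    return subprocess.run(["esxcli", *args],
                          capture_output=True, text=True, check=True).stdout

# See what adapters and existing bindings are already there.
print(esxcli("iscsi", "adapter", "list"))
print(esxcli("iscsi", "networkportal", "list"))

# Bind each compliant VMkernel port to the software iSCSI adapter.
for vmk in VMKERNEL_PORTS:
    esxcli("iscsi", "networkportal", "add", "-A", ISCSI_ADAPTER, "-n", vmk)
    print(f"Bound {vmk} to {ISCSI_ADAPTER}")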