Virtual Volumes change quite a lot of things. One of them is how your storage volumes are actually connected to the host. This change is necessary for two reasons:
- Scale. ESXi limits how many SCSI devices a host can see at once: 256 in 6.0 and earlier, 512 in 6.5. That is still not enough when every virtual disk is its own volume.
- Performance. Virtual Volumes are provisioned, de-provisioned, moved, and accessed constantly. If every one of these operations required a SCSI rescan, we would see rescan storms unlike anything this world has ever witnessed.
So VMware changed how this is done. Continue reading Virtual Volumes: VVol Bindings Explained
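As a quick illustration of where this ends up, here is a hedged sketch, assuming an ESXi 6.x host with VVols already configured: instead of a SCSI device per virtual disk, the host only sees a small number of protocol endpoints, and individual VVols are bound through them on demand.

```
# List the protocol endpoints this host can see. These are the only
# "devices" the host needs to discover; individual VVols are bound
# and unbound through them without SCSI rescans.
esxcli storage vvol protocolendpoint list
```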
Virtual Volumes provide a great many benefits, some large, some small. Depending on the VM, recovering a deleted one could be either of those.
With traditional VMFS, once you have selected “delete from disk,” restoring that VM could be quite a process: either restoring from backup or hoping you had a snapshot of the VMFS volume on the array. Either way, you are probably going to incur some data loss, as the last backup or snapshot is unlikely to be from the moment right before the deletion.
Let me be VERY clear here. Regardless of the rest of this post, I am not saying that once you move to VVols you no longer need backup! You absolutely still do. VVols just give you a nice way to do an immediate recovery of the latest point in time without losing anything, assuming your array supports it.
“Wait, did you say delete VM “AD” or VM “80”?”
“Um… definitely not AD that’s our active directory…”
Continue reading Recovering a Deleted Virtual Machine with VVols
With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don't have to worry about running UNMAP manually anymore. But since it is asynchronous (and I mean like 12–24 hours later asynchronous), you lose the instant gratification of reclamation.
So you may find yourself wondering: did it actually reclaim anything?
Besides looking at the array and seeing the space returned, how can I tell from ESXi whether my space was reclaimed?
Continue reading Monitoring Automatic VMFS-6 UNMAP in ESXi
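A reasonable first step, as a rough sketch (assuming an ESXi 6.5 host; "Datastore1" is just a placeholder datastore name): confirm that automatic reclamation is actually enabled on the volume before you go hunting for reclaimed space.

```
# Check the automatic space reclamation settings for a VMFS-6 datastore.
# Reclaim Priority of "low" is the default; "none" means automatic UNMAP
# has been disabled for that volume.
esxcli storage vmfs reclaim config get --volume-label=Datastore1
```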
EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature that was introduced in 5.0 and eventually pulled in 5.0 U1.
The description of the setting back in 5.0 was “Enable VMFS block delete”. The setting was then hidden and made defunct (it did nothing whether you turned it on or off) until ESXi 6.0, when the description changed to “Enable VMFS block delete when UNMAP is issued from guest OS”. Continue reading In-Guest UNMAP, EnableBlockDelete and VMFS-6
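For reference, a minimal sketch of checking and enabling the setting from the ESXi shell (assuming ESXi 6.x; the same setting appears as VMFS3.EnableBlockDelete in the vSphere Web Client advanced system settings):

```
# Show the current value of EnableBlockDelete (0 = off, 1 = on).
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete

# Enable it so UNMAPs issued by a guest OS against a thin virtual disk
# are passed down to the underlying VMFS device.
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1
```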
As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017), and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one I want to call out specifically and immediately.
Round Robin and an IO Operations Limit of 1 are now the default in ESXi for the Pure Storage FlashArray! This means you no longer need to create a custom SATP rule when provisioning a new host or adding your first FlashArray to an existing environment. Continue reading NMP Multipathing rules for the FlashArray are now default
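For context, this is the sort of rule you previously had to create by hand on each host. A sketch of the commonly documented FlashArray claim rule (the description string is just an example; verify against Pure's current best practices for your environment):

```
# Claim Pure Storage FlashArray devices with Round Robin and an
# IO Operations Limit of 1. With 6.5 U1 an equivalent rule ships in
# ESXi, so adding it manually is no longer required.
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" \
  -P "VMW_PSP_RR" -O "iops=1" -e "FlashArray SATP rule"
```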
Another how-to post on iSCSI. Essentially another “for the good of the order” post here. iSCSI is becoming increasingly common, so I figured I would put a post together that covers the ins and outs of port binding with standard vSwitches.
For information on distributed switches (which I highly recommend using over standard vSwitches) check out this post here:
Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client
So on to Standard vSwitches. Continue reading Setting up iSCSI Port Binding with Standard vSwitches in the vSphere Web Client
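If you prefer the command line, the binding step itself boils down to something like this (a sketch; vmhba64, vmk1, and vmk2 are placeholders for your software iSCSI adapter and VMkernel ports, and each VMkernel port must live on a port group with only one active uplink):

```
# Bind two VMkernel ports to the software iSCSI adapter.
esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2

# Verify the bindings and their compliance status.
esxcli iscsi networkportal list -A vmhba64
```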
Sorry, the title is a bit of a mouthful.
I have written some posts on iSCSI in the past, around setup:
Setting up iSCSI with VMware ESXi and the FlashArray
Configuring iSCSI CHAP in VMware with the FlashArray
Another look at ESXi iSCSI Multipathing (or a Lack Thereof)
These have covered various parts of the setup, but primarily around standard vSwitches, which, at least in larger environments, is generally not the norm. Distributed vSwitches are. I have seen a few posts on doing this with the old C# client, but not the vSphere Web Client.
So with the number of questions I have received on it, it is probably worth putting pen to paper. Nothing profound here, basically a walkthrough.
This of course assumes you are doing port binding. If you are not, then only the standard software iSCSI setup (as described in the first post above) is needed.
Continue reading Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client
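For completeness, the non-port-binding path mentioned above is essentially just the following (a sketch; the discovery address 192.168.1.10 and adapter name vmhba64 are placeholders for your environment):

```
# Enable the software iSCSI adapter on the host.
esxcli iscsi software set --enabled=true

# Add the array's iSCSI discovery address as a dynamic (send targets) target.
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.10:3260

# Rescan so the new targets and devices are picked up.
esxcli storage core adapter rescan -A vmhba64
```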
Quick post. I updated the PowerShell GUI tool I maintain for VMware and FlashArray management and added some new features, this time mainly around protection group management. Download it from my GitHub page here:
Continue reading VMware and FlashArray PowerShell GUI tool v2.7
I posted a few months back about ESXi queue depth limits and how they affect performance. Just recently, Pure Storage announced our upcoming support for vSphere Virtual Volumes. So this raises the question: what changes with VVols when it comes to queuing? In a certain view, a lot. But conceptually, actually very little. Let's dig into this a bit more.
Continue reading Queue Depth Limits and VVol Protocol Endpoints
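As a teaser, here is roughly where the relevant knobs live (a sketch; naa.xxxx is a placeholder device identifier, and I am assuming the Scsi.ScsiVVolPESNRO advanced setting, which governs the default outstanding I/O limit for protocol endpoints, exists on your ESXi build):

```
# Per-device limit on a traditional LUN: look for "No of outstanding
# IOs with competing worlds" (DSNRO) in the output.
esxcli storage core device list -d naa.xxxx

# Default outstanding I/O limit applied to VVol protocol endpoints.
esxcli system settings advanced list -o /Scsi/ScsiVVolPESNRO
```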
This is a blog post I have been waiting to write for quite some time. I cannot even remember exactly how long ago I saw Satyam Vaghani present on this concept at VMworld, back when what is now called a protocol endpoint (more on that later) was called an I/O Demultiplexer. A mouthful for sure. Finally, it's time! With pleasure, I'd like to introduce VVols on the FlashArray!
Continue reading Introducing vSphere Virtual Volumes on the FlashArray