Tag Archives: esxi

Upgrading ESXi environment with PowerCLI

A new ESXi 6.5 patch came out today:

https://kb.vmware.com/s/article/2151104

And I wanted to upgrade my whole lab environment to it, but I haven't set up Auto Deploy or Update Manager yet (I plan to, which will make all of this much easier to manage). So I wrote a quick and dirty PowerCLI script that updates each host to the latest patch and, if the host doesn't have any VMs on it, puts it into maintenance mode and reboots it. I will reboot the others as needed.
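
A rough sketch of the approach (the vCenter name and patch bundle path are placeholders, and Install-VMHostPatch is just one way to push a patch without Update Manager):

```powershell
# Connect to vCenter (placeholder server name)
Connect-VIServer -Server vcenter.example.com

foreach ($esx in Get-VMHost) {
    # Apply the patch bundle; the path is a placeholder for wherever
    # the extracted patch metadata lives on a datastore
    Install-VMHostPatch -VMHost $esx -HostPath "/vmfs/volumes/datastore1/ESXi650-update/metadata.zip"

    # Only hosts with no powered-on VMs get maintenance mode and a reboot
    $running = $esx | Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }
    if ($running.Count -eq 0) {
        Set-VMHost -VMHost $esx -State Maintenance -Confirm:$false | Out-Null
        Restart-VMHost -VMHost $esx -Confirm:$false | Out-Null
    }
}
```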

It is so short it is not really even worth throwing on GitHub, but I might make it cleaner and smarter at some point and put it there. Continue reading Upgrading ESXi environment with PowerCLI

Moving from an RDM to a VVol

Migrating VMDKs or virtual mode RDMs to VVols is easy: Storage vMotion. No downtime, no pre-creating of volumes. Simple and fast. But physical mode RDMs are a bit different.
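
For VMDKs and virtual mode RDMs, the whole move boils down to one PowerCLI call; a sketch, with placeholder VM and datastore names:

```powershell
# Storage vMotion to a VVol datastore. Specifying a disk format is what
# converts a virtual-mode RDM to a regular (VVol) disk during the move;
# plain VMDKs convert either way.
Move-VM -VM (Get-VM "SQL-VM-01") `
        -Datastore (Get-Datastore "FlashArray-VVol-DS") `
        -DiskStorageFormat Thin
```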

As we all begrudgingly admit, there are still more than a few Raw Device Mappings out there in VMware environments. There are two primary use cases:

  • Microsoft Clustering. Virtual disks can only be used for Failover Clustering if all of the VMs are on the same ESXi host, which feels a bit like it defeats the purpose. So most opt for RDMs so they can split the VMs across hosts.
  • Physical to virtual. Sharing copies of data between physical servers and virtual machines (or some other hypervisor) is the most common reason I see these days, mostly around database dev/test scenarios. Encapsulating the data in a VMDK can keep it from being easily shared, so RDMs provide a workaround.

Continue reading Moving from an RDM to a VVol

Do thin VVols perform better than thin VMDKs?

Yes. Any questions?

Ahem, I suppose I should prove it out. The real answer is, well, maybe. It depends on the array.

Debates have raged for quite some time around the performance of virtual disk types, and while the difference has diminished drastically over the years, eagerzeroedthick has always outperformed thin. Consequently, many users have opted not to use thin virtual disks.

So first off, why the difference?

Continue reading Do thin VVols perform better than thin VMDKs?

Monitoring Automatic VMFS-6 UNMAP in ESXi

With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don't have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous), you lose the instant gratification of reclamation.

So you do find yourself wondering: did it actually reclaim anything?

Besides looking at the array and seeing the space returned, how can I tell from ESXi whether my space was reclaimed?
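
As a starting point, you can at least confirm the volume is configured to reclaim automatically; a quick PowerCLI sketch (host name and datastore label are placeholders):

```powershell
# Check the automatic UNMAP (space reclamation) settings on a VMFS-6 volume
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2
$esxcli.storage.vmfs.reclaim.config.get.Invoke(@{volumelabel = "MyVMFS6-DS"})
# Returns the reclaim granularity and priority; a priority of "none"
# would mean automatic reclamation is turned off for the volume
```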

Continue reading Monitoring Automatic VMFS-6 UNMAP in ESXi

In-Guest UNMAP, EnableBlockDelete and VMFS-6

EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature that was introduced in 5.0 and then eventually pulled in 5.0 U1.
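
For reference, checking and flipping the setting from PowerCLI looks like this (host name is a placeholder):

```powershell
# Check the current value of EnableBlockDelete on a host
$esx = Get-VMHost "esx01.example.com"
Get-AdvancedSetting -Entity $esx -Name "VMFS3.EnableBlockDelete"

# Turn it on (1 = enabled, 0 = disabled)
Get-AdvancedSetting -Entity $esx -Name "VMFS3.EnableBlockDelete" |
    Set-AdvancedSetting -Value 1 -Confirm:$false
```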

The description of the setting back in 5.0 was "Enable VMFS block delete". The setting was then hidden and made defunct (it did nothing whether you turned it on or off) until ESXi 6.0. The description then changed to "Enable VMFS block delete when UNMAP is issued from guest OS". Continue reading In-Guest UNMAP, EnableBlockDelete and VMFS-6

NMP Multipathing rules for the FlashArray are now default

As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017), and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one I want to call out specifically and immediately.

Round Robin with an I/O Operations Limit of 1 is now the default in ESXi for the Pure Storage FlashArray! This means you no longer need to create a custom SATP rule when provisioning a new host or adding your first FlashArray into an existing environment.
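
For context, this is the sort of rule that previously had to be created on every host, sketched here with Get-EsxCli (host name is a placeholder; argument names follow the esxcli long-option names):

```powershell
# The custom SATP rule that was required before vSphere 6.5 Update 1
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2
$ruleArgs = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
$ruleArgs.satp        = "VMW_SATP_ALUA"
$ruleArgs.vendor      = "PURE"
$ruleArgs.model       = "FlashArray"
$ruleArgs.psp         = "VMW_PSP_RR"
$ruleArgs.pspoption   = "iops=1"
$ruleArgs.description = "FlashArray SATP rule"
$esxcli.storage.nmp.satp.rule.add.Invoke($ruleArgs)
```

Continue reading NMP Multipathing rules for the FlashArray are now default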

Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

Sorry, the title is a bit of a mouthful.

I have written some posts on iSCSI in the past, around setup:

Setting up iSCSI with VMware ESXi and the FlashArray

Configuring iSCSI CHAP in VMware with the FlashArray

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

Those posts covered various parts of the setup, but primarily around standard vSwitches, which, at least in larger environments, are generally not the norm; Distributed vSwitches are. I have seen a few posts on doing this with the old C# client, but not the vSphere Web Client. Reference those posts here:

http://everything-virtual.com/installing-the-home-lab/installing-the-home-lab-creating-and-configuring-an-iscsi-distributed-switch-for-vmware-multipathing/

https://www.yelof.com/2011/07/13/dr-iscsi-or-how-i-learned-to-stop-worrying-and-love-virtual-distributed-switches-on-vsphere-v5/

So given the number of questions I have received on it, it is probably worth putting pen to paper. Nothing profound here; it's basically a walkthrough.

This of course assumes you are doing port binding. If you are not, then only the standard software iSCSI setup (as described in the first post above) is needed.
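
As a preview, the binding step at the end of the walkthrough boils down to something like this in PowerCLI (host and vmkernel port names are placeholders, and this assumes the software iSCSI adapter is already enabled):

```powershell
# Bind the iSCSI vmkernel ports to the software iSCSI adapter
$esx = Get-VMHost "esx01.example.com"
$hba = ($esx | Get-VMHostHba -Type iScsi |
        Where-Object { $_.Model -eq "iSCSI Software Adapter" }).Device

$esxcli = Get-EsxCli -VMHost $esx -V2
foreach ($vmk in "vmk1", "vmk2") {
    $esxcli.iscsi.networkportal.add.Invoke(@{adapter = $hba; nic = $vmk})
}
```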

Continue reading Setting up Software iSCSI Multipathing with Distributed vSwitches with the vSphere Web Client

ESXi and the Missing LUNs: 256 or Higher

A customer pinged me the other day and said they could not see a volume on their ESXi host, which was running ESXi 6.5. All of the normal stuff checked out, but the volume was nowhere to be seen. What gives? Well, it turned out the LUN ID was over 255, so ESXi couldn't see it. Let me explain.

The TL;DR is that ESXi does not support LUN IDs above 255 for your average device.
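
If you want to see which LUN IDs your host does see, the runtime name of each device ends in the LUN ID; a quick check from PowerCLI (host name is a placeholder):

```powershell
# List each disk device with its runtime name; the trailing "L<number>"
# is the LUN ID (e.g. vmhba1:C0:T2:L255). A device presented at ID 256
# or higher simply will not appear here on 6.5.
Get-VMHost "esx01.example.com" | Get-ScsiLun -LunType disk |
    Select-Object CanonicalName, RuntimeName
```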

*It’s not actually aliens, it is perfectly normal SCSI you silly man.

Continue reading ESXi and the Missing LUNs: 256 or Higher

Documentation Update, Best Practices and vRealize

So a few updates. I just updated my vSphere Best Practices guide and it can be found here:

Download Best Practices Guide PDF

I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. Furthermore, there are a few other documents I have written and published that I want to mention.

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

Continue reading Documentation Update, Best Practices and vRealize

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

This is the second part of a two-part post. In the first part, I explained the fix and how it affects Windows. In this post, we will look at how the change affects Linux-based virtual machines. See the original post here:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

I posted about In-Guest UNMAP with Linux VMs in this post:

What’s new in ESXi 6.5 Storage Part I: UNMAP

One thing you may note there is that automatic UNMAP worked quite well, but manual UNMAP, like fstrim, did not. So let's revisit fstrim now that this patch is out.
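
If you want to repeat the test yourself, fstrim can even be kicked off from PowerCLI via VMware Tools (VM name, mount point, and credentials are placeholders):

```powershell
# Run a verbose fstrim against a mount point inside a Linux guest;
# requires VMware Tools running in the VM
Invoke-VMScript -VM (Get-VM "linux-vm-01") `
                -ScriptText "fstrim -v /mnt/data" `
                -GuestUser root -GuestPassword "********"
```

Continue reading In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux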