What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support

This is the fourth in my series of what’s new in ESXi 6.5 storage. Here are the previous posts:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Here is another post about vSphere 6.5 UNMAP! There are so many improvements, and this is a big one for many users. It certainly makes me happy. Previously, in vSphere 6.0.x, when in-guest space reclamation was introduced, enabling Changed Block Tracking (CBT) on a virtual disk blocked the guest OS from issuing UNMAP to that disk, and therefore prevented it from leveraging the goodness UNMAP provides. Rumor has it that this undesirable behavior continued in vSphere 6.5…
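If you want to check ahead of time whether CBT is in play for a given VM, here is a minimal PowerCLI sketch (the vCenter and VM names are placeholders I made up):

    # Requires VMware PowerCLI; server and VM names below are hypothetical
    Connect-VIServer -Server "vcenter.example.com" | Out-Null

    $vm = Get-VM -Name "SQL-VM-01"

    # VM-level CBT flag; per-disk CBT is controlled by ctkEnabled entries in the VMX
    if ($vm.ExtensionData.Config.ChangeTrackingEnabled) {
        Write-Host "$($vm.Name): CBT is enabled, which blocks in-guest UNMAP on vSphere 6.0.x."
    }
    else {
        Write-Host "$($vm.Name): CBT is disabled."
    }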


Continue reading “What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support”

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Let me start this post off by saying that the “What’s new in vSphere 6.5 Storage” white paper has been officially published and can be read here:

https://storagehub.vmware.com/#!/vsphere-core-storage/

I had the distinct pleasure of helping Cormac and Paudie with this paper. Thanks to both of them for including me and providing me with access to the engineers who wrote these features/enhancements!

So anyway, read that document for a high-level overview of all of the new features and enhancements. Previously, I have written two posts in this series:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

This is a short post; I mainly wanted to share the white paper. But it is important to note that VMware is still marching forward with improving VMFS and virtual disk flexibility, so I want to highlight a new enhancement: thin virtual disk hot extension.

Prior to vSphere 6.5, thin virtual disks could be hot extended, but there were limits. The main one was that if the extend operation would bring the VMDK to larger than 2 TB (or the VMDK was already 2 TB), the operation was not permitted.

If the VM is powered on and I try to apply such a configuration change, I get an error.

So this is fixed in vSphere 6.5! And the nice thing is that it does not require either VMFS 6 or the latest virtual machine hardware version. Just hosting the VM on an ESXi 6.5 host provides this functionality.

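If you prefer PowerCLI to the Web Client for this, a quick sketch of the same operation (the VM name, disk name, and target size are placeholders):

    # Hypothetical VM and disk names; requires VMware PowerCLI and a connected vCenter session
    $vm   = Get-VM -Name "FileServer-01"
    $disk = Get-HardDisk -VM $vm -Name "Hard disk 2"

    # With the VM powered on and running on an ESXi 6.5 host, grow the thin disk past 2 TB
    $disk | Set-HardDisk -CapacityGB 3072 -Confirm:$false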

Sweet! But this really just reinforces my thought that there are few remaining reasons not to use thin virtual disks with the latest releases of vSphere. They are so much more flexible, and a lot of engineering is going into making them better, while not much work is being done on thick-type virtual disks. Look for an upcoming blog post on some performance enhancements as well.

Automatic VMFS expansion with vCenter SNMP and vRealize Orchestrator

Virtual disk oversubscription is becoming increasingly common, and so is letting people provision their own VMs, which means increasing datastore capacity is also an increasingly common operation. Thanks to the performance of flash, combined with ESXi features like VAAI ATS, expanding a VMFS is easy, and expanding a storage volume these days is easy too. But you still have to actually do it. What if I want to automate the process in response to datastore capacity threshold alerts? There are a variety of ways to achieve this. Let’s look at doing it with vCenter SNMP alerts and vRealize Orchestrator workflows.
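The full post walks through the SNMP alarm and the vRealize Orchestrator workflow; purely as an illustration of the expansion step itself, here is a rough PowerCLI sketch (host and datastore names are placeholders) that rescans a host and grows a VMFS into free space that has already been added to the backing volume:

    # Hypothetical host and datastore names; assumes an existing vCenter connection
    # and that the backing volume was already grown on the array
    $esx = Get-VMHost -Name "esx01.example.com"
    $ds  = Get-Datastore -Name "Datastore1"

    # Rescan so ESXi sees the larger device
    Get-VMHostStorage -VMHost $esx -RescanAllHba -RescanVmfs | Out-Null

    # Ask the host for valid expansion specs and apply the first suggested one
    $dsSystem = Get-View -Id $esx.ExtensionData.ConfigManager.DatastoreSystem
    $options  = $dsSystem.QueryVmfsDatastoreExpandOptions($ds.ExtensionData.MoRef)
    if ($options) {
        $dsSystem.ExpandVmfsDatastore($ds.ExtensionData.MoRef, $options[0].Spec) | Out-Null
    }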
Continue reading “Automatic VMFS expansion with vCenter SNMP and vRealize Orchestrator”

FlashArray VMware Best Practices PowerCLI Scripts

I recently wrote a post on the updates made to the esxcli implementation in PowerCLI 6.3 R1, so the logical next step was to incorporate the new behavior into my PowerCLI scripts that use esxcli. I still have a few scripts to update, but my two best practice-related scripts are ready to go. The two scripts are:

  1. A script that checks and sets best practices. Download here.
  2. A script that just checks best practices and lists any issues in a report. Download here.

While I was updating them for the esxcli changes, I figured I might as well improve them too, so there are quite a few changes in both. Let’s take a look.
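For context, the V2 esxcli interface in PowerCLI 6.3 R1 replaces long positional parameter lists with named argument sets. A minimal sketch of the pattern, using a FlashArray SATP rule as the example (the host name is a placeholder, and you should confirm the exact rule values against Pure’s current best practices):

    # Hypothetical host name; assumes an existing vCenter connection
    $esx    = Get-VMHost -Name "esx01.example.com"
    $esxcli = Get-EsxCli -VMHost $esx -V2   # -V2 returns the new argument-set based interface

    # Build an argument set instead of passing a long list of positional parameters
    $ruleArgs             = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
    $ruleArgs.satp        = "VMW_SATP_ALUA"
    $ruleArgs.psp         = "VMW_PSP_RR"
    $ruleArgs.pspoption   = "iops=1"
    $ruleArgs.vendor      = "PURE"
    $ruleArgs.model       = "FlashArray"
    $ruleArgs.description = "FlashArray SATP Rule"

    $esxcli.storage.nmp.satp.rule.add.Invoke($ruleArgs)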

Continue reading “FlashArray VMware Best Practices PowerCLI Scripts”

Recent ESXi 6 Storage Bugs and the FlashArray

As you might be aware, there have been a few storage-related issues with ESXi 6.0 as of late:

Accidental PDL during dropped paths:

Storage PDL responses may not trigger path failover in vSphere 6.0 (2144657)

Host issues during smartd inquiries:

Issuing a 0x85 SCSI Command from a VMware ESXi 6.0 host results in a PDL error (2133286)

The question that comes up for the Pure Storage FlashArray is: are we susceptible? The short answer is no. Let’s explain why.

Continue reading “Recent ESXi 6 Storage Bugs and the FlashArray”

Semi-transparent failover with VMFS and Active/Passive Replication

So, in a blog series that I started a few weeks back (I am still working on finishing it), I wrote about managing snapshots and resignaturing of VMFS volumes. One of the posts was dedicated to why I would choose resignaturing over force mounting almost all of the time.

An obvious question after that post is: when would I want to force mount? There is a situation where I think it is a decent option: a failover scenario in which the recovery site is the same as the production site in terms of compute/vCenter, and only the storage fails over to another array. This is a situation I see becoming increasingly common as network pipes get bigger.

Continue reading “Semi-transparent failover with VMFS and Active/Passive Replication”

VMFS Snapshots and the FlashArray Part I: Mounting an unresolved VMFS

This is part 1 of a 7-part series. Questions around managing VMFS snapshots have been cropping up a lot lately, and I realized I didn’t have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing it. So I have put together a long series on how to do all of it. Let’s start with what an unresolved VMFS is and how to mount it (see the quick sketch after the list below).

The series being:

  1. Mounting an unresolved VMFS
  2. Why not force mount?
  3. Why might a VMFS resignature operation fail?
  4. How to correlate a VMFS and a FlashArray volume
  5. How to snapshot a VMFS on the FlashArray
  6. How to mount a VMFS FlashArray snapshot
  7. Restoring a single VM from a FlashArray snapshot
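As a quick preview of where the series goes, here is a minimal PowerCLI/esxcli sketch (the host name and volume label are placeholders) that lists unresolved VMFS copies seen by a host and resignatures one of them:

    # Hypothetical host name and volume label; assumes an existing vCenter connection
    $esx    = Get-VMHost -Name "esx01.example.com"
    $esxcli = Get-EsxCli -VMHost $esx -V2

    # List unresolved (snapshot/replica) VMFS copies detected by this host
    $esxcli.storage.vmfs.snapshot.list.Invoke()

    # Resignature a copy by its original volume label so it can be mounted with a new signature
    $resigArgs = $esxcli.storage.vmfs.snapshot.resignature.CreateArgs()
    $resigArgs.volumelabel = "ProdDatastore01"
    $esxcli.storage.vmfs.snapshot.resignature.Invoke($resigArgs)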

Continue reading “VMFS Snapshots and the FlashArray Part I: Mounting an unresolved VMFS”

UNMAP Block Count Behavior Change in ESXi 5.5 P3+

I was recently doing some troubleshooting for a customer who was using my UNMAP PowerCLI script and discovered a change in UNMAP behavior in ESXi 5.5+. The issue was that the script was taking quite a while to complete. After some logic optimizations and increased timeouts, the script sped up a bit and fewer timeout errors occurred, but a number of the UNMAP operations were still taking a lot longer than expected. Eventually we threw our hands up and said it was good enough. More recently, I was testing a third-party UNMAP tool and ran into similar behavior, so I dug into it a bit more and found some semi-unexpected changes in how UNMAP works, specifically the behavior when leveraging non-default block iteration counts.
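For reference, the block count in question is the reclaim unit handed to the esxcli UNMAP command on each iteration; a minimal PowerCLI sketch of issuing UNMAP with a non-default count (host, datastore, and count are placeholders):

    # Hypothetical host and datastore names; assumes an existing vCenter connection
    $esx    = Get-VMHost -Name "esx01.example.com"
    $esxcli = Get-EsxCli -VMHost $esx -V2

    # Reclaim dead space on a VMFS, iterating in chunks of the given block count
    $unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
    $unmapArgs.volumelabel = "Datastore1"
    $unmapArgs.reclaimunit = "10000"   # non-default block count; the default is 200 blocks per iteration
    $esxcli.storage.vmfs.unmap.Invoke($unmapArgs)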
Continue reading “UNMAP Block Count Behavior Change in ESXi 5.5 P3+”

Add Storage Wizard Slowness and Unresolved VMFS Volumes

This week I received a question from a customer about some slowness they were seeing in the vSphere “Add Storage” wizard. This is a problem that has occurred quite a few times over the years for a variety of different reasons. VMware has fixed most of them, and this latest one, luckily, is known and has a relatively simple solution: an advanced option called VMFS.UnresolvedVolumeLiveCheck.

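For reference, the option can be viewed and changed with PowerCLI as well; a quick sketch with a placeholder host name (the post covers what the option does and which value makes sense for you):

    # Hypothetical host name; assumes an existing vCenter connection
    $esx = Get-VMHost -Name "esx01.example.com"

    # Look at the current value of the advanced option
    Get-AdvancedSetting -Entity $esx -Name "VMFS.UnresolvedVolumeLiveCheck"

    # Change it (true/false) - see the post for which value makes sense in your environment
    Get-AdvancedSetting -Entity $esx -Name "VMFS.UnresolvedVolumeLiveCheck" |
        Set-AdvancedSetting -Value $false -Confirm:$false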

Continue reading “Add Storage Wizard Slowness and Unresolved VMFS Volumes”

XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks

Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into the exact design changes, so this is what I observed plus my educated guess on how it was done. Look for more details as time goes on.

***This blog post turned out much longer than I expected (it probably should have been a two-parter), so I apologize for the length.***

As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handle XCOPY. XCOPY, as you may be aware, is the SCSI command ESXi uses to clone, Storage vMotion, and deploy-from-template VMs on a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered around three things:

  1. Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e., how much data a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different).
  2. Allow ESXi to query the Maximum Segment Length via the Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells ESXi what to use as a maximum transfer size, which would save the end user the hassle of manual transfer size changes.
  3. Allow thin virtual disks to leverage a transfer size larger than 1 MB.

The first two are currently supported by VMware only in a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
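For context, the transfer size ESXi uses for XCOPY is governed by the DataMover.MaxHWTransferSize advanced host setting, expressed in KB (4 MB by default, 16 MB maximum). A quick PowerCLI sketch of checking and raising it, with a placeholder host name (check your array vendor’s guidance before changing it):

    # Hypothetical host name; assumes an existing vCenter connection
    $esx = Get-VMHost -Name "esx01.example.com"

    # Current XCOPY transfer size in KB (default is 4096, i.e. 4 MB)
    Get-AdvancedSetting -Entity $esx -Name "DataMover.MaxHWTransferSize"

    # Raise it to the 16 MB maximum
    Get-AdvancedSetting -Entity $esx -Name "DataMover.MaxHWTransferSize" |
        Set-AdvancedSetting -Value 16384 -Confirm:$false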

Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”