Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)

This is going to be broken up into two parts: first, a live migration where no VMs are powered off during the migration; second, a migration where you temporarily power off the VMs attached to the SCSI datastore.

Why would you want to do it one way or the other?

Pros of live migration:

  • No VM downtime
  • Simpler configuration changes and overlap, so there is less to go wrong or mess up

Pros of powering off VMs:

  • The total migration time will be significantly shorter because no data has to be moved. Currently VMware doesn’t support XCOPY for NVMe-oF (even on the same array), so a live Storage vMotion has to copy all of the data through the host instead

Great, you’ve decided on a live migration for your VMs because you don’t care how long it takes; you just want to minimize VM downtime as much as possible. If you haven’t already, you’ll need to follow the guides Pure Storage has for setting up NVMe-oF in your environment.

Once you’ve configured NVMe-oF in your environment, you’ll need to create the namespace (volume), connect it to the appropriate host group, create the NVMe-oF datastore in vCenter and finally storage vMotion your VMs from the SCSI datastore to the NVMe datastore.

Create the Volume

From a FlashArray perspective, this is identical to SCSI except for slightly different terms and labels. Cody wrote a nice article explaining the differences. Log into your FlashArray, select (1) Storage, then (2) Volumes, then click the (3) + on the right-hand side of the GUI.

In the window that pops up, populate a (1) Name for the namespace (volume), give it a (2) Provisioned Size then click (3) Create.

Note the volume serial number by going to (1) Storage, then (2) Volumes, finding the name of your (3) Volume, then (4) clicking on its hyperlinked name.

On the next window, note the Serial of the volume. We will use this later in vCenter to validate that we are connecting the right namespace.
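
If you prefer to script these FlashArray steps, here is a rough sketch using the Pure Storage PowerShell SDK (1.x); the array address, credentials, and the volume name and size below are placeholders for your environment:

  # Connect to the FlashArray (assumes the PureStoragePowerShellSDK module is installed)
  Import-Module PureStoragePowerShellSDK
  $FlashArray = New-PfaArray -EndPoint "flasharray.example.com" -Credentials (Get-Credential) -IgnoreCertificateError

  # Create the namespace (volume) - 1 TB in this example
  $Volume = New-PfaVolume -Array $FlashArray -VolumeName "nvme-ds-01" -Unit T -Size 1

  # The returned object includes the serial we will validate later in vCenter
  $Volume.serial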

Connect The Volume To the Appropriate Host Group

Still in the FlashArray GUI, go back to (1) Storage, select (2) Hosts, then select the (3) Host Group you have created for your NVMe-oF hosts. In this case I am setting this up for NVMe-FC, but the steps will be the same for NVMe-RoCE after you have followed the previously linked KB articles.

Next, click the three vertical dots (technically a kebab menu, not a hamburger) and select Connect.

For the last step in the FlashArray GUI, select the (1) Namespace (volume) you created before then click (2) Connect.
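
The same connection step as a PowerShell sketch, continuing from the $FlashArray session above (the host group name is a placeholder):

  # Connect the namespace (volume) to the NVMe-oF host group
  New-PfaHostGroupVolumeConnection -Array $FlashArray -VolumeName "nvme-ds-01" -HostGroupName "nvme-fc-cluster"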

Create The NVMe-oF Datastore

Switching over to vCenter, we’ll first want to create a datastore from the namespace that we’ve just presented to our host group. This process is easier than with SCSI datastores because you do not have to rescan the storage adapters; all you need to do is create a datastore on top of the NVMe namespace that is already present.

(1) Right click on the vSphere cluster you’ve presented the namespace to, hover over (2) Storage, then click (3) New Datastore.

Select (1) VMFS (currently vVols is unsupported by VMware with NVMe-oF) and click (2) Next.

Specify a (1) Name for your datastore, (2) select a host that the namespace was presented to, select the (3) namespace from the list, and click (4) Next. Validate that the serial number of the namespace (volume) you noted earlier in the FlashArray GUI appears in the Name column.

Select (1) VMFS 6 (who uses 5 anymore anyways?!) and click (2) Next.

Click (1) Next.

Review the details and click (1) Finish.
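
The whole wizard can also be done with PowerCLI. This is just a minimal sketch: it assumes you are already connected to vCenter with Connect-VIServer, the cluster, datastore, and serial values are placeholders, and the NVMe namespace shows up among the host’s storage devices as an eui.* device whose name contains the volume serial (which is what the validation step above relies on):

  # Pick one host from the cluster the namespace was presented to
  $esx = Get-Cluster "NVMe-Cluster" | Get-VMHost | Select-Object -First 1

  # Find the namespace by matching the serial noted in the FlashArray GUI
  $serial = "REPLACE-WITH-VOLUME-SERIAL"
  $device = Get-ScsiLun -VmHost $esx | Where-Object { $_.CanonicalName -match $serial } | Select-Object -First 1

  # Create a VMFS 6 datastore on top of the namespace
  New-Datastore -VMHost $esx -Name "NVMe-DS-01" -Path $device.CanonicalName -Vmfs -FileSystemVersion 6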

Validate that the hosts are connected to your newly created NVMe-oF datastore by going to the (1) Storage tab, selecting the (2) Datastore Name and clicking on the (3) Hosts tab. If anything looks incorrect here (for example, not all hosts from the cluster are connected), please review your NVMe-oF configuration for issues.
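
The same check from PowerCLI, assuming the datastore name from the previous step:

  # List every host that can see the new NVMe-oF datastore
  Get-VMHost -Datastore (Get-Datastore "NVMe-DS-01") | Select-Object Name, ConnectionState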

Storage vMotion the VMs from SCSI-backed Datastore(s) to NVMe-backed Datastore(s)

Staying in the vCenter GUI, select the (1) Hosts and Clusters tab, right click on the (2) VM you want to migrate from SCSI to NVMe then select (3) Migrate… from the list that pops up.

Select (1) Change storage only from the window that pops up and click (2) Next.

Select the (1) NVMe datastore you created before then click (2) Next. Optionally you can modify the storage policies for the VM and the virtual disk format.

Finally, verify the details of the migration and click (1) Finish.
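
If you have more than a handful of VMs to move, the same Storage vMotion is a one-liner in PowerCLI. A rough sketch, with placeholder datastore names:

  # Storage vMotion every VM currently on the SCSI datastore over to the NVMe datastore
  Get-VM -Datastore (Get-Datastore "SCSI-DS-01") |
    Move-VM -Datastore (Get-Datastore "NVMe-DS-01")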

And now wait until the VM has migrated to the NVMe-oF datastore. Migrations in general can be very daunting, but luckily with NVMe-oF the process can be extremely simple. Hopefully you found this helpful.

What’s New in Purity 5.1: WRITE SAME Handling Improvement

Purity 5.1 introduced a variety of new features on the FlashArray, like CloudSnap to NFS and volume throughput limits, but it also included a number of internal enhancements. I’d like to start this series with one of them.

VAAI (vStorage APIs for Array Integration) includes a variety of offloads that allow the underlying array to do certain storage-related tasks better (faster, more efficiently, etc.) than ESXi can do them itself. One of these offloads is called Block Zero, which leverages the SCSI command WRITE SAME. WRITE SAME is basically a SCSI operation that tells the storage to write a certain pattern, in this case zeros. So instead of ESXi issuing possibly terabytes of zeros, ESXi just issues a few hundred or a few thousand small WRITE SAME I/Os and the array takes care of the zeroing. This greatly speeds up the process and also significantly reduces the impact on the SAN.

WRITE SAME is used in quite a few places, but the most commonly encountered scenarios are creating eagerzeroedthick virtual disks, where the entire disk is zeroed up front, and zeroing blocks on first write to thin or zeroedthick virtual disks.
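
As a quick aside, you can confirm that the Block Zero offload is enabled on a host with PowerCLI; it is controlled by the DataMover.HardwareAcceleratedInit advanced setting (1 = enabled, which is the default). The host name below is a placeholder:

  # Check whether WRITE SAME (Block Zero) offload is enabled on a host
  Get-AdvancedSetting -Entity (Get-VMHost "esxi-01.example.com") -Name "DataMover.HardwareAcceleratedInit" |
    Select-Object Name, Value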

What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP

This post is part of my vSphere 6.7 core storage “what’s new” series.

VMware has continued to improve and refine automatic UNMAP in vSphere 6.7. In vSphere 6.5, VMFS-6 introduced automatic space reclamation, so that you no longer had to run UNMAP manually to reclaim space after virtual disks or VMs had been deleted.
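
For reference, the per-datastore reclamation settings that 6.7 adds rate control to can be inspected through esxcli. Here is a hedged sketch using Get-EsxCli from PowerCLI; the host and datastore names are placeholders, and CreateArgs() is used because the exact argument names can vary by release:

  # Inspect the automatic UNMAP (space reclamation) settings for a VMFS-6 datastore
  $esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi-01.example.com") -V2
  $reclaimArgs = $esxcli.storage.vmfs.reclaim.config.get.CreateArgs()
  $reclaimArgs.volumelabel = "NVMe-DS-01"   # assumption: the argument is named volumelabel in this release
  $esxcli.storage.vmfs.reclaim.config.get.Invoke($reclaimArgs)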

Continue reading “What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP”

VMware Capacity Reporting Part V: VVols and UNMAP

Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce the concept of multiple levels of thin provisioning AND data reduction into it, all usage is not equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.

NOTE: Examples in this series are given from a FlashArray perspective, so mileage may vary depending on the type of array you have. The VMFS layer and above, though, are the same for all arrays. This is the benefit of VMFS: it abstracts the physical layer. This is also the downside, as I will describe in these posts.

Continue reading “VMware Capacity Reporting Part V: VVols and UNMAP”

ActiveCluster and VAAI

Pure Storage recently offered up support for active/active replication on the FlashArray in a feature called ActiveCluster. And a common question that comes up for active/active solutions alongside VMware is: how is VAAI supported?

The reason it is asked is that VAAI is often tricky, if not impossible, to support in an active/active scenario, because the storage platform has to perform the offloaded operation not just on one array but on both. So XCOPY, which offloads VM copying, is often not supported. Let’s take a look at VAAI with ActiveCluster, specifically these four features (a quick way to check their per-device status follows the list):

  1. Hardware Assisted Locking (ATOMIC TEST & SET)
  2. Zero Offload (WRITE SAME)
  3. Space Reclamation (UNMAP)
  4. Copy Offload (XCOPY)
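
Here is that quick check, using esxcli through PowerCLI (the host name is a placeholder); the output shows the ATS, Clone (XCOPY), Zero (WRITE SAME), and Delete (UNMAP) status for each device:

  # Show per-device VAAI support status
  $esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi-01.example.com") -V2
  $esxcli.storage.core.device.vaai.status.get.Invoke() | Format-Table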

Continue reading “ActiveCluster and VAAI”

Monitoring Automatic VMFS-6 UNMAP in ESXi

With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don’t have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous), you lose the instant gratification of reclamation.

So you do find yourself wondering, did it actually reclaim anything?

Besides looking at the array and seeing space reclaimed, how can I see from ESXi if my space was reclaimed?

Continue reading “Monitoring Automatic VMFS-6 UNMAP in ESXi”

In-Guest UNMAP, EnableBlockDelete and VMFS-6

EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3 I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature introduced in 5.0 and then eventually canned in 5.0 U1.

The description of the setting back in 5.0 was “Enable VMFS block delete”. The setting was then hidden and made defunct (it did nothing when you turned it off or on) until ESXi 6.0. The description then changed to “Enable VMFS block delete when UNMAP is issued from guest OS”. Continue reading “In-Guest UNMAP, EnableBlockDelete and VMFS-6”
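
If you want to check (or flip) EnableBlockDelete without clicking through the UI, here is a short PowerCLI sketch; the host name is a placeholder, and 1 enables the setting while 0 disables it:

  # Check the current value of VMFS3.EnableBlockDelete on a host
  Get-AdvancedSetting -Entity (Get-VMHost "esxi-01.example.com") -Name "VMFS3.EnableBlockDelete"

  # Enable it (set -Value 0 to disable)
  Get-AdvancedSetting -Entity (Get-VMHost "esxi-01.example.com") -Name "VMFS3.EnableBlockDelete" |
    Set-AdvancedSetting -Value 1 -Confirm:$false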

Unattended VMFS UNMAP Script

I updated my UNMAP PowerCLI script a month or so ago and improved quite a few things, but I did remove the hard-coded variables and replaced them with interactive input. That is fine for some, but for many it was not.

Note: Move to VMFS-6 in vSphere 6.5 and you don’t have to worry about this UNMAP business anymore 🙂

Essentially, quite a few people want to run it as a scheduled task in Windows, and if it requires input, that just isn’t going to work out of the box. So I have created an unattended version of the script. For details, read on.
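
The usual trick for making a PowerCLI script run from Task Scheduler without prompts is to stash the vCenter credentials in the PowerCLI credential store ahead of time, so the script can call Connect-VIServer with no input. A sketch with placeholder names (run the one-time setup as the same Windows account the scheduled task will use):

  # One-time setup: store the vCenter credentials for the scheduled task's account
  New-VICredentialStoreItem -Host "vcenter.example.com" -User "domain\svc-unmap" -Password "REPLACE-ME"

  # In the scheduled script itself, this now connects without prompting
  Connect-VIServer -Server "vcenter.example.com"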

Note: I will continue to update the script (bug fixes, features, etc.), but I will note those changes on my other blog post about the script here:

Pure Storage FlashArray UNMAP PowerCLI Script for VMware ESXi

I will only update this post if the unattended version changes in a way that makes these instructions wrong. Continue reading “Unattended VMFS UNMAP Script”

In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux

This is the second part of this post. In the first post, I explained the fix and how it affected Windows. In this post, we will overview how the change affects Linux-based virtual machines. See the original post here:

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

I posted about In-Guest UNMAP with Linux VMs in this post:

What’s new in ESXi 6.5 Storage Part I: UNMAP

One thing you can note is that automatic UNMAP works quite well, but manual UNMAP, like fstrim, did not. So let’s revisit fstrim now that this patch is out. Continue reading “In-Guest UNMAP Fix in ESXi 6.5 Part II: Linux”
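
If you want to kick off fstrim across Linux VMs from the vSphere side rather than logging into each guest, Invoke-VMScript can run it through VMware Tools. This is only a sketch with placeholder VM and guest credentials, and it assumes the in-guest UNMAP prerequisites covered in the posts above are already met:

  # Run fstrim inside a Linux guest via VMware Tools (no SSH needed)
  Invoke-VMScript -VM (Get-VM "linux-vm-01") -ScriptType Bash `
    -ScriptText "fstrim -v /" `
    -GuestUser "root" -GuestPassword "REPLACE-ME"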

In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows

As you might’ve seen, Cormac Hogan just posted about an UNMAP fix that was recently released. This is a fix I have been eagerly awaiting for some time, so I am very happy to see it. And thankfully it does not disappoint.

First off, some official information:

Release notes:

https://kb.vmware.com/kb/2148989

Manual patch download:

https://my.vmware.com/group/vmware/patch#search

Or you can run esxcli, if your ESXi host has internet access, to download and install it automatically:

esxcli software profile update -p ESXi-6.5.0-20170304001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

Continue reading “In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows”