Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)

This series is broken up into two parts: first, a live migration, where no VMs get powered off during the migration; second, a migration where you temporarily power off the VMs attached to the SCSI datastore.

Why would you want to do it one way or another?

Pros of live migration:

  • No VM downtime
  • Simpler configuration changes and overlap, so there is less to go wrong or mess up

Pros of powering off VMs:

  • The total migration time will be significantly shorter because no data has to be moved. VMware currently doesn’t support XCOPY for NVMe-oF (even on the same array), so a live Storage vMotion has to copy every block

Great, you’ve decided on a live migration because you don’t care how long it takes; you just want to minimize VM downtime as much as possible. If you haven’t already, you’ll need to follow the guides Pure Storage has for setting up NVMe-oF in your environment.

Once you’ve configured NVMe-oF in your environment, you’ll need to create the namespace (volume), connect it to the appropriate host group, create the NVMe-oF datastore in vCenter, and finally Storage vMotion your VMs from the SCSI datastore to the NVMe datastore.
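Every step below can also be driven from PowerShell if you prefer scripting to the GUI. The sketches in this post assume the Pure Storage PowerShell SDK and VMware PowerCLI are installed; here is the minimal setup they build on, with the array endpoint, vCenter name, and credentials as placeholders:

    # Minimal setup sketch (assumptions: Pure Storage PowerShell SDK v1 and
    # VMware PowerCLI are installed; endpoints and credentials are placeholders).
    Import-Module PureStoragePowerShellSDK
    Import-Module VMware.PowerCLI

    $array = New-PfaArray -EndPoint 'flasharray.example.com' `
                          -Credentials (Get-Credential) -IgnoreCertificateError
    Connect-VIServer -Server 'vcenter.example.com' -Credential (Get-Credential)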

Create the Volume

From a FlashArray perspective, this is identical to SCSI except for slightly different terms and labels. Cody wrote a nice article explaining the differences. Log into your FlashArray, select (1) Storage, then (2) Volumes, then click the (3) + on the right-hand side of the GUI.

In the window that pops up, populate a (1) Name for the namespace (volume), give it a (2) Provisioned Size then click (3) Create.

Note the volume serial number by going to (1) Storage then (2) Volumes, finding the name of your (3) Volume, then (4) clicking its hyperlinked name.

In the next window, note the Serial of the volume. We will use this later in vCenter to validate that we are connecting the right namespace.
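If you’d rather script this step, here is a minimal sketch with the Pure Storage PowerShell SDK ($array comes from the setup sketch above; the volume name and size are placeholders):

    # Create the namespace (volume) and note its serial number; we will match
    # this serial against the namespace identifier in vCenter later.
    $vol = New-PfaVolume -Array $array -VolumeName 'nvme-datastore-01' -Unit G -Size 500
    $vol.serial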

Connect The Volume To the Appropriate Host Group

Still in the FlashArray GUI, go back to (1) Storage, select (2) Hosts, then select the (3) Host Group you have created for your NVMe-oF hosts. In this case, I am setting this up for NVMe-FC but the steps will be the same for NVMe-RoCE after you have followed the previously linked KB articles.

Next, click the three vertical dots (technically a kebab menu, not a hamburger) and select Connect.

For the last step in the FlashArray GUI, select the (1) Namespace (volume) you created before then click (2) Connect.
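Scripted, the connection is a one-liner; a sketch, where 'nvme-hostgroup' is a placeholder for the host group you created for your NVMe-oF hosts:

    # Connect the namespace (volume) to the NVMe-oF host group so that every
    # host in the group sees it.
    New-PfaHostGroupVolumeConnection -Array $array -VolumeName 'nvme-datastore-01' `
                                     -HostGroupName 'nvme-hostgroup'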

Create The NVMe-oF Datastore

Switching over to vCenter, we’ll first want to create a datastore from the namespace that we’ve just presented to our host group. This process is easier than with SCSI datastores because you do not have to rescan the storage adapters; all you need to do is create a datastore on top of the NVMe namespace that is already present.
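If you want to confirm that the namespace is already visible before launching the wizard, esxcli can list it. A sketch via PowerCLI; the cluster name is a placeholder, and output field names can vary a little between ESXi releases:

    # List the NVMe namespaces seen by one host in the cluster; the new
    # namespace should already appear without any adapter rescan.
    $vmhost = Get-Cluster 'Cluster-A' | Get-VMHost | Select-Object -First 1
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.nvme.namespace.list.Invoke()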

(1) Right click on the vSphere cluster you’ve presented the namespace to, hover over (2) Storage, then click (3) New Datastore.

Select (1) VMFS (currently vVols are not supported by VMware with NVMe-oF) and click (2) Next.

Specify a (1) Name for your datastore, (2) select a host that the namespace was presented to, select the (3) namespace from the list and click (4) Next. Validate the serial number shown in the Name column against the serial you noted earlier in the FlashArray GUI.

Select (1) VMFS 6 (who uses 5 anymore anyway?!) and click (2) Next.

Click (1) Next.

Review the details and click (1) Finish.

Validate that the hosts are connected to your newly created NVMe-oF datastore by going to the (1) Storage tab, selecting the (2) Datastore Name and clicking on the (3) Hosts tab. If anything looks incorrect here (for example, not all hosts from the cluster are connected), review your NVMe-oF configuration for issues.
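The same create-and-validate flow can be scripted with PowerCLI. This is a sketch, not a definitive implementation: the datastore and cluster names are placeholders, and it assumes the namespace surfaces with an eui.* canonical name containing the volume serial you noted earlier (check that before creating the datastore if more than one device matches):

    # Create a VMFS 6 datastore on the NVMe namespace, then list the hosts
    # that see it; every host in the cluster should be in the output.
    $vmhost = Get-Cluster 'Cluster-A' | Get-VMHost | Select-Object -First 1
    $device = $vmhost | Get-ScsiLun |
              Where-Object { $_.CanonicalName -like 'eui.*' } |
              Select-Object -First 1
    New-Datastore -VMHost $vmhost -Name 'NVMe-DS-01' -Path $device.CanonicalName `
                  -Vmfs -FileSystemVersion 6
    Get-VMHost -Datastore (Get-Datastore 'NVMe-DS-01')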

Storage vMotion the VMs from SCSI-backed Datastore(s) to NVMe-backed Datastore(s)

Staying in the vCenter GUI, select the (1) Hosts and Clusters tab, right click on the (2) VM you want to migrate from SCSI to NVMe then select (3) Migrate… from the list that pops up.

Select (1) Change storage only from the window that pops up and click (2) Next.

Select the (1) NVMe datastore you created before then click (2) Next. Optionally, you can modify the storage policies for the VM and the virtual disk format.

Finally, verify the details of the migration and click (1) Finish.
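To script the migration instead, Move-VM performs the same storage-only vMotion. A sketch with placeholder VM and datastore names; the second command sweeps every VM still on the old SCSI datastore:

    # Storage vMotion a single running VM to the NVMe-backed datastore.
    Move-VM -VM (Get-VM 'app-vm-01') -Datastore (Get-Datastore 'NVMe-DS-01')

    # Or migrate everything still living on the old SCSI-backed datastore.
    Get-Datastore 'SCSI-DS-01' | Get-VM |
        Move-VM -Datastore (Get-Datastore 'NVMe-DS-01')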

Now wait until the VM has migrated to the NVMe-oF datastore. Migrations in general can be very daunting, but luckily with NVMe-oF they can be extremely simple. Hopefully you found this helpful.

Pure Storage vSphere Remote Plugin™ 5.1.0 launch: vVol Point-in-Time Recovery

We are excited to announce the launch of the latest version of Pure Storage’s remote vSphere plugin, 5.1.0. It includes a number of bug fixes PLUS a highly sought-after feature: vVols VM point-in-time (PiT) recovery!

Why am I excited about this feature?

With vVol PiT VM recovery, you can now easily recover an entire VM that was accidentally deleted (and eradicated), or you can restore the state of a VM to a point in time at which you took a snapshot, directly from vCenter while using Pure’s vSphere plugin.

The requirements are Pure’s vSphere remote plugin 5.1.0 and Purity™ 6.2.6 or higher, both for PiT revert and for PiT VM undelete of a vVol VM whose FlashArray™ volumes have been eradicated from the FlashArray itself. If you’re undeleting a vVol VM that has not been eradicated yet, that functionality is present for Purity versions 6.1 and lower.

For PiT VM revert, you will also need to make sure that you have snapshots of all of the volumes associated with the vVol VM except swap: at least one data volume and one configuration volume.

For VM undelete before the volumes have been eradicated, you will need a snapshot of the vVol VM’s configuration volume.

For VM undelete after the vVol-backed VM has been eradicated, you’ll need a FlashArray protection group snapshot of all the VM’s data volumes, managed snapshots and configuration volumes.
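To illustrate that last requirement, here is a minimal sketch using the Pure Storage PowerShell SDK, assuming the VM’s data, managed-snapshot, and configuration volumes are already members of a protection group; the array endpoint and group name are placeholders, and it is worth verifying the cmdlet names against your SDK version with Get-Command:

    # Take a FlashArray protection group snapshot so a PiT undelete remains
    # possible even after the vVol VM's volumes have been eradicated.
    $array = New-PfaArray -EndPoint 'flasharray.example.com' `
                          -Credentials (Get-Credential) -IgnoreCertificateError
    New-PfaProtectionGroupSnapshot -Array $array -ProtectionGroupName 'vvol-vm-pgroup'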

Rather than rehash what my teammate Alex Carver has put a lot of work into, I’m just going to link to the KB and videos he created:

Download the new plugin (part of Pure’s OVA), read the release notes and test out vVol PiT recovery today! As with a lot of things, it’s better to have some understanding of what’s happening and why before you need it as part of your recovery process. Please note that you can also upgrade in-place from 5.0.0 to 5.1.0 (and future remote plugin releases) by following this guide.

vCenter Storage Provider “Refresh certificate” Functionality Restored

This will be a short blog, partially because my teammate Alex Carver already wrote a great blog covering one workaround for this broken button, using vCenter’s MOB.

If you have been using self-signed certificates in your vVols environment since vCenter 6.7 and updated to vCenter 7.0, you might have noticed something frustrating when trying to refresh those certificates manually: the button was greyed out! If you were like me, you probably wondered why this useful functionality was removed and thought maybe it was for security reasons. Your concerns might have seemed validated when a search of VMware’s KB system turned up this KB, which read as though the functionality had been removed on purpose (it was recently updated to reflect the current situation better).

Turns out my guess was wrong, and that KB was a little misleading. VMware has brought this button’s functionality back in vCenter 7.0U3d and higher. You might say to yourself, “that’s great Nelson, but I don’t upgrade my production vCenter whenever a new vCenter version comes out”. If your certificates will expire before you move to a newer version of vCenter and you want a simpler workflow than re-creating the storage providers, Alex Carver has the method for you: using vCenter’s MOB to refresh the storage providers without re-creating them.

Documentation Update, Best Practices and vRealize

So a few updates. I just updated my vSphere Best Practices guide and it can be found here:

Download Best Practices Guide PDF

I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. Furthermore, there are a few documents I have written and published that I want to mention.

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

Continue reading “Documentation Update, Best Practices and vRealize”

VMFS Snapshots and the FlashArray Part I: Mounting an unresolved VMFS

This is part 1 of a 7-part series. Questions around managing VMFS snapshots have been cropping up a lot lately, and I realized I didn’t have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing it. So I put a long series out here about how to do all of this. Let’s start with what an unresolved VMFS is and how to mount it (a quick scripted sketch follows the list below).

The series being:

  1. Mounting an unresolved VMFS
  2. Why not force mount?
  3. Why might a VMFS resignature operation fail?
  4. How to correlate a VMFS and a FlashArray volume
  5. How to snapshot a VMFS on the FlashArray
  6. How to mount a VMFS FlashArray snapshot
  7. Restoring a single VM from a FlashArray snapshot
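As a preview of part 1’s topic, unresolved copies can be listed and resignatured from esxcli. A minimal PowerCLI sketch, with the host name and volume label as placeholders:

    # List unresolved (snapshot) VMFS volumes on a host, then resignature one
    # by its original volume label so vSphere will mount it under a new UUID.
    $esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi01.example.com') -V2
    $esxcli.storage.vmfs.snapshot.list.Invoke()
    $esxcli.storage.vmfs.snapshot.resignature.Invoke(@{ volumelabel = 'olddatastore' })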

Continue reading “VMFS Snapshots and the FlashArray Part I: Mounting an unresolved VMFS”

Scripting ESI PowerShell cmdlets and VMware

EMC offers a variety of tools to manage and enhance your virtual or physical environments, some free, some licensed. For most people, EMC tools for VMware bring to mind the free Virtual Storage Integrator, which is more commonly referred to as VSI.

VSI is a great tool that continues to improve with each version; it allows you to provision storage, manage pathing, configure SRM, etc. The one thing it does not have is a way to automate these tasks through an API or CLI. This is where another product comes in, one that many do not associate with VMware. The EMC Storage Integrator (ESI) is often seen as the Microsoft version of VSI, but that isn’t really true at all. While it might have started out that way, and does indeed support Hyper-V and has a ton of Microsoft-specific features, it is really the heterogeneous storage integrator. Importantly, it has a very handy and powerful feature: PowerShell cmdlets.
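A quick way to see what the toolkit exposes before scripting against it is plain PowerShell discovery. In this sketch the module name 'ESIPSToolkit' is a placeholder, since the exact name depends on how your ESI PowerShell Toolkit install registers itself:

    # Find and load the ESI PowerShell Toolkit module, then enumerate its
    # cmdlets. 'ESIPSToolkit' is a placeholder; use whatever name the first
    # command actually returns on your system.
    Get-Module -ListAvailable | Where-Object { $_.Name -match 'ESI' }
    Import-Module 'ESIPSToolkit'
    Get-Command -Module 'ESIPSToolkit' | Sort-Object Noun | Select-Object Name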

Continue reading “Scripting ESI PowerShell cmdlets and VMware”