Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)

This is going to be broken up into two parts: first, a live migration where no VMs are powered off during the migration; second, a migration where you temporarily power off the VMs attached to the SCSI datastore.

Why would you want to do it one way or another?

Pros of live migration:

  • No VM downtime
  • Simpler configuration changes and a simpler overlap period while both datastores exist; less to go wrong or mess up

Pros of powering off VMs:

  • The total migration time will be significantly shorter because no data has to be moved. VMware currently doesn’t support XCOPY for NVMe-oF (even on the same array), so a live Storage vMotion has to copy every block through the host instead of offloading the copy to the array.

Great, you’ve decided on a live migration for your VMs because you don’t care how long it takes; you just want to minimize VM downtime. If you haven’t already, you’ll need to follow the guides Pure Storage has for setting up NVMe-oF in your environment.

Once you’ve configured NVMe-oF in your environment, you’ll need to create the namespace (volume), connect it to the appropriate host group, create the NVMe-oF datastore in vCenter, and finally Storage vMotion your VMs from the SCSI datastore to the NVMe datastore.

Create the Volume

From a FlashArray perspective, this is identical to SCSI except for slightly different terms and labels. Cody wrote a nice article explaining the differences. Log into your FlashArray, select (1) Storage, then (2) Volumes, then click the (3) + on the right-hand side of the GUI.

In the window that pops up, populate a (1) Name for the namespace (volume), give it a (2) Provisioned Size then click (3) Create.

Note the volume serial number by going to (1) Storage then (2) Volumes, finding your (3) Volume in the list, then (4) clicking on its hyperlinked name.

On the next window, note the Serial of the volume. We will use this later in vCenter to validate that we are connecting the right namespace.
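
If you would rather script this step than click through the GUI, here is a minimal sketch using the Pure Storage purestorage Python REST client. The array address, API token, volume name, and size are placeholders for my lab, and this is just one way to do it rather than an official procedure.

```python
# Minimal sketch: create the namespace (volume) and record its serial number.
# Requires the Pure Storage REST client: pip install purestorage
from purestorage import FlashArray

# Placeholders: use your array's management address and a real API token.
array = FlashArray("flasharray.example.com", api_token="your-api-token")

# Create a 2 TB volume; FlashArray size strings use suffixes like "2T".
array.create_volume("nvme-ds-01", "2T")

# Fetch the volume and note its serial so we can validate the namespace in vCenter later.
vol = array.get_volume("nvme-ds-01")
print("Volume serial:", vol["serial"])
```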

Connect The Volume To the Appropriate Host Group

Still in the FlashArray GUI, go back to (1) Storage, select (2) Hosts, then select the (3) Host Group you have created for your NVMe-oF hosts. In this case, I am setting this up for NVMe-FC but the steps will be the same for NVMe-RoCE after you have followed the previously linked KB articles.

Next, click the three vertical dots (technically a kebab menu, not a hamburger) and select Connect.

For the last step in the FlashArray GUI, select the (1) Namespace (volume) you created before then click (2) Connect.
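
This step can be scripted as well; the sketch below continues from the previous one (same array object) and assumes the host group for your NVMe-oF hosts is named nvme-fc-hosts, which is a placeholder.

```python
# Minimal sketch: connect the namespace (volume) to the NVMe-oF host group.
# "nvme-fc-hosts" is a placeholder for the host group created for your NVMe-oF hosts.
connection = array.connect_hgroup("nvme-fc-hosts", "nvme-ds-01")
print(connection)  # quick sanity check of the new host group connection

array.invalidate_cookie()  # end the REST session
```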

Create The NVMe-oF Datastore

Switching over to vCenter, we’ll first want to create a datastore from the namespace that we’ve just presented to our host group. This process is easier than with SCSI datastores because you do not have to rescan the storage adapters; all you need to do is create a datastore on top of the NVMe namespace that is already present.

(1) Right-click the vSphere cluster you’ve presented the namespace to, hover over (2) Storage, then click (3) New Datastore.

Select (1) VMFS (currently vVols is unsupported by VMware with NVMe-oF) and click (2) Next.

Specify a (1) Name for your datastore, (2) select a host that the namespace was presented to, select the (3) namespace from the list, and click (4) Next. Validate that the serial number shown in the Name column matches the serial of the namespace (volume) you noted earlier in the FlashArray GUI.

Select (1) VMFS 6 (who uses 5 anymore anyways?!) and click (2) Next.

Click (1) Next.

Review the details and click (1) Finish.

Validate that the hosts are connected to your newly created NVMe-oF datastore by going to the (1) Storage tab, selecting the (2) Datastore Name and clicking on the (3) Hosts tab. If anything looks incorrect here (for example, not all hosts in the cluster are connected), review your NVMe-oF configuration for issues.
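
If you want to script the same check, a rough pyVmomi sketch that finds the datastore and prints which hosts have it mounted is below. The vCenter address, credentials, and datastore name are placeholders, and certificate verification is disabled purely for brevity.

```python
# Rough sketch: list which ESXi hosts have the new NVMe-oF datastore mounted.
# vCenter address, credentials, and the datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.name == "nvme-ds-01":
        for mount in ds.host:  # one DatastoreHostMount entry per host
            print(mount.key.name, "mounted:", mount.mountInfo.mounted,
                  "accessible:", mount.mountInfo.accessible)

Disconnect(si)
```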

Storage vMotion the VMs from SCSI-backed Datastore(s) to NVMe-backed Datastore(s)

Staying in the vCenter GUI, select the (1) Hosts and Clusters tab, right-click the (2) VM you want to migrate from SCSI to NVMe, then select (3) Migrate… from the menu that pops up.

Select (1) Change storage only from the window that pops up and click (2) Next.

Select the (1) NVMe datastore you created before, then click (2) Next. Optionally, you can also modify the VM’s storage policy and the virtual disk format.

Finally, verify the details of the migration and click (1) Finish.
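
If you have a lot of VMs to move, clicking through the wizard for each one gets old quickly. Below is a hedged pyVmomi sketch of the same storage-only migration; it assumes a vCenter session (content) established the same way as in the earlier validation sketch, and the VM and datastore names are placeholders.

```python
# Rough sketch: storage vMotion a VM onto the new NVMe-oF datastore.
# Assumes an existing pyVmomi session ('content'); names below are placeholders.
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type whose name matches."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next((obj for obj in view.view if obj.name == name), None)

vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
target_ds = find_by_name(content, vim.Datastore, "nvme-ds-01")

# "Change storage only": relocate the VM's files to the NVMe-backed datastore.
spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec=spec))
print(vm.name, "now resides on", vm.datastore[0].name)
```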

And now you wait until the VM has migrated to the NVMe-oF datastore. Migrations in general can be daunting, but with NVMe-oF they can be remarkably simple. Hopefully you found this helpful.

Tech Preview: vCenter Site Recovery Manager with ActiveCluster

An increasingly common use case for Active-Active replication in vSphere environments is vSphere Metro Storage Cluster (vMSC), which I wrote about recently in this paper:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/002ActiveCluster_with_VMware/PDF_Guide%3A_Implementing_vSphere_Metro_Storage_Cluster_With_ActiveCluster

That paper covers how a stretched vSphere cluster interacts with the active-active replication we offer on the FlashArray, called ActiveCluster. Continue reading “Tech Preview: vCenter Site Recovery Manager with ActiveCluster”

Semi-transparent failover with VMFS and Active/Passive Replication

So in a blog series that I started a few weeks back (still working on finishing it), I wrote about managing snapshots and resignaturing of VMFS volumes. One of the posts was dedicated to why I would choose resignaturing over force mounting almost all of the time.

An obvious question after that post is: well, when would I want to force mount? There is one situation where I think it is a decent option: a failover where the recovery site is the same site as the production site in terms of compute/vCenter, and only the storage fails over to another array. I see this situation becoming increasingly common as network pipes get bigger.

Continue reading “Semi-transparent failover with VMFS and Active/Passive Replication”

XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks

Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into the exact design changes, so this is what I observed and my educated guess at how it was done. Look for more details as time goes on.

***This blog post turned out longer than I expected; it probably should have been a two-parter, so I apologize for the length.***

As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handle XCOPY. XCOPY, as you may be aware, is the SCSI command used by ESXi to offload cloning, Storage vMotion, and deploy-from-template operations to a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered on three things:

  1. Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e., how much data a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle copy sizes up to 256 MB (though the ODX implementation is a bit different).
  2. Allow ESXi to query the Maximum Segment Length reported by the Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells ESXi what to use as the maximum transfer size and would spare end users the hassle of making manual transfer size changes.
  3. Allow for thin virtual disks to leverage a larger transfer size than 1 MB.

The first two are currently supported by VMware in only a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
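
For context on point 1: the knob behind those manual transfer size changes is the ESXi advanced setting DataMover.MaxHWTransferSize, expressed in KB. A small pyVmomi sketch to check its current value on each host is below; it assumes a vCenter session (si) has already been established, and is only a sketch for my lab setup.

```python
# Sketch: report each host's configured maximum XCOPY transfer size (in KB).
# Assumes a pyVmomi session to vCenter already exists as 'si' (e.g. via SmartConnect).
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for opt in host.configManager.advancedOption.QueryOptions("DataMover.MaxHWTransferSize"):
        print(host.name, opt.key, "=", opt.value, "KB")
```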

Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”

Pure Storage FlashArray SRA for Site Recovery Manager

I have been working with VMware’s vCenter Site Recovery Manager since the tail end of the 1.x release, and I have to say this is the most excited I have been about a Storage Replication Adapter release that I can remember. Since I started with Pure in late April 2014, I have been working with our development team and product management to design and shape this initial release of the Pure Storage SRA. I have to say it has been a blast: a really great team that does some really amazing work! It is now officially approved and posted on VMware’s compatibility guide and SRA download site:

http://www.vmware.com/resources/compatibility/detail.php?productid=38264&deviceCategory=sra&details=1&partner=399

https://my.vmware.com/group/vmware/details?downloadGroup=SRM_SRA55&productId=451

Continue reading “Pure Storage FlashArray SRA for Site Recovery Manager”

Service release SRDF SRA 5.5.1.0 for VMware Site Recovery Manager

As I wrote about in a recent post, the 5.1 version of the SRDF Storage Replication Adapter was updated in a service release. Similar fixes and a few other changes have also been added to version 5.5 of the SRDF SRA. If you are running VMware Site Recovery Manager version 5.1 or 5.5, this is the SRA you should be using. SRM 5.0 users can only use the 5.1 SRA (and while the 5.1 SRA supports SRM 5.0 and 5.1, I recommend SRM 5.1 users use the 5.5 SRA).

So what’s new in the latest SR for SRDF SRA (5.5.1.0)?

Continue reading “Service release SRDF SRA 5.5.1.0 for VMware Site Recovery Manager”

Updated SRDF Storage Replication Adapter for VMware SRM 5.5 TechBook

Quick overview: the SRDF Storage Replication Adapter for VMware SRM TechBook is an in-depth implementation guide focused on how to install, configure, and manage SRDF with VMware’s vCenter Site Recovery Manager product. It includes an overview of SRDF and the tools involved, and covers how to perform test recoveries, migrations, and disaster recovery failovers.

My last hurrah with this TechBook! Since I have moved on to a new (and exciting!) role within the Open Innovation Lab in the EMC Office of the CTO, it is time for me to pass the torch of the SRM TechBook.

Anyways, you can find the updated TechBook here:

https://support.emc.com/docu38641_Using-the-SRDF-Adapter-for-VMware-Site-Recovery-Manager-5.5.pdf?language=en_US

Continue reading “Updated SRDF Storage Replication Adapter for VMware SRM 5.5 TechBook”

Virtual Storage Integrator: Symmetrix SRA Utilities 5.6

Late last week I posted a summary blog on the latest SRDF Storage Replication Adapter for VMware Site Recovery Manager here:

https://www.codyhosterman.com/2013/10/25/updated-srdf-storage-replication-adapter-released-for-srm-5-5/

I detailed the new features and other changes in the 5.5 release and briefly mentioned the latest release of the Virtual Storage Integrator Symmetrix SRA Utilities, which helps users configure the SRDF SRA. On 10/25, we posted the latest release of the SRA Utilities, version 5.6.

Version 5.6 of the SRA Utilities has been enhanced in tandem with the SRDF SRA to support the new features the SRA has to offer. Most of these enhancements relate to the masking control functionality that is newly supported by the SRA.

Continue reading “Virtual Storage Integrator: Symmetrix SRA Utilities 5.6”

Updated SRDF Storage Replication Adapter released for SRM 5.5

Today EMC posted the updated SRDF Storage Replication Adapter (SRA) 5.5 for Symmetrix VMAX arrays to their website:

https://download.emc.com/downloads/DL49914_EMCSRDFSRA_5.5.0.0.exe.exe

It will be on VMware’s site shortly:

https://my.vmware.com/web/vmware/details?downloadGroup=SRM_SRA&productId=357&rPId=4220

This adapter includes support for VMware vCenter Site Recovery Manager 5.5 (as well as “legacy” support for SRM 5.1).

Continue reading “Updated SRDF Storage Replication Adapter released for SRM 5.5”

Migrating a Raw Device Mapping with Federated Live Migration

Migrating a virtual machine that uses 100% virtual disks is a simple task thanks to VMware Storage vMotion, but migrating a VM that uses Raw Device Mappings from one array to another is somewhat trickier. There are options to convert an RDM into a virtual disk, but that might not be feasible for applications that still require RDMs. Other options are host-based or in-guest mechanisms that copy the data from one device to another; those can be complex and may require special drivers, or even downtime, to fully finish the transition. To solve this issue for physical hosts, EMC introduced a Symmetrix feature called Federated Live Migration.

Federated Live Migration (FLM) allows the migration of a device from one Symmetrix array to another without any downtime to the host, and it does not affect the SCSI inquiry information of the original source device. Therefore, even though the device now resides on a completely different Symmetrix array, the host is none the wiser. FLM leverages Open Replicator functionality to migrate the data, so it has some SAN requirements: the source array must be zoned to the target array. An FLM setup looks like the image below:

Continue reading “Migrating a Raw Device Mapping with Federated Live Migration”