Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)

This is going to be broken up into two parts: first, a live migration, where no VMs are powered off during the migration; second, a migration where you temporarily power off the VMs attached to the SCSI datastore.

Why would you want to do it one way or another?

Pros of live migration:

  • No VM downtime
  • Simpler configuration changes and less overlap; less to go wrong or mess up

Pros of powering off VMs:

  • The total migration time will be significantly shorter because no data has to be moved. With a live migration, all of the data must be copied, since VMware currently doesn’t support XCOPY for NVMe-oF (even on the same array)

Great, you’ve decided on a live migration because you don’t care how long it takes; you just want to minimize VM downtime as much as possible. If you haven’t already, you’ll need to follow the guides Pure Storage has for setting up NVMe-oF in your environment.

Once you’ve configured NVMe-oF in your environment, you’ll need to create the namespace (volume), connect it to the appropriate host group, create the NVMe-oF datastore in vCenter, and finally Storage vMotion your VMs from the SCSI datastore to the NVMe datastore.

Create the Volume

From a FlashArray perspective, this is identical to SCSI except for slightly different terms and labels. Cody wrote a nice article explaining the differences. Log into your FlashArray, select (1) Storage, then (2) Volumes, then click the (3) + on the right-hand side of the GUI.

In the window that pops up, populate a (1) Name for the namespace (volume), give it a (2) Provisioned Size then click (3) Create.

Note the volume serial number by going to (1) Storage then (2) Volumes, finding the name of your (3) Volume, then (4) clicking on the hyperlink name of it.

On the next window, note the Serial of the volume. We will use this later in vCenter to validate that we are connecting the right namespace.
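If you prefer to script this step, the same create-and-note-the-serial flow can be done with the Pure Storage PowerShell SDK v2. This is a minimal sketch: the array address, credentials, and volume name are hypothetical placeholders, and you should confirm cmdlet parameters against your SDK version.

    # Minimal sketch using the Pure Storage PowerShell SDK v2.
    # 'flasharray.example.com' and 'nvme-ds-01' are hypothetical placeholders.
    Import-Module PureStoragePowerShellSDK2

    $fa = Connect-Pfa2Array -Endpoint 'flasharray.example.com' -Credential (Get-Credential) -IgnoreCertificateError

    # Create a 2 TB namespace (volume); -Provisioned is in bytes (2TB is a PowerShell numeric literal).
    New-Pfa2Volume -Array $fa -Name 'nvme-ds-01' -Provisioned 2TB

    # Note the serial; we will match it against the namespace in vCenter later.
    (Get-Pfa2Volume -Array $fa -Name 'nvme-ds-01').Serial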

Connect the Volume to the Appropriate Host Group

Still in the FlashArray GUI, go back to (1) Storage, select (2) Hosts, then select the (3) Host Group you have created for your NVMe-oF hosts. In this case, I am setting this up for NVMe-FC but the steps will be the same for NVMe-RoCE after you have followed the previously linked KB articles.

Next, click the three vertical dots (the kebab menu) and select Connect.

For the last step in the FlashArray GUI, select the (1) Namespace (volume) you created before then click (2) Connect.
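The host group connection can be scripted the same way; a sketch assuming the host group for your NVMe-oF hosts is named 'nvme-hg' (a placeholder):

    # Connect the namespace (volume) to the NVMe-oF host group.
    New-Pfa2Connection -Array $fa -HostGroupNames 'nvme-hg' -VolumeNames 'nvme-ds-01'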

Create the NVMe-oF Datastore

Switching over to vCenter, we’ll first want to create a datastore from the namespace that we’ve just presented to our host group. This process is easier than with SCSI datastores because you do not have to rescan the storage adapters; all you need to do is create a datastore on top of the NVMe namespace that is already present.

(1) Right-click the vSphere cluster you’ve presented the namespace to, hover over (2) Storage, then click (3) New Datastore.

Select (1) VMFS (currently vVols is unsupported by VMware with NVMe-oF) and click (2) Next.

Specify a (1) Name for your datastore, (2) select a host that the namespace was presented to, select the (3) namespace from the list, and click (4) Next. In the Name column, validate the serial against the namespace (volume) serial you noted earlier in the FlashArray GUI.

Select (1) VMFS 6 (who uses 5 anymore anyways?!) and click (2) Next.

Click (1) Next.

Review the details and click (1) Finish.
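If you’d rather script the datastore creation, here is a PowerCLI sketch. It assumes the namespace is already visible to the host; the vCenter, host, datastore, and device names are hypothetical placeholders, and you should cross-check the eui.* canonical name against the serial you noted in the FlashArray GUI.

    # PowerCLI sketch; all names below are hypothetical placeholders.
    Connect-VIServer -Server 'vcenter.example.com'
    $esx = Get-VMHost -Name 'esxi-01.example.com'

    # List devices; NVMe namespaces surface with eui.* canonical names.
    # Cross-check against the serial noted earlier before picking one.
    Get-ScsiLun -VmHost $esx | Select-Object CanonicalName, CapacityGB

    # Create a VMFS 6 datastore on the chosen namespace.
    New-Datastore -VMHost $esx -Name 'nvme-ds-01' -Path 'eui.00xxxxxxxxxxxxxxxx' -Vmfs -FileSystemVersion 6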

Validate that the hosts are connected to your newly created NVMe-oF datastore by going to the (1) Storage tab, selecting the (2) Datastore Name and clicking on the (3) Hosts tab. If anything looks incorrect here (for example, not all hosts from the cluster are connected), review your NVMe-oF configuration for issues.
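You can spot-check the same thing from PowerCLI:

    # Hosts that see the new datastore should match the cluster membership.
    Get-VMHost -Datastore (Get-Datastore 'nvme-ds-01') | Select-Object Name, ConnectionState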

Storage vMotion the VMs from SCSI-backed Datastore(s) to NVMe-backed Datastore(s)

Staying in the vCenter GUI, select the (1) Hosts and Clusters tab, right-click the (2) VM you want to migrate from SCSI to NVMe, then select (3) Migrate… from the menu that pops up.

Select (1) Change storage only from the window that pops up and click (2) Next.

Select the (1) NVMe datastore you created before, then click (2) Next. Optionally, you can modify the storage policies for the VM and the virtual disk format.

Finally, verify the details of the migration and click (1) Finish.
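The migration can also be scripted; Move-VM with a datastore target performs a Storage vMotion. A sketch with hypothetical VM and datastore names:

    # Storage vMotion a single running VM to the NVMe datastore.
    Move-VM -VM (Get-VM 'app-vm-01') -Datastore (Get-Datastore 'nvme-ds-01')

    # Or migrate every VM currently on the SCSI datastore.
    Get-Datastore 'scsi-ds-01' | Get-VM | Move-VM -Datastore (Get-Datastore 'nvme-ds-01')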

Now wait until the VM has migrated to the NVMe-oF datastore. Migrations in general can be very daunting, but luckily, with NVMe-oF, they can be extremely simple. Hopefully you found this helpful.

Pure Storage’s vROps Management Pack™ 3.2.0 – New Features and Changes

Pure Storage recently launched a new management pack for vROps that had a number of important fixes and some changes to the interface. You can download it here and find the full release notes here. What’s new?

  • Interface changes
    • Updated icons
    • Restructuring of Pure Storage objects in the Object view of vROps
  • Added an Offload Snapshot capacity metric
  • Added a FlashArray Software™ version property

Let’s go over the interface changes first. If you navigate to Environment -> Object Browser -> Pure Storage FlashArray -> FlashArray Resources -> PureStorage World and expand an array, the layout will look quite different than what was there before. For starters, the icons have almost all been updated to mirror what you would expect to see on a modern FlashArray Purity version (or vCenter if that is a vCenter object). We made this change to make the vROps management pack experience as close to the FlashArray experience as possible.

Additionally, we moved the structure of the objects around to be more consistent with what you’d expect from the FlashArray. No objects were removed and the same object can be listed in multiple places where it makes sense (for example, if you expand a Hosts group, you will see the pertinent volumes there as well as under the Volumes group).

Next, we’ve added the Offload Snapshot capacity metric in this version as well as a FlashArray Offload Target object. The Offload Target object is visible under Protection and you can see the current space used by that Offload Target in the badge for that object; additionally, there is a Capacity metric for this object that shows historical consumption.

Lastly, you can now retrieve the Purity version of the array directly from vROps to help plan your FlashArrays’ upgrades. This information is found by selecting a FlashArray and going to Metrics -> Properties -> Details -> Purity Version.

Native Pure Storage FlashArray™ File Replication – Purity 6.3


With the release of Purity 6.3, native FA File replication has been added to the Pure Storage FlashArray™ software. This adds an often-critical feature to the FA File folder redirection solution I wrote about last year. Pure Storage is referring to this feature as ActiveDR for File Services.

ActiveDR for File Services is a useful feature if you’ve set up (or are going to set up) folder redirection on FA File and would like the file data to be replicated asynchronously to a different array, whether that FlashArray hardware is at the same site or a different one. This feature is included with FlashArray.

This allows you to use your FlashArray for native block and file workloads that need the protection replication provides, and lets you benefit from the great data reduction rates that FlashArray is known for on those replicated file sets.

Now, if you lose a site or an array for some reason, the file workload you have hosted on FA File can be recovered natively on FlashArray easily and quickly.

There are some differences between file and block workloads when it comes to ActiveDR replication. You can read more in the ActiveDR for File Services section of this Pure KB.

What’s New in vSphere 7.0 U2 Storage: Creating a File Repository on a vVol Datastore

This feature has many names. Creating a larger config vVol. Creating a sub-vVol datastore. Creating an ISO repository. Etc.

In 7.0U2, VMware added a new feature that supports creating a custom-size config vVol. While this was technically possible in earlier releases, it was not supported. Also, I should note that this is not supported by all vVol vendors, so of course speak to your vendor first.

First, to review what a config vVol is, check out this post:

What is a Config VVol Anyways?

In short, it is a mini VMFS that gets created when you create a directory in a vVol datastore (most commonly by creating a new VM). This defaults to 4 GB in size: enough to store the general VM files; some logs, VMDK pointers, the VMX file, and some other frivolities.

The issue, though, is that this was not large enough to store big things like ISOs or VIB files. So if you tried to upload one of those to a vVol datastore folder, it would fail with an out-of-space error. And you cannot upload to the root of a vVol datastore, because a vVol datastore is not a file system. So you had to use VMFS or NFS to store those objects.

This is no longer the case.

Continue reading “What’s New in vSphere 7.0 U2 Storage: Creating a File Repository on a vVol Datastore”

Pure Storage Plugin for the vSphere Client 4.5.0 Release

Howdy doody folks. Lots of releases are coming down the pipe in short order, and the latest is, well, the latest release of the Pure Storage Plugin for the vSphere Client. This may be our last release of it in this architecture (though we may have one or so more, depending on things) in favor of the new preferred client-side architecture that VMware released in 6.7. Details on that are here if you are curious.

Anyways, what’s new in this plugin?

The release notes are here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/Release_Notes_for_VMware_Solutions/Release_Notes%3A_Pure_Storage_Plugin_for_the_vSphere_Client#4.5.0_Release_Notes

But in short, five things:

  1. Improved protection group import wizard. This feature pulls in FlashArray protection groups and converts them into vVol storage policies. Previously this was rudimentary at best; it is now a full-blown, much more flexible wizard.
  2. Native performance charts. Previously, the performance charts for datastores (where we show FlashArray performance stats in the vSphere Client) were actually an iframe pulled from our GUI. This was a poor decision. We have redone this entirely from the ground up: the stats now come from the REST API and are drawn natively using the Clarity UI. Furthermore, far more stats are shown now, too.
  3. Datastore connectivity management. A few releases ago we added a feature to connect an existing datastore to new compute, but it wasn’t particularly flexible, it wasn’t helpful if there were connectivity issues, and it didn’t provide good insight into what was already connected. We now have an entirely new page that focuses on this.
  4. Host management. This has been entirely revamped. Initially, host management was laser-focused on one use case: connecting a cluster to a new FlashArray. There was no ability to add or remove a host or make adjustments and, as above, no good insight into the current configuration. The host and cluster objects now have their own pages with extensive controls.
  5. vVol datastore summary. This shows some basic information about the vVol datastore object.

First off, how do you install? The easiest method is PowerShell. See details (and other options) here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/User_Guides_for_VMware_Solutions/Using_the_Pure_Storage_Plugin_for_the_vSphere_Client/vSphere_Plugin_User_Guide%3A_Installing_the_vSphere_Plugin
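In short, it looks something like this. A sketch assuming the Install-PfavSpherePlugin cmdlet from the PureStorage.FlashArray.VMware module; parameters may differ by module version, and the vCenter name is a placeholder.

    # Install the module, connect to vCenter, then install the plugin.
    Install-Module PureStorage.FlashArray.VMware
    Connect-VIServer -Server 'vcenter.example.com'
    Install-PfavSpherePlugin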

Continue reading “Pure Storage Plugin for the vSphere Client 4.5.0 Release”

Managing vVol Storage Policies with PowerShell

I just posted about some new cmdlets here:

Also in that release are a few more cmdlets concerning storage policy creation, editing, and assignment. They were built to make the process easier. The original cmdlets are certainly still an option, and for very specific tasks they might be necessary, but the vast majority of common operations can be achieved more easily with these.

As always, to install run:

Install-Module PureStorage.FlashArray.VMware

Or to upgrade:

Update-Module PureStorage.FlashArray.VMware

These modules are open source, so if you just want to use my code, or to open an RFE or issue, go here:

https://github.com/PureStorage-OpenConnect/PureStorage.FlashArray.VMware/

For detailed help on a cmdlet, run Get-Help followed by the cmdlet name.
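For example, to see full help (including examples) for one of the storage policy cmdlets:

    # Standard PowerShell help; works for any cmdlet in the module.
    Get-Help New-PfaVvolStoragePolicy -Full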

Continue reading “Managing vVol Storage Policies with PowerShell”

New vVol Replication PowerShell Cmdlets

Happy New Year everyone! Let’s work to make 2021 a better year.

In furtherance of that goal, I have put out a few new vVol-related PowerShell cmdlets! So baby steps I guess.

The following are the new cmdlets:

Basics:

  • Get-PfaVvolStorageArray

Replication:

  • Get-PfaVvolReplicationGroup
  • Get-PfaVvolReplicationGroupPartner
  • Get-PfaVvolFaultDomain

Storage Policy Management:

  • Build-PfaVvolStoragePolicyConfig
  • Edit-PfaVvolStoragePolicy
  • Get-PfaVvolStoragePolicy
  • New-PfaVvolStoragePolicy
  • Set-PfaVvolVmStoragePolicy

Now to walk through how to use them. This post will talk about the basics and the replication cmdlets. The next post will talk about the profile cmdlets.
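Before the full walkthrough, here is a quick hedged taste. It assumes the module is installed and you are connected to vCenter with Connect-VIServer; the -ReplicationGroup parameter name here is my assumption, so check Get-Help on each cmdlet for the exact signatures in your module version.

    # The FlashArrays serving your vVol datastores.
    Get-PfaVvolStorageArray

    # List replication groups, then look up a group's target-side partner.
    # (-ReplicationGroup is an assumed parameter name; verify with Get-Help.)
    $groups = Get-PfaVvolReplicationGroup
    Get-PfaVvolReplicationGroupPartner -ReplicationGroup $groups[0]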

Continue reading “New vVol Replication PowerShell Cmdlets”

SRA 4.0 Released! ActiveDR support

We just released the latest version of our Storage Replication Adapter, version 4.0, for VMware Site Recovery Manager. There are a lot of enhancements and improvements in this release; if you are on 3.1 (or certainly earlier), I recommend an upgrade when you get a chance.

For all the need-to-know information (release notes, user guide, videos, download link, etc.) see here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/Quick_Reference_by_VMware_Product_and_Integration/Site_Recovery_Manager_Quick_Reference

Continue reading “SRA 4.0 Released! ActiveDR support”

Disk.DiskMaxIOSize and the Blue Screen of Death

While the title of this post does sound like a halfway-decent Harry Potter novel, this is far more nefarious. Pure Storage, like many other vendors, has a best practice around lowering the Disk.DiskMaxIOSize setting on ESXi hosts when using UEFI boot for your Windows VMs. Why? Well:

Yes, in a few situations, not having it set would cause a BSOD. First off, why?
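If you need to check or apply this across hosts, here is a PowerCLI sketch. It assumes 4096 KB (4 MB) as the target value, which is what our best practice has called for; verify the current guidance for your Purity and ESXi versions first.

    # Report the current Disk.DiskMaxIOSize on every host in the inventory.
    Get-VMHost | Get-AdvancedSetting -Name 'Disk.DiskMaxIOSize' | Select-Object Entity, Name, Value

    # Lower it to 4 MB (the setting is expressed in KB).
    Get-VMHost | Get-AdvancedSetting -Name 'Disk.DiskMaxIOSize' | Set-AdvancedSetting -Value 4096 -Confirm:$false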

Continue reading “Disk.DiskMaxIOSize and the Blue Screen of Death”

What’s New in Purity 6.0: ActiveDR

I have already posted about ActiveDR briefly here:

I wanted to go into more detail on ActiveDR (and more) in a “What’s New” series. One of the flagship features of the Purity 6.0 release is what we call ActiveDR. ActiveDR is a continuous replication feature, meaning it sends new data over to the secondary array as quickly as it can; it does not wait for an interval to replicate.

For the TL;DR, here is a video tech preview demo of the upcoming SRM integration as well as the setup of ActiveDR itself.

But ActiveDR is much more than just data replication: it protects your storage environment. Let me explain what that means.

Continue reading “What’s New in Purity 6.0: ActiveDR”