ActiveDR with NFS Datastores – Production Failover and Failback (Part 4)

Hello- this is part 4 in the series of blogs on ActiveDR + NFS datastores. In part 3, I covered how to configure vSphere for a test failover and then how to perform a test failover. In this blog I’ll be covering how to perform a failover and failback.

FlashArray Failure – ActiveDR Failover

What happens if an array fails? To simulate this situation, I'm going to forcefully stop Purity on both controllers of the source FlashArray (flasharray-x50-1). In this case, the workflow is the same as the failover test, except that you will not disconnect the networking from the VMs you are about to power on. In a real failover you generally want the recovered VMs back on the network immediately, so leave their networking as-is; the requirements of your environment might call for something different. You'll promote the surviving array's pod and power on the VMs from their last-replicated state.

If you want to replicate what I'm doing in a test or proof-of-concept deployment, simply unplug the power cables on the FlashArray. Please do not pull power on your production FlashArrays :-).

Let's promote the pod on the target FlashArray. In the FlashArray GUI of the target, left click on (1) Protection, left click on (2) ActiveDR, left click the (3) ellipsis, then left click (4) Promote Local Pod…
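If you'd rather script the promotion than click through the GUI, here's a minimal Python sketch using the py-pure-client SDK. The array address, API token, and pod name are placeholders for your environment, so treat this as a starting point rather than a drop-in script.

```python
# Minimal sketch: promote the local pod on the target FlashArray with
# py-pure-client (pip install py-pure-client). All names are placeholders.
from pypureclient import flasharray

client = flasharray.Client('flasharray-x50-2.example.com',
                           api_token='YOUR-API-TOKEN')

# Equivalent to "Promote Local Pod..." in the GUI.
response = client.patch_pods(
    names=['activedr-pod'],
    pod=flasharray.PodPatch(requested_promotion_state='promoted'))
print(response.status_code)  # 200 means the promotion request was accepted
```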

Continue reading “ActiveDR with NFS Datastores – Production Failover and Failback (Part 4)”

ActiveDR with NFS Datastores – vSphere Configuration and Test Failover (Part 3)

Hello- this is part 3 in the series of blogs on ActiveDR + NFS datastores. In part 2, I covered how to connect ActiveDR to an NFS file system that's backing an NFS datastore. In this blog, I'll be covering how to mount the target NFS export in vSphere and how to run a test failover. The reason for covering test failovers before production failovers and failbacks is that I strongly recommend performing or scheduling a test failover immediately after configuring any disaster recovery solution. You might discover that a requirement for failover is missing at exactly the moment a fast failover is critical; testing a failover of your environment before you need it in a production-down scenario will reduce or eliminate that pain.

For the purposes of this blog, I am using Pure Storage's remote vSphere plugin. In general, I strongly recommend installing and using this plugin to manage your FlashArray(s) more easily from the vSphere GUI. Additionally, I've made a demo video that walks through the steps covered here.

This environment already has a mounted NFS file system from the source FlashArray. The steps to mount the NFS file system from the source array are the same as for the target array, except that you don't need to promote the pod on the source array.
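If you want to script the vSphere side of the mount, here's a minimal pyVmomi sketch. The vCenter credentials, FlashArray file VIF, export path, and datastore name are all placeholders, and the host lookup is simplified for brevity.

```python
# Minimal sketch: mount a FlashArray NFS export as a datastore on one ESXi
# host with pyVmomi. Repeat per host in the cluster; names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in prod
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi-01.example.com')

spec = vim.host.NasVolume.Specification(
    remoteHost='flasharray-file-vif.example.com',  # FlashArray file VIF
    remotePath='/activedr-pod::nfs-fs',            # export path (placeholder)
    localPath='flasharray-nfs-ds',                 # datastore name in vSphere
    accessMode='readWrite',
    type='NFS')                                    # or 'NFS41' if applicable
host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```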

Whether you're performing failovers, running test failovers, or cleaning up the objects from those operations, you'll want to follow the steps outlined here.

Continue reading “ActiveDR with NFS Datastores – vSphere Configuration and Test Failover (Part 3)”

ActiveDR with NFS Datastores – FlashArray Configuration (Part 2)

Hello- this is part 2 in the series of blogs on ActiveDR + NFS datastores. In part 1, I introduced the two technologies to give you a background on them. In this blog I’ll be covering how to connect ActiveDR to an NFS file system that’s backing an NFS datastore.

For the purposes of this blog, I am using Pure Storage's remote vSphere plugin. In general, I strongly recommend installing and using this plugin to manage your FlashArray(s) more easily from the vSphere GUI. Additionally, I've made a demo video that walks through the steps covered here as well as the failover steps covered in part 3.

The first step is to establish an ActiveDR replication relationship between two arrays. While ActiveDR on block is continuous and can replicate as often as 1 second, ActiveDR on File currently has a minimum replication interval of 5 minutes.

On FlashArray, a pod is a management container for a group of volumes that can be stretched or linked between two FlashArrays (page 132). ActiveDR uses pods to manage replication between arrays. First, we'll create a pod on the source FlashArray by left clicking (1) Storage, then (2) Pods, then the (3) + to create a new pod. Give it a (4) Name and left click (5) Create.
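For the script-inclined, here's a minimal py-pure-client sketch of the same pod creation, plus the pod replica link that ties it to the target array. Pod and array names are placeholders, and the sketch assumes the two arrays are already connected and the remote pod already exists on the target.

```python
# Minimal sketch: create the source pod and link it to a pod on the target
# array for ActiveDR replication. All names are placeholders.
from pypureclient import flasharray

client = flasharray.Client('flasharray-x50-1.example.com',
                           api_token='YOUR-API-TOKEN')

# Create the source pod (equivalent to the GUI steps above).
client.post_pods(names=['activedr-pod'])

# Link it to the target array's pod to start ActiveDR replication; assumes
# 'flasharray-x50-2' is already a connected array with a pod of this name.
client.post_pod_replica_links(local_pod_names=['activedr-pod'],
                              remote_names=['flasharray-x50-2'],
                              remote_pod_names=['activedr-pod'])
```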

Continue reading “ActiveDR with NFS Datastores – FlashArray Configuration (Part 2)”

vSphere Plugin 5.3.2 Release – Updated VMFS Snapshot Recovery Wizard with CBT enabled VM(s)

Hello- with the release of Pure Storage’s vSphere plugin 5.3.2, there is an included fix that works around a known VMware issue with CBT VMs on VMFS datastores. Here is the release note item on the vSphere plugin:

“TMAN-18446: VMFS PiT and environmental recovery would fail when CBT (changed block tracking) was enabled on the VM. This is a known VMware issue. In order to workaround this issue, the VMFS PiT recovery workflow will now present options for recovery when recovering a CBT enabled VM.”

What changes were made to accomplish this? There is an additional GUI step when you are recovering from a snapshot of VM(s) that have CBT enabled. You'll go through the normal steps to recover the VMFS VM(s) from a FlashArray snapshot. Follow the steps under Recover a VMFS VM from FlashArray snapshot.
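If you want to check whether a VM has CBT enabled before kicking off a recovery, here's a minimal pyVmomi sketch. The vCenter details and VM name are placeholders, and the toggle at the end is optional.

```python
# Minimal sketch: check (and optionally toggle) CBT on a VM with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in prod
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'my-cbt-vm')  # placeholder name

print('CBT enabled:', vm.config.changeTrackingEnabled)

# Optional: disable CBT. A power cycle or snapshot create/delete may be
# needed for the change to fully take effect.
spec = vim.vm.ConfigSpec(changeTrackingEnabled=False)
vm.ReconfigVM_Task(spec)
Disconnect(si)
```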

Continue reading “vSphere Plugin 5.3.2 Release – Updated VMFS Snapshot Recovery Wizard with CBT enabled VM(s)”

ActiveDR with NFS Datastores – Overview (Part 1)

I'm going to be writing a series of blogs on ActiveDR (Active Disaster Recovery) with NFS datastores. Some of the posts I have planned for the near future are:

  • Failover scenarios where the ESXi hosts are connected to both arrays
  • Failover scenarios where the ESXi hosts are only connected to one array

In this blog, I'll introduce the two technologies and give some high-level information on how you might want to use them together.

Replication Options on FlashArray

This post covers FlashArray-specific replication techniques that you may or may not be familiar with. If you're not, my colleague Cody Hosterman has a great primer on our technologies that is worth a read.

Continue reading “ActiveDR with NFS Datastores – Overview (Part 1)”

NVMe-oF Features and Fixes in Pure Storage’s vSphere Plugin Versions

It can be difficult to appreciate the work that has gone into Pure Storage's vSphere plugin if you're not digesting the release notes for every release. Because NVMe-oF is going to become more and more relevant, I think it's worth highlighting some recent improvements we've made around NVMe-oF in the vSphere plugin. I'll mostly be referencing the vSphere plugin release notes in this blog. I strongly recommend installing the vSphere plugin for all of your vCenter + FlashArray needs, and it's required if you want to follow along with the plugin features covered later.

The first update involving NVMe-oF datastores came back in April 2020 with version 4.3.0, which added support for identifying NVMe-oF datastores. A good first step!

Continue reading “NVMe-oF Features and Fixes in Pure Storage’s vSphere Plugin Versions”

Why Do I Recommend Automatic Directory Management in FA File with NFS Datastores?

Hello- Nelson Elam here. I want to go over the reasons why I think you should enable automatic directory management (autodir) if you are planning to use NFS datastores on FA File. A quick note before we get started: autodir is not restricted to ESXi hosts, but ESXi hosts will be the focus of this blog.

What is autodir? Autodir is a way for FlashArray to reflect, as managed directories, the current directory structure on an NFS datastore that's managed by a connected host. What does this mean for ESXi? Whenever a VM gets created on an NFS datastore, a new directory (folder) gets created for the VM on the datastore. When a VM gets deleted from disk, the directory gets destroyed. Note that directories you create or destroy manually on an NFS datastore in vCenter get reflected on FlashArray as well. Simple enough!

FlashArray directory structure
vCenter datastore directory structure

If you’ve read the FA File launch blogs or have seen some of the webinars we’ve done about FA File or NFS datastores, you’ve likely seen or heard us talk about VM granular management being part of FA File. Autodir enables VM granular management. Let’s dive into VM granular management in the context of NFS datastores.

With autodir enabled, these changes are reflected on FlashArray, enabling FlashArray administrators to see the current state of the NFS file system from a directory perspective.

Want to figure out why the data reduction ratio of a file system dropped so significantly? Now you can see that on a per-VM basis on FlashArray (I'll include a scripted example of pulling these metrics below).

Want to see which VMs are spiking in load at inopportune times? You can use the FlashArray GUI to help figure that out. It's worth mentioning that this info is more easily consumed in Pure1 when using the VM Analytics Collector.

Want to have a special snapshot schedule for a certain group of VMs on a FlashArray-backed NFS datastore? With autodir, you can create snapshot policies and apply them to specific directories, allowing you to get around having to snapshot an entire NFS datastore like it’s a VMFS datastore. You can still snapshot the entire NFS file system if you want! Autodir enables you to have other options.

Your mission critical VMs likely have more complex snapshot retention and frequency requirements than your test VMs. With autodir, you can also apply multiple snapshot policies to the same directory (VM).
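As promised above, if you'd like to pull these per-VM (really per-managed-directory) space metrics programmatically rather than from the GUI, here's a minimal py-pure-client sketch. The array address and token are placeholders, and the attribute names follow my reading of the FlashArray REST 2.x schema, so verify them against your Purity version.

```python
# Minimal sketch: list managed directories and per-directory space metrics.
# Field names assume the FlashArray REST 2.x directory space schema.
from pypureclient import flasharray

client = flasharray.Client('flasharray.example.com',
                           api_token='YOUR-API-TOKEN')

for d in client.get_directories().items:
    # Each VM folder on the NFS datastore shows up as a managed directory.
    print(f'{d.name}: data_reduction={d.space.data_reduction}, '
          f'unique={d.space.unique}')
```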

That sounds great, Nelson, but surely autodir isn't a good option for every NFS datastore on FlashArray. What are the reasons you wouldn't want to enable autodir?

The main circumstance where autodir doesn't make sense is when the directory count in your NFS datastore exceeds the scale limits of autodir. Those limits can be found in this KB under “Managed Directories per array”.

If you want to see a demo of how autodir is configured on FlashArray, this video goes over it.

If you want to get detailed written instructions for how to configure autodir on FlashArray, this KB article is a good resource.

Features I Use Regularly in Pure’s vSphere Plugin

Today I want to tell you about what I use the vSphere plugin for regularly in my lab, to hopefully help you get more value out of your existing Pure array and tools. This guide assumes you already have the vSphere plugin installed (follow this guide if you don't currently have it installed or would like to upgrade to a more feature-rich remote plugin version). Our vSphere plugin release notes KB covers the differences between versions. If you aren't sure which version you want, use the latest.

Why should you care about the vSphere plugin, and why would I highlight these workflows for you? Pure's vSphere plugin can save you a significant amount of time in the configuration and management of your vSphere + FlashArray environment. It can also greatly reduce the barriers to success in your projects by reducing the steps required of the administrator to complete a workflow. Additionally, you might currently be using the vSphere plugin for a couple of workflows without realizing all of the great work our engineers have put into making your life easier.

I'm planning to write more blogs on the vSphere plugin; the next one will cover the highest-value features in current vSphere plugin versions.

Create and Manage FlashArray Hosts and Host Group Objects

If you’re currently a Pure customer, you have likely managed your host and host group objects directly from the array. Did you know you can also do this from the vSphere plugin without having to copy over WWNs/IPs manually? (1) Right-click on the ESXi cluster you want to create/manage a host or host group object on, (2) hover over Pure Storage, then (3) left-click on Add/Update Host Group.

In this menu, there are currently Fibre Channel and iSCSI protocol configuration options. We are exploring options here for NVMe-oF configuration; stay tuned by following this KB. You can also check a box to configure your ESXi hosts with Pure's iSCSI best practices, so you don't have to configure new iSCSI ESXi hosts manually.
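To give you a feel for what the plugin saves you here, below is a rough Python sketch of the manual equivalent: read a host's FC WWPNs with pyVmomi, then create a matching FlashArray host object with py-pure-client. All names are placeholders; this is an illustration of the busywork the plugin automates, not how the plugin itself is implemented.

```python
# Rough sketch: collect an ESXi host's FC WWPNs (pyVmomi) and create a
# matching FlashArray host object (py-pure-client). Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
from pypureclient import flasharray

ctx = ssl._create_unverified_context()  # lab only; use real certs in prod
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
esxi = next(h for h in view.view if h.name == 'esxi-01.example.com')

# Format each FC HBA's WWPN as colon-separated hex pairs (e.g. 10:00:00:...),
# which is how FlashArray displays WWNs.
wwns = []
for hba in esxi.config.storageDevice.hostBusAdapter:
    if isinstance(hba, vim.host.FibreChannelHba):
        raw = '{:016X}'.format(hba.portWorldWideName)
        wwns.append(':'.join(raw[i:i + 2] for i in range(0, 16, 2)))
Disconnect(si)

client = flasharray.Client('flasharray.example.com',
                           api_token='YOUR-API-TOKEN')
client.post_hosts(names=['esxi-01'], host=flasharray.HostPost(wwns=wwns))
```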

FlashArray VMFS Datastore and Volume Management (Creation and Deletion)

There are a lot of options for VMFS volume management in the plugin. I’ll only cover the basics: creation and deletion.

When you use the plugin for datastore creation, the plugin will create the appropriate datastore in vSphere and the volume on the FlashArray, and it will connect the volume to the appropriate host(s) and host group objects on the FlashArray. (1) Right-click on the pertinent cluster or host object in vSphere, (2) hover over Pure Storage and finally (3) left-click on Create Datastore. This will bring up a wizard with a lot of options that I won't cover here, but the end result will be a datastore that has a FlashArray volume backing it.

The great thing about deleting a datastore from the plugin is that there are no additional steps required on the array to clean up the objects. This is the most satisfying workflow for me personally because cleanup in a lab can feel like it’s not a good use of time until I’ve got hundreds of objects worth cleaning up. This workflow enables me to quickly clean up every time after I’ve completed testing instead of letting this work pile up.

(1) Right-click the datastore you want to delete, (2) hover over Pure Storage and (3) left-click on Destroy Datastore. After the confirmation prompt, the FlashArray volume backing that datastore will be destroyed and will remain pending eradication for whatever eradication timer is configured on the FlashArray (default 24 hours, configurable up to 30 days with SafeMode). That's it!
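For reference, the array-side cleanup the plugin automates looks roughly like this py-pure-client sketch. The volume name is a placeholder, and it assumes the volume has already been disconnected from its hosts.

```python
# Minimal sketch: destroy a FlashArray volume after the datastore has been
# removed in vSphere. Volume name is a placeholder.
from pypureclient import flasharray

client = flasharray.Client('flasharray.example.com',
                           api_token='YOUR-API-TOKEN')

# Destroy the volume; it remains recoverable until the eradication timer
# (default 24 hours) expires.
client.patch_volumes(names=['vmfs-ds-vol'],
                     volume=flasharray.VolumePatch(destroyed=True))

# To eradicate immediately instead of waiting out the timer (irreversible!):
# client.delete_volumes(names=['vmfs-ds-vol'])
```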

FlashArray Snapshot Creation

One of the benefits of FlashArray is its portable and lightweight snapshots. The good news is that you can create these directly from vSphere without having to log into the FlashArray. It's worth mentioning that the snapshot recovery workflows built into the vSphere plugin (vVols and VMFS) are far more powerful and useful when you really need them, but I'm covering what I use regularly, and I rarely have to recover from snapshots in my lab. I try to take snapshots every time I make a major change to my environment in case I need to quickly roll back.

There are two separate workflows for snapshot creation: one for VMFS and one for vVols. The granularity advantage with vVols over VMFS is very clear here- with VMFS, you are taking snapshots of the entire VMFS datastore, no matter how many VMs or disks are attached to those VMs. With vVols, you only have to snapshot the volumes you need to, as granular as a single disk attached to a single VM.

With VMFS, (1) right click on the datastore, (2) hover over Pure Storage and (3) left click on Create Snapshot.
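The array-side equivalent of that VMFS datastore snapshot is a crash-consistent snapshot of the backing volume. Here's a minimal py-pure-client sketch with placeholder names:

```python
# Minimal sketch: snapshot the FlashArray volume backing a VMFS datastore.
# Volume name and suffix are placeholders.
from pypureclient import flasharray

client = flasharray.Client('flasharray.example.com',
                           api_token='YOUR-API-TOKEN')

client.post_volume_snapshots(
    source_names=['vmfs-ds-vol'],
    volume_snapshot=flasharray.VolumeSnapshotPost(suffix='pre-upgrade'))
```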

For a vVols backed disk, from the Virtual Machine Configure tab, navigate to the Pure Storage – Virtual Volumes pane, (1) select the disk you would like to snapshot and (2) click Create Snapshot.

A prompt will pop up where you can add a suffix to the snapshot if you'd like; click Create and you've got your FlashArray snapshot of a vVols backed disk!

Stay tuned for a blog on the vSphere plugin features you might not know about that, like the above, can save you a significant amount of time and effort.

Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)

This is going to be broken up into two parts- first, a live migration where no VMs get powered off during the migration; second, a migration where you temporarily power off VMs attached to the SCSI datastore.

Why would you want to do it one way or another?

Pros of live migration:

  • No VM downtime
  • Simpler configuration changes and overlap. Less to go wrong or mess up

Pros of powering off VMs:

  • The total migration time will be significantly less because no data will have to be moved. Currently VMware doesn’t support XCOPY (even on the same array) for NVMe-oF

Great, you've decided on a live migration for your VMs because you don't care how long it takes; you just want to minimize downtime of your VMs as much as possible. If you haven't already, you'll need to follow the guides Pure Storage has for setting up NVMe-oF in your environment.

Once you’ve configured NVMe-oF in your environment, you’ll need to create the namespace (volume), connect it to the appropriate host group, create the NVMe-oF datastore in vCenter and finally storage vMotion your VMs from the SCSI datastore to the NVMe datastore.

Create the Volume

From a FlashArray perspective, this is identical to SCSI except for slightly different terms and labels. Cody wrote a nice article explaining the differences. Log into your FlashArray, select (1) Storage, then (2) Volumes, then click the (3) + on the right-hand side of the GUI.

In the window that pops up, populate a (1) Name for the namespace (volume), give it a (2) Provisioned Size then click (3) Create.

Note the volume serial number by going to (1) Storage, then (2) Volumes, finding your (3) Volume, then (4) clicking its hyperlinked name.

On the next window, note the Serial of the volume. We will use this later in vCenter to validate that we are connecting the right namespace.
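If you prefer scripting to the GUI, here's a minimal py-pure-client sketch that creates the namespace (volume) and prints its serial; the name and size are placeholders.

```python
# Minimal sketch: create the namespace (volume) and read back its serial.
# Name and size are placeholders.
from pypureclient import flasharray

client = flasharray.Client('flasharray.example.com',
                           api_token='YOUR-API-TOKEN')

# Create a 1 TiB volume; 'provisioned' is in bytes.
client.post_volumes(names=['nvme-ds-vol'],
                    volume=flasharray.VolumePost(provisioned=1024**4))

# Grab the serial so we can match the namespace in vCenter later.
vol = list(client.get_volumes(names=['nvme-ds-vol']).items)[0]
print(vol.serial)
```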

Connect The Volume To the Appropriate Host Group

Still in the FlashArray GUI, go back to (1) Storage, select (2) Hosts, then select the (3) Host Group you have created for your NVMe-oF hosts. In this case, I am setting this up for NVMe-FC but the steps will be the same for NVMe-RoCE after you have followed the previously linked KB articles.

Next, click the three vertical dots (a kebab menu, for the curious) and select Connect.

For the last step in the FlashArray GUI, select the (1) Namespace (volume) you created before then click (2) Connect.
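Here's the same connection as a minimal py-pure-client sketch, with placeholder names:

```python
# Minimal sketch: connect the namespace (volume) to the NVMe-oF host group.
# Names are placeholders.
from pypureclient import flasharray

client = flasharray.Client('flasharray.example.com',
                           api_token='YOUR-API-TOKEN')
client.post_connections(host_group_names=['nvme-fc-hg'],
                        volume_names=['nvme-ds-vol'])
```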

Create The NVMe-oF Datastore

Switching over to vCenter, we’ll first want to create a datastore from the namespace that we’ve just presented to our host group. This process is easier than with SCSI datastores because you do not have to rescan the storage adapters- all you need to do is create a datastore on top of the NVMe namespace that is already present.

(1) Right click on the vSphere cluster you’ve presented the namespace to, hover over (2) Storage, then click (3) New Datastore.

Select (1) VMFS (currently vVols is unsupported by VMware with NVMe-oF) and click (2) Next.

Specify a (1) Name for your datastore, (2) select a host that the namespace was presented to, select the (3) namespace from the list and click (4) Next. Validate that the serial number shown in the Name column matches the namespace (volume) serial you noted earlier in the FlashArray GUI.

Select (1) VMFS 6 (who uses 5 anymore anyways?!) and click (2) Next.

Click (1) Next.

Review the details and click (1) Finish.

Validate the hosts are connected to your newly created NVMe-oF datastore by going to the (1) Storage tab, selecting the (2) Datastore Name and clicking on the (3) Hosts tab. If anything looks incorrect here (not all hosts from the cluster are connected, etc), please review your NVMe-oF configuration for issues.
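If you'd rather script the datastore creation, here's a rough pyVmomi sketch. The vCenter details, host name, datastore name, and serial are placeholders, and the serial matching assumes the FlashArray serial shows up in the device's canonical name, just as it does in the wizard's Name column.

```python
# Rough sketch: create a VMFS datastore on the NVMe namespace from one host.
# The queried create spec typically defaults to the latest VMFS version.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in prod
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi-01.example.com')
ds_sys = host.configManager.datastoreSystem

serial = 'F4ED2A54C22A4B84'  # placeholder: serial from the FlashArray GUI
disk = next(d for d in ds_sys.QueryAvailableDisksForVmfs()
            if serial.lower() in d.canonicalName.lower())

# Ask vSphere for a create spec for this disk, then name and create it.
option = ds_sys.QueryVmfsDatastoreCreateOptions(disk.devicePath)[0]
option.spec.vmfs.volumeName = 'nvme-ds'
ds_sys.CreateVmfsDatastore(option.spec)
Disconnect(si)
```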

Storage vMotion the VMs from SCSI-backed Datastore(s) to NVMe-backed Datastore(s)

Staying in the vCenter GUI, select the (1) Hosts and Clusters tab, right click on the (2) VM you want to migrate from SCSI to NVMe then select (3) Migrate… from the list that pops up.

Select (1) Change storage only from the window that pops up and click (2) Next.

Select the (1) NVMe datastore you created before then click (2) Next. Optionally you can modify the storage policies for the VM and the virtual disk format.

Finally, verify the details of the migration and click (1) Finish.
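And here's the scripted equivalent of this storage-only migration, as a minimal pyVmomi sketch with placeholder names:

```python
# Minimal sketch: storage vMotion one VM to the new NVMe-oF datastore
# ("Change storage only" in the GUI). Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in prod
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find(vim.VirtualMachine, 'my-vm')
ds = find(vim.Datastore, 'nvme-ds')

# Relocate only the VM's storage; the VM stays powered on throughout.
WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds)))
Disconnect(si)
```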

Now all that's left is to wait until the VM has migrated to the NVMe-oF datastore. Migrations in general can be very daunting, but luckily with NVMe-oF they can be extremely simple. Hopefully you found this helpful.

Pure Storage vSphere Remote Plugin™ 5.1.0 launch: vVol Point-in-Time Recovery

We are excited to announce the launch of the latest version of Pure Storage’s remote vSphere plugin, 5.1.0. It includes a number of bug fixes PLUS a highly sought after feature: vVols VM point-in-time (PiT) recovery!

Why am I excited about this feature?

With vVol PiT VM recovery, you can now easily recover an entire VM that was accidentally deleted (and even eradicated), or you can restore the state of a VM back to a point in time from a snapshot you took, directly from vCenter using Pure's vSphere plugin.

The requirements are Pure's vSphere remote plugin 5.1.0 and Purity™ 6.2.6 or higher, both for PiT revert and for PiT VM undelete of a vVol VM whose FlashArray™ volumes have been eradicated from the FlashArray itself. If you're undeleting a vVol VM that has not been eradicated yet, that functionality is present for Purity versions 6.1 and lower.

For PiT VM revert, you will also need to make sure that you have snapshots of all of the volumes associated with the vVol VM except swap- at least one data volume and one configuration volume.

For VM undelete before the volumes have been eradicated, you will need a snapshot of the vVol VM’s configuration volume.

For VM undelete after the vVol-backed VM has been eradicated, you’ll need a FlashArray protection group snapshot of all the VM’s data volumes, managed snapshots and configuration volumes.

Rather than rehash what my teammate Alex Carver has put a lot of work into, I’m just going to link to the KB and videos he created:

Download the new plugin (part of Pure's OVA), read the release notes, and test out vVol PiT recovery today! Like a lot of things, it's better to have some understanding of what's happening and why before you need something that might be part of your recovery process. Please note that you can also upgrade in-place from 5.0.0 to 5.1.0 (and future remote plugin releases) by following this guide.