ActiveDR with NFS Datastores – Production Failover and Failback (Part 4)

Hello- this is part 4 in the series of blogs on ActiveDR + NFS datastores. In part 3, I covered how to configure vSphere for a test failover and then how to perform a test failover. In this blog I’ll be covering how to perform a failover and failback.

FlashArray Failure – ActiveDR Failover

What happens if an array fails? To simulate this situation, I’m going to forcefully stop Purity on both controllers of the source FlashArray (flasharray-x50-1). The workflow here is the same as the test failover, with one exception: you do not disconnect the networking from the VMs you are about to power on. Since these VMs are taking over production duty, in general you’ll want to leave their network connections as-is for this scenario, though the requirements of your environment might call for something different. So you’ll promote the pod on the surviving array and power on the VMs based on their last-replicated state.
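If you prefer to script the array side of this step rather than use the GUI, here is a minimal sketch of requesting promotion of the pod on the surviving array with Python. The array address, API token, pod name, and the REST 2.x endpoint/field details are assumptions for illustration; verify them against your Purity version. Powering on the VMs afterwards happens in vSphere, just as in the test failover.

```python
# Minimal sketch: promote an ActiveDR pod on the surviving (target) FlashArray.
# Endpoint and field names assume the FlashArray REST 2.x API; all names and
# addresses below are placeholders.
import requests

ARRAY = "https://flasharray-x50-2.example.com"   # surviving array (placeholder)
API = "2.8"                                      # adjust to your Purity release
POD = "activedr-pod"                             # placeholder pod name

# Exchange an API token for a session token.
login = requests.post(f"{ARRAY}/api/{API}/login",
                      headers={"api-token": "your-api-token"}, verify=False)
headers = {"x-auth-token": login.headers["x-auth-token"]}

# Request promotion so the pod's volumes and file systems become writable here.
resp = requests.patch(f"{ARRAY}/api/{API}/pods",
                      params={"names": POD},
                      json={"requested_promotion_state": "promoted"},
                      headers=headers, verify=False)
resp.raise_for_status()
print(f"Promotion requested for pod {POD}")
```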

If you are doing this in a test/proof-of-concept deployment and want to replicate what I’m doing, simply unplug the power cables on the FlashArray. Please do not pull power on your production FlashArrays :-). Here’s a table of articles in this series:

ActiveDR with NFS Datastores Topic | Specific Topics (NFSD = NFS Datastores)
Overview | What’s ActiveDR? What are NFSD?
FlashArray Configuration | FlashArray NFSD and ActiveDR config
vSphere Configuration and Test Failover | vSphere configuration for ActiveDR; test failover
Production Failover and Failback | ActiveDR failover and failback in vSphere
Continue reading “ActiveDR with NFS Datastores – Production Failover and Failback (Part 4)”

ActiveDR with NFS Datastores – vSphere Configuration and Test Failover (Part 3)

Hello- this is part 3 in the series of blogs on ActiveDR + NFS datastores. In part 2, I covered how to connect ActiveDR to an NFS file system that’s backing an NFS datastore. In this blog, I’ll be covering how to connect to the target NFS export in vSphere and how to run a test failover. The reason for covering test failovers before production failovers and failbacks is that I strongly recommend performing or scheduling a test failover immediately after configuring any disaster recovery solution. Otherwise, you might discover that something required for a failover is missing at exactly the moment the failover needs to happen quickly; testing a failover of your environment before you need it in a production-down scenario will reduce or eliminate that pain.

For the purposes of this blog, I am using Pure Storage’s remote vSphere plugin. In general, I strongly recommend installing and using this plugin to manage your FlashArray(s) more easily from the vSphere GUI. Additionally, I’ve made a demo video that covers the steps covered here. Here’s a table of articles in this series:

ActiveDR with NFS Datastores Topic | Specific Topics (NFSD = NFS Datastores)
Overview | What’s ActiveDR? What are NFSD?
FlashArray Configuration | FlashArray NFSD and ActiveDR config
vSphere Configuration and Test Failover | vSphere configuration for ActiveDR; test failover
Production Failover and Failback | ActiveDR failover and failback in vSphere

This environment already has an NFS file system from the source FlashArray mounted. The steps to mount the NFS file system from the source array are the same as for the target array, except that you won’t have to promote the pod on the source array.
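If you want to script the vSphere side of the mount rather than use the vSphere Client or the Pure plugin, here is a rough pyVmomi sketch of mounting the export as an NFS datastore on every host in the environment. The vCenter address, credentials, FlashArray file VIF address, export path, and datastore name are all placeholders for your environment.

```python
# Rough sketch: mount an NFS export from the FlashArray as a datastore on each
# ESXi host. All names and addresses below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

spec = vim.host.NasVolume.Specification(
    remoteHost="10.21.0.50",        # FlashArray file VIF (placeholder)
    remotePath="/activedr-fs",      # NFS export path (placeholder)
    localPath="activedr-nfs-ds",    # datastore name as seen in vSphere
    accessMode="readWrite")

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted {spec.localPath} on {host.name}")

Disconnect(si)
```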

When you perform failovers, run test failovers, or clean up the objects created by these operations, you’ll want to ensure that you follow the steps outlined here.

Continue reading “ActiveDR with NFS Datastores – vSphere Configuration and Test Failover (Part 3)”

ActiveDR with NFS Datastores – FlashArray Configuration (Part 2)

Hello- this is part 2 in the series of blogs on ActiveDR + NFS datastores. In part 1, I introduced the two technologies to give you a background on them. In this blog I’ll be covering how to connect ActiveDR to an NFS file system that’s backing an NFS datastore.

For the purposes of this blog, I am using Pure Storage’s remote vSphere plugin. In general, I strongly recommend installing and using this plugin to manage your FlashArray(s) more easily from the vSphere GUI. Additionally, I’ve made a demo video that covers the steps covered here as well as the failover steps covered in part 3. Here’s a table of articles in this series:

ActiveDR with NFS Datastores Topic | Specific Topics (NFSD = NFS Datastores)
Overview | What’s ActiveDR? What are NFSD?
FlashArray Configuration | FlashArray NFSD and ActiveDR config
vSphere Configuration and Test Failover | vSphere configuration for ActiveDR; test failover
Production Failover and Failback | ActiveDR failover and failback in vSphere
Continue reading “ActiveDR with NFS Datastores – FlashArray Configuration (Part 2)”

ActiveDR with NFS Datastores – Overview (Part 1)

Over my next several posts, I’m going to be writing a series of blogs on ActiveDR (Active Disaster Recovery) with NFS datastores. Some of the other posts I have planned for the near future are:

  • Failover scenarios where the ESXi hosts are connected to both arrays
  • Failover scenarios where the ESXi hosts are only connected to one array

In this blog I will introduce the two technologies and give some high-level information on how you might want to use them together.

Replication Options on FlashArray

This post covers FlashArray-specific replication techniques that you may or may not be familiar with. If you’re not, my colleague Cody Hosterman has a great primer on our technologies that might be worth a read. Here’s a table of articles in this series:

ActiveDR with NFS Datastores Topic | Specific Topics (NFSD = NFS Datastores)
Overview | What’s ActiveDR? What are NFSD?
FlashArray Configuration | FlashArray NFSD and ActiveDR config
vSphere Configuration and Test Failover | vSphere configuration for ActiveDR; test failover
Production Failover and Failback | ActiveDR failover and failback in vSphere
Continue reading “ActiveDR with NFS Datastores – Overview (Part 1)”

Native Pure Storage FlashArray™ File Replication – Purity 6.3


With the release of Purity 6.3, Native FA File replication has been added to the Pure Storage FlashArray™ software. This adds an often important feature to the FA File folder redirection solution I wrote about last year. Pure Storage is referring to this feature as ActiveDR for File Services.

ActiveDR for File Services is a useful feature if you’ve set up or are going to set up folder redirection on FA File and you would like the file data to be replicated asynchronously to a different array, whether that FlashArray hardware is at the same site or a different one. This feature is included with FlashArray.

This allows you to use your FlashArray for native block and file workloads that need the protection replication provides, while still benefiting from the great data reduction rates that FlashArray is known for on those replicated file sets.

Now, if you lose a site or an array for some reason, the file workload you have hosted on FA File can be recovered natively on FlashArray easily and quickly.

There are some differences between file and block workloads when it comes to ActiveDR replication. You can read more in the ActiveDR for File Services section of this Pure KB.

SRA 4.0 Released! ActiveDR support

We just released the latest version of our Storage Replication Adapter for VMware Site Recovery Manager: version 4.0. There are a lot of enhancements and improvements in this release; if you are on 3.1 (or certainly anything earlier) I recommend an upgrade when you get a chance.

For all the need-to-know information (release notes, user guide, videos, download link, etc.) see here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/Quick_Reference_by_VMware_Product_and_Integration/Site_Recovery_Manager_Quick_Reference

Continue reading “SRA 4.0 Released! ActiveDR support”

Pure Storage Plugin v4.4.0 for the vSphere Client

A lot going on in the Pure world right now, specifically:

https://blog.purestorage.com/portworx-joins-pure/

And I have a lot to say on that (stay tuned!) but for now, let’s focus on some new releases.

We have released a few new plugins:

  • vSphere Plugin 4.4.0
  • Storage Replication Adapter 4.0
  • vROPs Management Pack 3.0.2
  • vRealize Orchestrator 3.5.0

Let’s walk through these one by one. In this post, I will go over the vSphere Plugin.

For release notes, installation instructions, etc:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/Quick_Reference_by_VMware_Product_and_Integration/Pure_Storage_Plugin_for_the_vSphere_Client_Quick_Reference

Quite a few bug fixes in this release too.

vSphere Plugin 4.4.0

This release adds support for a few things:

ActiveDR

Datastores can now be provisioned to ActiveDR pods via the plugin:

There is a new tab, “Continuous”, which is where you will find ActiveDR-enabled pods. The fields show the source pod (where the volume will be created), the target pod (where the volume will be replicated to), the source and target arrays (which currently own those pods), the replication direction, and the “lag”. The lag is how far behind the target pod is from the source pod.
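The same replication details can also be read outside the plugin. Here is a small sketch that lists the pod replica links and their lag from the FlashArray REST API; the pod-replica-links endpoint and field names are assumptions to verify against your Purity version, and the array address and token are placeholders.

```python
# Sketch: list ActiveDR pod replica links and their lag via the FlashArray
# REST API. Endpoint and field names are assumptions; verify them for your
# Purity version. The array address and API token are placeholders.
import requests

ARRAY = "https://flasharray-x50-1.example.com"
API = "2.8"

login = requests.post(f"{ARRAY}/api/{API}/login",
                      headers={"api-token": "your-api-token"}, verify=False)
headers = {"x-auth-token": login.headers["x-auth-token"]}

links = requests.get(f"{ARRAY}/api/{API}/pod-replica-links",
                     headers=headers, verify=False).json()
for link in links.get("items", []):
    print(link["local_pod"]["name"], "->", link["remote_pod"]["name"],
          "| direction:", link.get("direction"),
          "| status:", link.get("status"),
          "| lag (ms):", link.get("lag"))
```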

When you click on a datastore, you will see a few more pieces of information in the FlashArray summary panel:

This will show the ActiveDR information if, of course, the volume is in an enabled ActiveDR pair. The plugin also supports all of the usual features with ActiveDR datastores: resize, rename, QoS, snapshot, refresh from snapshot, and copy from snapshot.

Demo of provisioning an ActiveDR datastore:

vVol Snapshots

You can create a snapshot of a VM using the standard VMware snapshot tool, but that snapshots every single virtual disk, which you may not want or need. The plugin used to have the ability to create a one-off snapshot of a single vVol, but we removed it due to some early issues that have since been resolved. This feature has now been reintroduced:

Now you can click on a vVol-type VM, navigate to the Configure tab, and click on Pure Storage -> Virtual Volumes.

You can select a single vVol disk and click Create Snapshot.

This will create a single new snapshot of the volume backing that vVol. You can then restore from it, or copy from it, with the other tools.
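The plugin drives this for you, but under the covers a one-off vVol snapshot is just a FlashArray snapshot of the volume backing that vVol. Here is a hedged sketch of the equivalent operation against the REST API; the volume name, array address, and volume-snapshots endpoint details are illustrative assumptions.

```python
# Sketch: take a one-off array snapshot of the FlashArray volume backing a
# vVol (roughly the array-side equivalent of the plugin's Create Snapshot
# button). The volume name, address, and endpoint details are assumptions.
import requests

ARRAY = "https://flasharray-x50-1.example.com"
API = "2.8"
VVOL_VOLUME = "vvol-my-vm-vg/Data-abc123"   # placeholder vVol volume name

login = requests.post(f"{ARRAY}/api/{API}/login",
                      headers={"api-token": "your-api-token"}, verify=False)
headers = {"x-auth-token": login.headers["x-auth-token"]}

snap = requests.post(f"{ARRAY}/api/{API}/volume-snapshots",
                     params={"source_names": VVOL_VOLUME},
                     headers=headers, verify=False)
snap.raise_for_status()
print("Created snapshot:", snap.json()["items"][0]["name"])
```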

You can also do this with the home directory (config) vVol. Why would you want to snapshot this? Well, because it protects your virtual machine configuration: the pointer files, the VMX file, snapshot hierarchies, logs, etc. If you accidentally make a change to the VMX file that breaks your VM (or you made a lot of changes and don’t know what you did), you can restore the config without having to restore the entire VM.

The other reason is “undelete” protection. When you delete a VM, ESXi first deletes all of the files from the config vVol, then it tells the array to delete the volumes. When the FlashArray deletes volumes, it moves them into the destroyed volumes folder, where they are permanently eradicated after 24 hours (by default) or manually by an admin (unless SafeMode is turned on, in which case manual eradication is not possible).

The problem here is that if you delete a VM, we can restore the config volume itself, but VMware has already wiped the data from it, so it is blank. VMware does not wipe the data from the virtual disks, so those can be “undeleted” with the original data still intact. To fully restore a deleted VM, then, we need a snapshot of the config vVol, which lets us restore all of its files.
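As an aside, pulling a deleted VM’s volumes back out of the destroyed folder (within the eradication window) can also be scripted. A hedged sketch against the REST API, where the volume name, array address, and endpoint details are illustrative assumptions:

```python
# Sketch: recover a destroyed (but not yet eradicated) volume on the
# FlashArray, i.e. an "undelete" of the volume object itself. A snapshot of
# the config vVol is still needed to get the VM's files back. Names and
# endpoint details are illustrative assumptions.
import requests

ARRAY = "https://flasharray-x50-1.example.com"
API = "2.8"
VOLUME = "vvol-my-vm-vg/Config-abc123"   # placeholder config vVol volume name

login = requests.post(f"{ARRAY}/api/{API}/login",
                      headers={"api-token": "your-api-token"}, verify=False)
headers = {"x-auth-token": login.headers["x-auth-token"]}

# Clearing the destroyed flag pulls the volume back out of the destroyed folder.
resp = requests.patch(f"{ARRAY}/api/{API}/volumes",
                      params={"names": VOLUME},
                      json={"destroyed": False},
                      headers=headers, verify=False)
resp.raise_for_status()
print(f"Recovered {VOLUME}")
```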

The ideal option here is to assign a snapshot storage policy to the home vVol (or, even more ideally, all of the vVols) so that the array takes snapshots on a schedule:

So to do this, create a 1-hour snapshot protection group on the FlashArray:

Import the protection group into vSphere as an SPBM policy:

Select and import:

And it is now a policy:

Then assign the policy and the group to the VM (or just the VM home to protect the config).
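For reference, the FlashArray side of that first step (creating the 1-hour snapshot protection group) can also be scripted; the SPBM import and policy assignment then happen in vSphere exactly as shown above. A hedged sketch, where the group name, array address, and the protection-groups endpoint/schedule fields are illustrative assumptions:

```python
# Sketch: create a protection group with an hourly local snapshot schedule.
# Endpoint paths, field names (including the millisecond frequency), and all
# object names are assumptions to verify against your Purity version.
import requests

ARRAY = "https://flasharray-x50-1.example.com"
API = "2.8"
PGROUP = "vvol-hourly-snaps"   # placeholder protection group name

login = requests.post(f"{ARRAY}/api/{API}/login",
                      headers={"api-token": "your-api-token"}, verify=False)
headers = {"x-auth-token": login.headers["x-auth-token"]}

# Create the protection group...
requests.post(f"{ARRAY}/api/{API}/protection-groups",
              params={"names": PGROUP}, headers=headers, verify=False)

# ...and enable an hourly snapshot schedule on it.
resp = requests.patch(f"{ARRAY}/api/{API}/protection-groups",
                      params={"names": PGROUP},
                      json={"snapshot_schedule": {"enabled": True,
                                                  "frequency": 3600000}},
                      headers=headers, verify=False)
resp.raise_for_status()
print(f"Created protection group {PGROUP} with an hourly snapshot schedule")
```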

If you don’t need frequent snapshots of the config vVol and a one-off snapshot (taken whenever you want) will do, this is what we added. You can select the VM home and click the Create Snapshot button:

Alternatively, there is another place to do this. If you click on the VM summary tab and look at the FlashArray panel, there is an Undelete Protection box. If the plugin does not see any snapshots for the config vVol, it will show a warning like the one below:

What this means is that we cannot fully restore this VM if it is accidentally deleted. The data, yes; the VM configuration, no. You can create a snapshot from here too, by clicking Snapshot now…

If it is protected, we will show the timestamp of the latest discovered snapshot:

So if you delete it:

You can restore via the plugin easily:

If the VM configuration changes a lot, you probably want to protect it via a schedule. If the VM does not change much, then one-off snapshots will work fine.

ESXi Host Personality

Also, we now set the ESXi host personality when creating new clusters:

This is important for some ActiveDR and ActiveCluster scenarios, so the plugin now applies this best practice by default.
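If you create host objects outside the plugin, the same setting can be applied directly on the FlashArray, for example with the Purity CLI (something like purehost setattr --personality esxi <host>) or via the REST API. A hedged Python sketch, where the host name, array address, and endpoint/field details are assumptions:

```python
# Sketch: set the "esxi" host personality on an existing FlashArray host
# object (the plugin now does this automatically for hosts it creates).
# Endpoint and field names are assumptions; names and addresses are
# placeholders.
import requests

ARRAY = "https://flasharray-x50-1.example.com"
API = "2.8"
HOST = "esxi-01"   # FlashArray host object name (placeholder)

login = requests.post(f"{ARRAY}/api/{API}/login",
                      headers={"api-token": "your-api-token"}, verify=False)
headers = {"x-auth-token": login.headers["x-auth-token"]}

resp = requests.patch(f"{ARRAY}/api/{API}/hosts",
                      params={"names": HOST},
                      json={"personality": "esxi"},
                      headers=headers, verify=False)
resp.raise_for_status()
print(f"Set ESXi personality on host {HOST}")
```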

For more info on that:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/User_Guides_for_VMware_Solutions/FlashArray_VMware_Best_Practices_User_Guide/bbbFlashArray_Configuration#Setting_the_FlashArray_.E2.80.9CESXi.E2.80.9D_Host_Personality

What’s New in Purity 6.0: ActiveDR

I have already posted about ActiveDR briefly here:

I wanted to go into more detail on ActiveDR (and more) in a “What’s New” series. One of the flagship features of the Purity 6.0 release is what we call ActiveDR. ActiveDR is a continuous replication feature, meaning it sends new data over to the secondary array as quickly as it can; it does not wait for an interval to replicate.

For the TL;DR, here is a video tech preview demo of the upcoming SRM integration as well as the setup of ActiveDR itself:

But ActiveDR is much more than just data replication: it protects your storage environment. Let me explain what that means.

Continue reading “What’s New in Purity 6.0: ActiveDR”