Hello! This is part 2 in my blog series on ActiveDR + NFS datastores. In part 1, I introduced the two technologies to give you some background on them. In this blog, I'll cover how to connect ActiveDR to an NFS file system that's backing an NFS datastore.
For the purposes of this blog, I am using Pure Storage's remote vSphere plugin. In general, I strongly recommend installing and using this plugin to manage your FlashArray(s) more easily from the vSphere GUI. Additionally, I've made a demo video that walks through the steps in this post as well as the failover steps in part 3.
The first step is to establish an ActiveDR replication relationship between the two arrays. While ActiveDR on block is continuous and can replicate as frequently as every second, ActiveDR on File currently has a minimum replication interval of five minutes.
On FlashArray, a pod is a management container that groups volumes and can be stretched or linked between two FlashArrays. ActiveDR uses pods to manage replication between arrays. First, we'll create a pod on the source FlashArray: click (1) Storage, then (2) Pods, then the (3) + sign to create a new pod. Give it a (4) Name and click (5) Create.
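If you would rather script this step than use the GUI, here is a minimal sketch using the PureStoragePowerShellSDK2 module. The array addresses and pod names are placeholders, and the replica-link parameter names are my assumptions, so verify them with Get-Help before running:

    # Connect to the source FlashArray (endpoint and API token are placeholders)
    $source = Connect-Pfa2Array -Endpoint 'source-array.example.com' -ApiToken '<api-token>' -IgnoreCertificateError

    # Create the pod that will contain the replicated file system
    New-Pfa2Pod -Array $source -Name 'nfs-pod'

    # Link the pod to a pod on the target array to begin ActiveDR replication.
    # Assumes the two arrays are already connected; parameter names are my
    # assumptions, so confirm with: Get-Help New-Pfa2PodReplicaLink
    New-Pfa2PodReplicaLink -Array $source -LocalPodNames 'nfs-pod' -RemoteNames 'target-array' -RemotePodNames 'nfs-pod-dr'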
With the release of Purity 6.3, native FA File replication has been added to the Pure Storage FlashArray™ software. This adds an important capability to the FA File folder redirection solution I wrote about last year. Pure Storage is referring to this feature as ActiveDR for File Services.
ActiveDR for File Services is a useful feature if you've set up, or plan to set up, folder redirection on FA File and would like the file data replicated asynchronously to a different array, whether that FlashArray hardware is at the same site or a different one. This feature is included with FlashArray.
This allows you to use your FlashArray for native block and file workloads that need the protection replication provides, while still benefiting from the great data reduction rates FlashArray is known for on those replicated file sets.
Now, if you lose a site or an array for some reason, the file workload you have hosted on FA File can be recovered natively on FlashArray easily and quickly.
Also in that release are a few new cmdlets for storage policy creation, editing, and assignment. They were built to make the process easier. The original cmdlets are certainly still an option, and for some very specific tasks they may be necessary, but the vast majority of common operations can be achieved more easily with the new ones.
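For context, here is roughly what policy creation and assignment looks like with the original PowerCLI SPBM cmdlets. The capability name, policy name, and VM name below are placeholders, since the exact capability strings depend on the VASA provider:

    # Build a rule from a VASA capability (capability name is a placeholder;
    # list the real ones with Get-SpbmCapability)
    $cap  = Get-SpbmCapability -Name 'com.purestorage.storage.policy.FlashArrayGroup'
    $rule = New-SpbmRule -Capability $cap -Value $true

    # Wrap the rule in a rule set and create the policy
    $ruleSet = New-SpbmRuleSet -AllOfRules $rule
    New-SpbmStoragePolicy -Name 'FlashArray-vVol-Policy' -AnyOfRuleSets $ruleSet

    # Assign the policy to a VM (VM name is a placeholder)
    $policy = Get-SpbmStoragePolicy -Name 'FlashArray-vVol-Policy'
    Get-VM -Name 'myVM' | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy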
As always, to install, run:
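    # Module name assumed for illustration (the module this series covers)
    Install-Module -Name PureStorage.FlashArray.VMware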
Or to upgrade:
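    # Same module-name assumption as above
    Update-Module -Name PureStorage.FlashArray.VMware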
These modules are open source, so if you just want to use my code, or to open an RFE or issue, go here:
I have already posted about ActiveDR briefly here:
I wanted to go into more detail on ActiveDR (and more) in a "What's New" series. One of the flagship features of the Purity 6.0 release is what we call ActiveDR. ActiveDR is a continuous replication feature: it sends new data over to the secondary array as quickly as it can rather than waiting for a replication interval.
For the TL;DR, here is a video tech preview demo of the upcoming SRM integration, as well as the setup of ActiveDR itself:
But ActiveDR is much more than just data replication: it protects your storage environment. Let me explain what that means.
In this post, I will talk about using PowerCLI to run a test failover for VVol-based virtual machines. One of the many nice things about VVols is that in the VASA 3.0 API this process is largely automated for you. The SRM-like workflow of a test failover is built in, so the amount of storage-related PowerShell you have to write by hand is fairly minimal.
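Here is a minimal sketch of that workflow using the PowerCLI SPBM replication cmdlets. The replication group name and host name are placeholders, and the VM registration step is simplified:

    # Find the target-site replication group (name is a placeholder)
    $group = Get-SpbmReplicationGroup -Name 'pure-rg-01'

    # Start the test failover; this returns the config file (.vmx) paths of
    # the recovered VMs on the target
    $vmxPaths = Start-SpbmReplicationTestFailover -ReplicationGroup $group

    # Register and power on each recovered VM (host name is a placeholder)
    $vmhost = Get-VMHost -Name 'esxi-01.example.com'
    foreach ($vmx in $vmxPaths) {
        New-VM -VMFilePath $vmx -VMHost $vmhost | Start-VM
    }

    # When validation is done, clean up the test images
    Stop-SpbmReplicationTestFailover -ReplicationGroup $group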
So in a blog series I started a few weeks back (still working on finishing it), I wrote about managing snapshots and resignaturing VMFS volumes. One of the posts was dedicated to why I would choose resignaturing over force mounting almost all of the time.
An obvious follow-up question to that post is: when would I want to force mount? There is one situation where I think it is a decent option: a failover where the recovery site is the same as the production site in terms of compute and vCenter, and only the storage fails over to another array. This is a situation I see becoming increasingly common as network pipes get bigger.
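To make the two options concrete, here is a sketch of each operation through esxcli via PowerCLI, assuming placeholder host and volume names:

    # Get an esxcli (v2) handle on the recovery host (host name is a placeholder)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi-01.example.com') -V2

    # List unresolved VMFS snapshot volumes seen by the host
    $esxcli.storage.vmfs.snapshot.list.Invoke()

    # Option 1: resignature the copy (it mounts with a new signature and name)
    $arguments = $esxcli.storage.vmfs.snapshot.resignature.CreateArgs()
    $arguments.volumelabel = 'myVMFS'
    $esxcli.storage.vmfs.snapshot.resignature.Invoke($arguments)

    # Option 2: force mount the copy with its original signature
    # (only safe when the original volume will never be presented to this host)
    $arguments = $esxcli.storage.vmfs.snapshot.mount.CreateArgs()
    $arguments.volumelabel = 'myVMFS'
    $esxcli.storage.vmfs.snapshot.mount.Invoke($arguments)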
Somewhat surprisingly, I have been getting a fair number of questions in the past few months about VMware vCenter Site Recovery Manager (SRM) and Raw Device Mappings (RDMs) with Pure Storage. The common question is whether we support this (we do), but more often it is about how it works. There is some misunderstanding about how RDMs differ, or do not differ, from VMFS management in SRM, so I figured I would put out a post to explain. It is a somewhat old topic, but worth reviewing for newer SRM customers. Plus, I haven't found many on-point posts anywhere, so why not?