Native Pure Storage FlashArray™ File Replication – Purity 6.3


With the release of Purity 6.3, native FA File replication has been added to the Pure Storage FlashArray™ software. This adds an important feature to the FA File folder redirection solution I wrote about last year. Pure Storage refers to this feature as ActiveDR for File Services.

ActiveDR for File Services is useful if you have set up (or plan to set up) folder redirection on FA File and want the file data replicated asynchronously to a different array, whether that FlashArray hardware is at the same site or a different one. The feature is included with FlashArray.

This allows you to use your FlashArray for native block and file workloads that need the protection replication provides, while still benefiting from the great data reduction rate that FlashArray is known for on those replicated file sets.

Now, if you lose a site or an array for some reason, the file workload you have hosted on FA File can be recovered natively, quickly, and easily on the remote FlashArray.

There are some differences between file and block workloads when it comes to ActiveDR replication. You can read more in the ActiveDR for File Services section of this Pure KB.
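
If you want a rough idea of what the setup looks like from PowerShell, here is a minimal sketch using the Pure Storage PowerShell SDK v2. The cmdlet and parameter names for the pod and replica link steps (New-Pfa2Pod, New-Pfa2PodReplicaLink, and their arguments) are assumptions on my part for illustration only, so check the SDK help and the KB above for the exact workflow in your Purity version.

# Minimal sketch (assumed cmdlet/parameter names) of setting up ActiveDR for a file workload.
# It assumes the two arrays are already connected to each other for replication.
$token = '<api-token>'   # placeholder
$src = Connect-Pfa2Array -Endpoint 'array01.example.com' -ApiToken $token -IgnoreCertificateError

# Create a pod on the source array to hold the file systems you want replicated
New-Pfa2Pod -Array $src -Name 'fileservices-pod'

# Link the local pod to a pod on the remote array (names here are placeholders)
New-Pfa2PodReplicaLink -Array $src -LocalPodNames 'fileservices-pod' -RemoteNames 'array02' -RemotePodNames 'fileservices-pod'

# The file systems (and their exports/policies) then need to live in that pod so the replica link protects them.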

Managing vVol Storage Policies with PowerShell

I just posted about some new cmdlets here:

Also in that release are a few more cmdlets for storage policy creation, editing, and assignment. They were built to make the process easier. The original cmdlets are certainly still an option, and for very specific tasks they might be necessary, but the vast majority of common operations can be achieved more easily with these.

As always, to install, run:

Install-Module PureStorage.FlashArray.VMware

Or to upgrade:

Update-Module PureStorage.FlashArray.VMware

These modules are open source, so if you just want to use my code or open an RFE or issue, go here:

https://github.com/PureStorage-OpenConnect/PureStorage.FlashArray.VMware/

For detailed help on a cmdlet, run Get-Help with the cmdlet name.
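
For example, you can list everything the module exposes and then ask for the full help on whichever cmdlet you care about. The storage policy cmdlet name below is just an illustrative placeholder; substitute a name from the Get-Command output.

# List the cmdlets provided by the module (and its submodules)
Get-Command -Module PureStorage.FlashArray.VMware*

# Then pull the full help, including examples, for the one you want
# (the cmdlet name here is a placeholder; use one from the list above)
Get-Help New-PfaVvolStoragePolicy -Full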

Continue reading “Managing vVol Storage Policies with PowerShell”

Testing New SRA Release with a 2nd SRM Pair

At the time of writing this post, we are at work on the next release of our Storage Replication Adapter (SRA) for the FlashArray. In a discussion with a customer who needs the feature we are adding (what a nice coincidence!), the question came up: what is the best way to test? They want to test the SRA without fouling up their production SRM environment.

A simple answer is to deploy two new vCenters and an SRM pair. But that requires additional hosts, similar network configuration, authentication, and so on. So they wanted to use their existing vCenters, but NOT their existing SRM servers.

SRM used to be a fairly rigid tool (for good reason; let's not break your DR). But in the past few years VMware has really opened it up: looser coupling between vCenter and SRM versions, shared recovery sites, and multiple SRM pairs per vCenter pair. This is where we come in.

Continue reading “Testing New SRA Release with a 2nd SRM Pair”

vRealize Orchestrator VVol Workflow Package

OK, finally! I had this finished a while ago, but I wrote it using our version 2.0 plugin, so I couldn't post it until the plugin was certified by VMware. That plugin version is now certified and posted on the VMware Solution Exchange (see my post here).

Moving forward, we will likely be posting new workflows in various packages (working on an ActiveCluster one now), instead of including them directly in our plugin. This will make it easier to update them and add to them, without also having to generate an entire new plugin version.

So first, download and install the v2 FlashArray plugin for vRO, then install my vVol workflow package from the VMware Solutions Exchange:

https://marketplace.vmware.com/vsx/solutions/flasharray-vvol-workflow-package-for-vro-1-0?ref=search 

Continue reading “vRealize Orchestrator VVol Workflow Package”

Semi-transparent failover with VMFS and Active/Passive Replication

So in a blog series that I started a few weeks back (still working on finishing it), I wrote about managing snapshots and resignaturing of VMFS volumes. One of the posts was dedicated to why I would choose resignaturing over force mounting almost all of the time.

An obvious question after that post is: when would I want to force mount? There is a situation where I think it is a decent option: a failover where the recovery site is the same site as the production site in terms of compute and vCenter, and only the storage fails over to another array. I see this situation becoming increasingly common as network pipes get bigger.
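
To make the two options concrete, here is a rough PowerCLI sketch of what each looks like from an ESXi host, using the esxcli VMFS snapshot namespace. The host and datastore names are placeholders, and the argument names passed through Get-EsxCli -V2 are from memory, so verify them in your environment before relying on this.

# Grab an esxcli handle for one host (names are placeholders)
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi01.example.com') -V2

# See which unresolved VMFS copies (snapshots/replicas) the host can see
$esxcli.storage.vmfs.snapshot.list.Invoke()

# Option 1: resignature the copy (it gets a new UUID and comes up under a new "snap-" label)
$esxcli.storage.vmfs.snapshot.resignature.Invoke(@{volumelabel = 'MyDatastore'})

# Option 2: force mount the copy with its original signature (non-persistently here)
$esxcli.storage.vmfs.snapshot.mount.Invoke(@{volumelabel = 'MyDatastore'; nopersist = $true})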

Continue reading “Semi-transparent failover with VMFS and Active/Passive Replication”

Understanding the FlashArray Replication Connection Key

A question came up at work today that I answered, and I thought it might be a good topic for a quick blog post:

How do you change your connection key for FlashArray replication?

The question misunderstands what the connection key actually is, so let me explain.

When you connect one FlashArray to another, you need three pieces of information (see the sketch after the list):

  1. The FQDN or IP for the management address of the remote array
  2. The FQDN or IP for the replication address of the remote array
  3. A connection key
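
As a rough illustration of how those three pieces fit together, here is a sketch using the Pure Storage PowerShell SDK v2. The New-Pfa2ArrayConnection cmdlet and its parameter names are my assumptions for illustration (the same connection can be made in the GUI or CLI), so treat this as pseudocode rather than a copy/paste recipe.

# Sketch only: assumed cmdlet/parameter names for connecting two FlashArrays for replication
$token = '<api-token>'   # placeholder
$local = Connect-Pfa2Array -Endpoint 'array01.example.com' -ApiToken $token -IgnoreCertificateError
$remoteKey = '<connection-key-from-the-remote-array>'   # placeholder

$params = @{
    ManagementAddress    = 'array02.example.com'   # 1. management FQDN/IP of the remote array
    ReplicationAddresses = '10.0.20.50'            # 2. replication FQDN/IP of the remote array
    ConnectionKey        = $remoteKey               # 3. the connection key retrieved from the remote array
    Type                 = 'async-replication'
}
New-Pfa2ArrayConnection -Array $local @params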

Continue reading “Understanding the FlashArray Replication Connection Key”

Site Recovery Manager 6 and Storage DRS Tagging: Part I–The Basics

VMware vCenter Site Recovery Manager 6.0 was mostly a compatibility release, essentially getting it to work right with vCenter 6.0. That being said, there were a few new features (and some nice tweaks in the GUI) included in the release. One of the new features that sparked my interest was the set of SRM and Storage DRS compatibility enhancements.

Ben Meadowcroft, a VMware PM who works on, amongst other things, SRM, blogged about this new feature here. Find the VMware KB here.

Ben covers most of the history of this in his post, so I will skip over that. Let's take a closer look at this functionality. To give an overview, there are three tags that SRM introduces for a datastore (a quick PowerCLI check follows the list):

  • SRM-com.vmware.vcDr:::status (indicates that the datastore is replicated)
  • SRM-com.vmware.vcDr:::consistencyGroup (indicates what CG the datastore belongs to, if any)
  • SRM-com.vmware.vcDr:::protectionGroup (indicates what PG the datastore belongs to, if any)
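
If you want to see these tags on a datastore yourself, a quick PowerCLI check like the following should do it (the datastore name is a placeholder; the filter just matches the category prefix shown above):

# List any SRM-applied tags on a given datastore
$ds = Get-Datastore -Name 'MyReplicatedDatastore'
Get-TagAssignment -Entity $ds |
    Where-Object { $_.Tag.Category.Name -like 'SRM-com.vmware.vcDr*' } |
    Select-Object -ExpandProperty Tag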

The replication status tag is assigned as soon as SRM (and its respective Storage Replication Adapter) discovers the datastore to be replicated through a device discovery operation. Upon this discovery, a consistency group tag is also assigned. If the volume is not advertised by the SRA as being in a consistency group, a unique one will be created for that volume, basically indicating it is in its own consistency group.

A protection group tag is not assigned until the volume is actually added to a protection group. Once the datastore is assigned to a protection group, it will receive the tag (remember, a volume can only be in one PG, and SRM only supports membership in one CG, so there will always be only one tag to assign).

So what do these tags do? Storage DRS will note these tags and will not make any automatic moves if a Storage vMotion would violate any of them. This means it will not move a VM from one datastore to another if:

1) The source datastore is replicated and the target is not.

2) The source datastore is NOT replicated and the target is.

3) The source datastore is in a different consistency group than the target.

4) The source datastore is replicated AND in a protection group, but the target, while replicated, is NOT in a protection group.

Basically, Storage DRS will not move a VM from one datastore to another if it deems that the move would change the protection group configuration or the consistency of a virtual machine.

So automatic Storage DRS will never make these moves. It may suggest them if it cannot find a better option, but it will never make a move that violates these rules. If for some reason you want such a move to occur, you can always override the warning and execute the operation.

Let’s take a look now at the relevant configurable behavior in SRM.

There are four options:

  • storage.enableSdrsStandardTagCategoryCreation: Creates the three tag categories in vCenter for you.
  • storage.enableSdrsTagging: Applies the tags to the datastores when they are discovered, etc.
  • storage.enableSdrsTaggingRepair: Allows SRM to fix datastore tags when something has changed (PG/CG membership changes, for instance).
  • storage.sdrsTaggingPollInterval: How often SRM checks tags to make sure they are accurate.

All of these options are enabled by default; well, kinda: the last one is not an on/off switch, it is simply set to 50 seconds.

As the list above says, the enableSdrsStandardTagCategoryCreation option is pretty straightforward: it creates the three categories. You can, of course, create them yourself if you choose to; I am not sure why you would, though, with the exception of the reason stated in the option description (a PowerCLI sketch of manual creation follows the quote):

“In Federated SSO setups, this flag should be disabled and the tags and tag categories should be manually created.”
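
For that federated SSO case, manually creating the categories is just a few lines of PowerCLI. This is a sketch based on the category names listed earlier; the cardinality and entity type are my assumptions, so compare against what SRM creates on a non-federated setup before relying on it.

# Manually create the three SRM tag categories (sketch; verify cardinality/entity type)
$categories = @(
    'SRM-com.vmware.vcDr:::status',
    'SRM-com.vmware.vcDr:::consistencyGroup',
    'SRM-com.vmware.vcDr:::protectionGroup'
)
foreach ($name in $categories) {
    New-TagCategory -Name $name -Cardinality Single -EntityType Datastore
}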

When enableSdrsTagging is enabled, SRM will place the correct tags at the appropriate times, such as when a new device is discovered or its protection group membership changes.

The option enableSdrsTaggingRepair takes a little more thought. Without repair, new tags will still be placed on datastores: the replicated/CG tags during device discovery, and the PG tag upon adding a datastore to a new or different PG. But SRM will not fix or remove existing tags. If you remove the datastore from a PG, or delete the PG, the tag will remain. If you delete the SRM-provided tag and replace it with your own, SRM will not fix it. If you add the datastore to a new PG, it will remove an old PG tag if one exists and then apply the correct one, but it won't ever do that unless you make that PG change.

A note about the repair functionality: if you decide to delete an SRM-provided tag and make your own, it will not last long if this feature is enabled. SRM will right things quite quickly (50 seconds or less). So if you want more control over the tagging of SRM-related devices, disabling this is an option. Of course, disabling it can easily lead to stale information in the tags, so do so at your own risk.

In general, I think this is a great enhancement. I would like to see more granular control from the SRM side of things (for instance, enabling/disabling CG auto-tagging when a CG doesn't exist for a device). This approach also has a place in non-SRM environments; it's just a bit more work because you have to do the tagging yourself.

In Part II, I will take a look at how this works with the FlashArray SRA and what’s involved in that.

FlashRecover replication on the Pure Storage FlashArray

Last year Pure Storage introduced built-in replication on the FlashArray 400 series in our Purity Operating Environment version 4.0. Our replication offers a variety of benefits, and they center around two things. First, it is completely free. There is no license charge for replication itself or by capacity. All you need is two FlashArrays and a TCP/IP network between the two of them to replicate over. There is no additional hardware to buy for the array and no license packages required (all of our software is always free). Second, it is very easy to use: going from a greenfield array to replicating volumes takes maybe five minutes, in reality probably far less than that. So I wanted to take some time to review how our replication is set up and how it works. I went over replication briefly when we released Purity 4.0, but I think it is time for a closer look.

Continue reading “FlashRecover replication on the Pure Storage FlashArray”

Purity 4.0 Release: New hardware models, replication and more!

Ah, my first official post during my tenure at Pure, and it couldn't have happened at a better time! It comes just in time for the Purity 4.0 release, which we announced today. While there are plenty of under-the-cover enhancements, I am going to focus on the two biggest parts of the release: new hardware and replication. There are other features, such as hardware security token locking, but I am not going to go into those in this post. So first, let's talk about the advancements in hardware!

Continue reading “Purity 4.0 Release: New hardware models, replication and more!”