Native Pure Storage FlashArray™ File Replication – Purity 6.3


With the release of Purity 6.3, native FA File replication has been added to the Pure Storage FlashArray™ software. This adds an important feature to the FA File folder redirection solution I wrote about last year. Pure Storage refers to this feature as ActiveDR for File Services.

ActiveDR for File Services is a useful feature if you’ve set up or are going to set up folder redirection on FA File and you would like the file data to be replicated asynchronously to a different array, whether that FlashArray hardware is at the same site or a different one. This feature is included with FlashArray.

This allows you to use your FlashArray for native block and file workloads that need the protection replication provides, while still benefiting from the great data reduction rates that FlashArray is known for with those replicated file sets.

Now, if you lose an array or an entire site for some reason, the file workloads you have hosted on FA File can be recovered on the target FlashArray quickly and natively.
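As a rough sketch of what that recovery looks like, the example below assumes that, as with block ActiveDR, the replicated file data lives in a pod connected by a replica link, and that failover is performed by promoting the pod on the target array. The pod name is hypothetical and the exact FA File workflow may differ, so treat this purely as an illustration and follow the Pure KB referenced below for the supported procedure:

pureuser@target-array> purepod replica-link list
pureuser@target-array> purepod promote fileservices-pod

The first command confirms the replica link and its status; the second promotes the demoted pod on the recovery array so the file data it contains can be brought online there.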

There are some differences between file and block workloads when it comes to ActiveDR replication. You can read more in the ActiveDR for File Services section of this Pure KB.

vSphere Remote Plugin: .local vCenter Domains

Hello, Nelson Elam here! I’m a VMware Solutions Engineer at Pure Storage, and I wanted to make you aware of an issue we’ve seen crop up a couple of times recently with our vSphere Remote Plugin and provide a quick explanation.

If your vCenter uses a .local domain (vcenter.purestorage.local is one example), you might have seen the following 3 errors in Pure’s vSphere Remote Plugin in vCenter:

  1. On the FlashArray list page, the error “Error retrieving array list. Please try again later.” is returned.
  2. When trying to import arrays via Pure1, the error “Authenticate with Pure1 to use this feature” is returned despite previously successful registration with Pure1 through the plugin.
  3. When adding an array manually, a “no permissions” error is returned.

Resolution:
To resolve this, follow step 14 from the Online Deployment Procedure for the remote plugin by running this command after customizing it to your environment:
pureuser@purestorage-vmware-appliance:~$ puredns setattr --search {your .local domain} --nameservers {ip or FQDN of DNS server}
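For example, with a hypothetical search domain and DNS server substituted in, the command would look like this:

pureuser@purestorage-vmware-appliance:~$ puredns setattr --search purestorage.local --nameservers 10.21.88.10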

So what’s going on here? When the OVA hosting the Remote vSphere Plugin tries to reach a vCenter with a .local domain suffix, it cannot resolve that DNS name unless you have provided the appropriate search domain for the OVA. The failure then surfaces as different errors depending on where in vCenter you are interacting with the plugin.

Luckily this is a simple fix despite the seemingly unrelated errors that pop up. Hopefully this was helpful!

NVMe-oF Multipath Configuration for Pure Storage Datastores

Hello, my name is Nelson Elam and I’m a Solutions Engineer at Pure Storage. I am guest writing this blog for Cody’s website. I hope you find it helpful!

With the introduction of Purity 6.1, Pure now supports NVMe-oF via Fibre Channel, otherwise known as NVMe/FC. For VMware configurations with multipathing, there are some important considerations. Please note that these multipathing recommendations apply to both NVMe-RoCE and NVMe/FC.
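To give a sense of where those considerations live: on ESXi 7, NVMe-oF namespaces are claimed by the High Performance Plugin (HPP) rather than NMP, so the path selection scheme (PSS) is inspected and changed through the esxcli storage hpp namespace. The sketch below is illustrative only; $NVME_DEVICE is a placeholder for an eui.* identifier from the device list, and LB-Latency is just an example value, so check the full post and the Pure Storage KBs for the currently recommended settings.

# List HPP-claimed NVMe devices and their current path selection scheme
esxcli storage hpp device list
# Change the PSS for a single device (placeholder device ID and example PSS value)
esxcli storage hpp device set --device=$NVME_DEVICE --pss=LB-Latency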

Continue reading “NVMe-oF Multipath Configuration for Pure Storage Datastores”

Digging into vSphere Workload Management Options

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

VMware Tanzu is a great way for VMware users to manage their virtual machine environments while, in parallel, coming up to speed with containers, all under the same familiar pane of glass. In fact, that’s possibly the biggest value proposition that Tanzu gives us today: extending vSphere and all of its enterprise features and goodness to the realm of K8s in a recognizable context.

There’s a catch, though. Before one can start to use Tanzu, one has to get Tanzu set up. While it’s reasonably straightforward to do so, it is important to understand that there are multiple ways to enable Tanzu and to weigh the pros and cons of each of them. It’s also key to understand what the underlying components are and how they interact, to help troubleshoot any potential problems. This post will focus on two of these methods: the vCenter Network Option (HA-Proxy) and NSX-T (VMware Cloud Foundation). Cody covered the third option of directly deploying Tanzu Kubernetes Grid to vSphere in an earlier post that can be found here.

It’s critical to note that you must be running at least vSphere 7 before you can use either of the methods covered below. ESXi 6.7U3 and up is supported via the method shown in Cody’s post linked above. The two Workload Management enablement networking options are built right into the vSphere 7 UI under Workload Management when you add a cluster:

The other important prerequisite is that you will need one or more SPBM (Storage Policy Based Management) policies defined in order to get the Supervisor Cluster up and running. There are a couple of KB articles on our Platform Guide which show how to do this on the FlashArray and explain the differences between VMFS-based and vVols-based policies.
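Those policies matter beyond enablement, too: once the Supervisor Cluster is up and a policy is assigned to a vSphere Namespace, it surfaces inside Kubernetes as a StorageClass. A quick hedged check (the namespace name here is made up) looks like this:

kubectl config use-context demo-namespace    # hypothetical Supervisor namespace
kubectl get storageclass                     # each assigned SPBM policy appears as a StorageClass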

So let’s lay out the key components and differences between the vCenter Server Network and NSX-T backed Workload Management options and provide some more information on how to enable each of them…

The vCenter Server Network option allows customers to get Workload Management working in their vSphere environment by deploying a single OVA (known as HA-Proxy) and does not require any other external products like NSX-T or SDDC Manager. This OVA and associated information can be found at this link on GitHub. The main benefit of this option is the relative simplicity of getting Tanzu running, since the only items you need to set up are a distributed switch portgroup to handle your Kubernetes ingress/egress ranges and the HA-Proxy OVA itself. The HA-Proxy acts as the load balancer for Kubernetes traffic and provides the Supervisor Cluster API endpoint. The downside is that the HA-Proxy VM represents a single point of failure, and larger Kubernetes deployments will more than likely overwhelm the CPU/memory resources available to it at some point. This option is best for those who want to look at Tanzu in a POC/exploratory type of setup.

Here’s a quick narrated technical video I created showing how to setup Workload Management with the HA-Proxy OVA:

After enabling Workload Management, I created another quick demo video showing how to create Namespaces and deploy a Tanzu Kubernetes Guest Cluster:
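To complement the video, here is a hedged, minimal sketch of logging in to the Supervisor Cluster and requesting a guest cluster with kubectl. The endpoint address, namespace, cluster name, VM class, storage class, and Kubernetes version are all illustrative assumptions; the storage class name comes from whichever SPBM policy you assigned to the namespace.

# Log in to the Supervisor Cluster API endpoint (address, user, and namespace are placeholders)
kubectl vsphere login --server=192.168.1.10 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
kubectl config use-context demo-namespace

# Request a small Tanzu Kubernetes guest cluster (all names and sizes are illustrative)
kubectl apply -f - <<EOF
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-demo
  namespace: demo-namespace
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: flasharray-vvols-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: flasharray-vvols-policy
EOF

# Watch the guest cluster build out
kubectl get tanzukubernetesclusters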

NSX-T based Workload Management is best backed and managed by a VMware Cloud Foundation deployment. This option gives you all of the enterprise-grade features, resilience, and lifecycle management that come embedded with SDDC Manager. The con of this option is that there are more setup steps and moving pieces involved than with the other Tanzu deployment choices, and it requires additional licensing once the trial period expires. The key component that needs to be set up for Tanzu in particular is called an NSX-T Edge Cluster. An Edge Cluster is composed of at least one Edge VM (though it really should be two for resiliency), which helps route and load balance network traffic from your top-of-rack switch to the underlying Kubernetes deployments. The Edge Cluster deployment can be automated to a large extent from within SDDC Manager via a wizard.

In our lab, I went the route of manually deploying a 2-node Edge Cluster within NSX-T, as this gives a better ‘under the hood’ view of how everything works together. As most customers likely do not have top-of-rack switch access to set up BGP, I also decided to use the static routing option within NSX-T. Here’s a video showing how to set this up end-to-end:

With the Edge Cluster built, the next step is to enable Workload Management, which is shown in this video:

Hopefully this post has provided a bit of guidance and insight in terms of what Tanzu solution might be right for your environment. We are continuing to investigate and document how to best leverage the FlashArray with Kubernetes, so please check back here and our Platform Guide often for updates. Thanks for reading.

Automating FlashStack with SmartConfig and VMware Cloud Foundation

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

One of the (many) fun things we get to work on at Pure is researching and figuring out new ways to streamline things that are traditionally repetitive and time-consuming (read: boring). Recently, we looked at how we could go about automating the deployment of FlashStack™ end-to-end, since a traditional deployment absolutely includes some of these repetitive tasks. Our goal is to start off with a completely greenfield FlashStack (racked, powered, cabled, and otherwise completely unconfigured) and automate everything possible to end up with a fully functional VMware environment ready for use. After some thought, reading, and discussion, we found that this goal was achievable with the combination of SmartConfig™ and VMware Cloud Foundation™.

Automating a FlashStack deployment makes a ton of sense: from the moment new hardware is procured and delivered to a datacenter, the race is on for it to switch from a liability to a money-producing asset for the business. Further, using SmartConfig and Cloud Foundation together really combines two blueprint-driven solutions: Cisco Validated Designs (CVDs) and VMware Validated Designs (VVDs). That does a lot to take the guesswork out of building the underlying infrastructure and hypervisor layers, since firmware, hardware, and software versions have all been pre-validated and tested by Cisco, VMware, and Pure Storage. In addition, these two tools set up those blueprints automatically via a customizable and repeatable framework.

Once we started working through this in the lab, the following automation workflow emerged:

Along with some introduction to the key technologies in play, we have divided the in-depth deployment guide into three core parts. All of these sections, including product overviews and click-by-click instructions, are publicly available here on the Pure Storage VMware Platform Guide.

  1. Deploy FlashStack with ESXi via SmartConfig. The input of this section will be factory-reset Cisco hardware and the output will be a fully functional imaged/zoned/deployed UCS chassis with ESXi 7 installed and ready for use with VMware Cloud Foundation.
  2. Build VMware Cloud Foundation SDDC Manager on FlashStack. The primary input for CloudBuilder is, not coincidentally, the output of the work in part 1: specifically, ESXi hosts and their underlying infrastructure, from which we will automatically deploy a Management Domain with CloudBuilder.
  3. The last section will show how to deploy a VMware Cloud Foundation Workload Domain with Pure Storage as both Principal Storage (VMFS on FC) and Supplemental Storage (vVols). Options such as iSCSI are covered in additional KB articles in the VMware Cloud Foundation section of the Pure Storage support site.

Post-deployment, customers will enjoy the benefits of single-click lifecycle management for the bulk of their UCS and VMware components, as well as the ability to dynamically scale their Workload Domain resources up or down, independently or collectively, based upon specific needs (e.g. compute/memory, network, and/or storage), all from SDDC Manager.

For those who prefer a more interactive demo, I’ve recorded an in-depth overview video of this automation project followed by a four-part demo video series that shows click-by-click just how easy and fast it is to deploy a FlashStack with VMware from scratch. 

Craig Waters and I gave a Light Board session on this subject:

And this is an in-depth PowerPoint overview of the project:

Finally, this is a video series showing the end-to-end process in-depth broken into a few parts for brevity.

Revamped PowerShell Module for Pure and VMware

About 6 months ago, my esteemed colleague Barkz blogged about our path forward with PowerShell. We have an official PowerShell SDK for managing the FlashArray, but it is limited to exactly that: doing stuff to the FlashArray.

So, to add value and make managing the FlashArray easier within the context of the layers you actually manage your infrastructure from (VMware, Microsoft, etc.), we created some value-add PowerShell modules. Barkz talks about them here:

Continue reading “Revamped PowerShell Module for Pure and VMware”

Pure Storage Plugin v3 for vRealize Orchestrator

Today we released an updated plugin for vRO that is fully certified by VMware and is available on the VMware Marketplace:

Download it here.

What are the new features? Well, a lot. There are various bug fixes, but this release is mostly about new features:

  • ActiveCluster support
  • Enhanced protection group information
  • Throughput limits
  • Volume Groups
  • Pure1 REST API integration
  • Protocol Endpoints
  • Host Personality
Continue reading “Pure Storage Plugin v3 for vRealize Orchestrator”

Announcing: Pure Storage Cloud Block Store for AWS

One of the fundamental features of the operating environment running on the FlashArray™ is that the same software can run on many different hardware implementations of the FlashArray. This is one of the reasons we can offer hardware Non-Disruptive Upgrades, and why, when we introduce new features (even things as expansive as vVols), we can support them on older hardware. We support vVols going back to the FA-420, an array that was introduced before I joined Pure Storage® 4.5 years ago.

Furthermore, we have been having more and more conversations around the public cloud. Not just running applications in it, but moving data to and from it. DRaaS (Disaster Recovery as a Service) is an increasingly talked-about use case. VMware Cloud on AWS is getting more and more attention at VMworld, and in general. We at Pure get it. Will everything go to the public cloud? No. Certainly not. Will everything stay on-premises? Also, of course not. Some customers will. Some will not at all. Many (most?) will use both in some capacity. So enabling data mobility is important.

Continue reading “Announcing: Pure Storage Cloud Block Store for AWS”

Pure Storage and VMware PowerShell Module

I see a fair number of requests around how to do different things with VMware PowerCLI and the Pure Storage PowerShell SDK. How do I correlate a VMFS to a volume? How do I create a new VMFS? How do I expand one? And so on.

To help our customers I have written a module that includes a lot of the common operations people might need to “connect” PowerCLI to our PowerShell SDK.

The module is called Cody.PureStorage.FlashArray.VMware. Continue reading “Pure Storage and VMware PowerShell Module”

Site Recovery Manager and ActiveCluster Part III: Creating Protection Groups and Recovery Plans

Now that all of the prerequisites are complete, it is time to start creating protection groups and recovery plans.

This is part 3 of this series; the earlier parts were:

Continue reading “Site Recovery Manager and ActiveCluster Part III: Creating Protection Groups and Recovery Plans”