Features I Use Regularly in Pure’s vSphere Plugin

Today I want to tell you about what I use the vSphere plugin for regularly in my lab, to hopefully help you get more value out of your existing Pure array and tools. This guide assumes you already have the vSphere plugin installed (follow this guide if you don’t currently have it installed or would like to upgrade to a more feature-rich remote plugin version). Our vSphere plugin release notes KB covers the differences between versions. If you aren’t sure what version you want, use the latest version.

Why should you care about the vSphere plugin, and why would I highlight these workflows for you? Pure’s vSphere plugin can save you a significant amount of time in the configuration and management of your vSphere+FlashArray environment. It can also greatly reduce the barriers to success in your projects by cutting down the steps an administrator has to perform to complete a workflow. Additionally, you might currently be using the vSphere plugin for a couple of workflows without realizing how much work our engineers have put into making your life easier.

I am planning to write more blogs on the vSphere plugin; the next one will cover the highest-value features available in current vSphere plugin versions.

Create and Manage FlashArray Hosts and Host Group Objects

If you’re currently a Pure customer, you have likely managed your host and host group objects directly from the array. Did you know you can also do this from the vSphere plugin without having to copy over WWNs/IQNs manually? (1) Right-click on the ESXi cluster where you want to create or manage a host or host group object, (2) hover over Pure Storage, then (3) left-click on Add/Update Host Group.

In this menu, there are currently Fibre Channel and iSCSI protocol configuration options. We are currently exploring options here for NVMe-oF configuration; stay tuned by following this KB. You can also check a box to configure your ESXi hosts for Pure’s best practices with iSCSI, making it so you don’t have to manually configure new iSCSI ESXi hosts.
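
If you ever want to script this array-side work instead of clicking through the plugin, the sketch below shows roughly what the plugin automates, using the purestorage Python REST client. This is a hedged example: the array address, API token, host names, WWNs and host group name are all hypothetical, and the plugin additionally reads the WWNs/IQNs straight from the ESXi hosts and applies iSCSI best practices for you.

  # Rough sketch of the array-side host/host group creation the plugin automates.
  # Assumes the 'purestorage' Python REST client; all names and WWNs are hypothetical.
  import purestorage

  array = purestorage.FlashArray("flasharray1.example.com", api_token="YOUR-API-TOKEN")

  # One host object per ESXi host, populated with that host's FC WWNs
  # (use iqnlist=[...] instead for iSCSI hosts).
  array.create_host("esxi-01", wwnlist=["52:4A:93:7D:F3:5F:01:00"])
  array.create_host("esxi-02", wwnlist=["52:4A:93:7D:F3:5F:02:00"])

  # Group the hosts so volumes can be provisioned to the whole vSphere cluster at once.
  array.create_hgroup("vsphere-cluster-01", hostlist=["esxi-01", "esxi-02"])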

FlashArray VMFS Datastore and Volume Management (Creation and Deletion)

There are a lot of options for VMFS volume management in the plugin. I’ll only cover the basics: creation and deletion.

When you use the plugin for datastore creation, the plugin will create the appropriate datastore in vSphere, the volume on the FlashArray, and it will connect the volume to the appropriate host(s) and host group objects on the FlashArray. (1) Right-click on the pertinent cluster or host object in vSphere, (2) hover over Pure Storage and finally (3) left-click on Create Datastore. This will bring up a wizard with a lot of options that I won’t cover here, but the end result will be a datastore that has a FlashArray volume backing it.
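
For context, the array-side half of that workflow boils down to two REST calls. Here is a hedged sketch with the purestorage Python client (volume name, size, host group and array details are hypothetical); the plugin also handles the vSphere side, rescanning the hosts and formatting the VMFS filesystem, which isn’t shown here.

  # Sketch of the array-side portion of Create Datastore:
  # create a volume and connect it to the cluster's host group.
  # Names, size and credentials are hypothetical.
  import purestorage

  array = purestorage.FlashArray("flasharray1.example.com", api_token="YOUR-API-TOKEN")

  # Create a 4 TB volume to back the new VMFS datastore.
  array.create_volume("vmfs-lab-ds01", "4T")

  # Connect it to the host group so every ESXi host in the cluster sees the LUN.
  array.connect_hgroup("vsphere-cluster-01", "vmfs-lab-ds01")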

The great thing about deleting a datastore from the plugin is that there are no additional steps required on the array to clean up the objects. This is the most satisfying workflow for me personally because cleanup in a lab can feel like it’s not a good use of time until I’ve got hundreds of objects worth cleaning up. This workflow enables me to quickly clean up every time after I’ve completed testing instead of letting this work pile up.

(1) Right-click the datastore you want to delete, (2) hover over Pure Storage and (3) left-click on Destroy Datastore. After the confirmation prompt, the FlashArray volume backing that datastore will be destroyed and will sit in pending eradication for whatever eradication timer is configured on the FlashArray (default 24 hours, configurable up to 30 days with SafeMode). That’s it!
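
Under the covers, the array-side cleanup looks roughly like this hedged purestorage Python sketch (same hypothetical names as above). A volume has to be disconnected before it can be destroyed, and a destroyed volume then waits out the eradication timer:

  # Sketch of the array-side cleanup behind Destroy Datastore; names are hypothetical.
  import purestorage

  array = purestorage.FlashArray("flasharray1.example.com", api_token="YOUR-API-TOKEN")

  # Disconnect the volume from the cluster's host group, then destroy it.
  array.disconnect_hgroup("vsphere-cluster-01", "vmfs-lab-ds01")
  array.destroy_volume("vmfs-lab-ds01")

  # The volume now sits in pending eradication (24 hours by default).
  # Only force-eradicate if you are absolutely sure you no longer need the data:
  # array.eradicate_volume("vmfs-lab-ds01")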

FlashArray Snapshot Creation

One of the benefits of FlashArray is its portable and lightweight snapshots. The good news is that you can create these directly from vSphere without having to log into the FlashArray. It’s worth mentioning that the snapshot recovery workflows built into the vSphere plugin (vVols and VMFS) are far more powerful and useful when you really need them, but I’m covering what I use regularly, and I rarely have to recover from snapshots in my lab. I try to take snapshots every time I make a major change to my environment in case I need to quickly roll back.

There are two separate workflows for snapshot creation: one for VMFS and one for vVols. The granularity advantage of vVols over VMFS is very clear here: with VMFS, you are taking snapshots of the entire VMFS datastore, no matter how many VMs or disks live on it. With vVols, you only snapshot the volumes you need, down to a single disk attached to a single VM.

With VMFS, (1) right-click on the datastore, (2) hover over Pure Storage and (3) left-click on Create Snapshot.

For a vVols backed disk, from the Virtual Machine Configure tab, navigate to the Pure Storage – Virtual Volumes pane, (1) select the disk you would like to snapshot and (2) click Create Snapshot.

A prompt will pop up where you can add a suffix to the snapshot if you’d like; click Create and you’ve got your FlashArray snapshot of a vVols-backed disk!
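
If you’d rather script these snapshots (for example, right before a round of lab changes), here is a hedged sketch with the purestorage Python client. The volume names and suffix are hypothetical, and the vVol example assumes you’ve already identified the FlashArray data volume backing the disk you care about.

  # Sketch of FlashArray snapshot creation; volume names and suffix are hypothetical.
  import purestorage

  array = purestorage.FlashArray("flasharray1.example.com", api_token="YOUR-API-TOKEN")

  # VMFS: one snapshot covers the entire datastore's backing volume.
  array.create_snapshot("vmfs-lab-ds01", suffix="pre-upgrade")

  # vVols: snapshot just the data volume behind a single VM disk.
  array.create_snapshot("vvol-vm01-vg/data-00000001", suffix="pre-upgrade")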

Stay tuned for a blog on the vSphere plugin features you might not know about that, like the above, can save you a significant amount of time and effort.

Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)

This is going to be broken up into two parts: first, a live migration where no VMs get powered off; second, a migration where you temporarily power off the VMs attached to the SCSI datastore.

Why would you want to do it one way or another?

Pros of live migration:

  • No VM downtime
  • Simpler configuration changes and less overlap, so there is less to go wrong or mess up

Pros of powering off VMs:

  • The total migration time will be significantly less because no data has to be moved. With the live path, Storage vMotion must copy the data host-side, since VMware currently doesn’t support XCOPY (even on the same array) for NVMe-oF; a rough sketch of that Storage vMotion step follows below
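
Since the live path is essentially a Storage vMotion from the SCSI-backed datastore to the NVMe-backed one, here is a hedged pyVmomi sketch of that single step. The vCenter address, credentials, VM name and target datastore name are all hypothetical, and this is only the data-movement piece, not the full migration procedure from the post.

  # Hedged sketch: Storage vMotion one VM to an NVMe-backed datastore with pyVmomi.
  # vCenter address, credentials, VM and datastore names are hypothetical.
  import ssl
  import time
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="password", sslContext=ctx)
  content = si.RetrieveContent()

  def find_by_name(vimtype, name):
      # Walk the inventory for the first object of this type with a matching name.
      view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
      return next(obj for obj in view.view if obj.name == name)

  vm = find_by_name(vim.VirtualMachine, "app-vm-01")
  target_ds = find_by_name(vim.Datastore, "nvme-fc-ds01")

  # Relocate only the storage; the VM stays powered on and on the same host.
  task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds))
  while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
      time.sleep(5)
  print(f"Storage vMotion finished with state: {task.info.state}")

  Disconnect(si)
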
Continue reading “Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)”

Pure Storage vSphere Remote Plugin™ 5.1.0 launch: vVol Point-in-Time Recovery

We are excited to announce the launch of the latest version of Pure Storage’s remote vSphere plugin, 5.1.0. It includes a number of bug fixes PLUS a highly sought-after feature: vVols VM point-in-time (PiT) recovery!

Why am I excited about this feature?

With vVol PiT VM recovery, you can now easily recover an entire VM that was accidentally deleted (and eradicated), or restore a VM back to a point in time captured by a snapshot you took directly from vCenter with Pure’s vSphere plugin.

The requirements are Pure’s vSphere remote plugin 5.1.0 and Purity™ 6.2.6 or higher, both for PiT revert and for PiT VM undelete of a vVol VM whose FlashArray™ volumes have been eradicated from the FlashArray itself. If you’re undeleting a vVol VM that has not been eradicated yet, that functionality is present for Purity versions 6.1 and lower.

For PiT VM revert, you will also need to make sure that you have snapshots of all of the volumes associated with the vVol VM except swap: at least one data volume and one configuration volume.

For VM undelete before the volumes have been eradicated, you will need a snapshot of the vVol VM’s configuration volume.

For VM undelete after the vVol-backed VM has been eradicated, you’ll need a FlashArray protection group snapshot of all the VM’s data volumes, managed snapshots and configuration volumes.
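
Since the post-eradication undelete depends on a FlashArray protection group snapshot already existing, it’s worth confirming (or scripting) that ahead of time. Here is a hedged sketch with the purestorage Python client; the protection group name is hypothetical, and in practice you’d more likely rely on a snapshot schedule configured on the group.

  # Sketch: take an on-demand protection group snapshot covering vVol VM volumes.
  # The protection group name is hypothetical; scheduled snapshots are more typical.
  import purestorage

  array = purestorage.FlashArray("flasharray1.example.com", api_token="YOUR-API-TOKEN")
  array.create_pgroup_snapshot("vvol-vms-pgroup")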

Rather than rehash what my teammate Alex Carver has put a lot of work into, I’m just going to link to the KB and videos he created:

Download the new plugin (part of Pure’s OVA), read the release notes and test out vVol PiT recovery today! Like a lot of things, it’s better to have some understanding of what’s happening and why before needing something that might be part of your recovery process. Please note that you can also upgrade in-place from 5.0.0 to 5.1.0 (and future remote plugin releases) by following this guide.

Native Pure Storage FlashArray™ File Replication – Purity 6.3

With the release of Purity 6.3, Native FA File replication has been added to the Pure Storage FlashArray™ software. This adds an often important feature to the FA File folder redirection solution I wrote about last year. Pure Storage is referring to this feature as ActiveDR for File Services.

ActiveDR for File Services is a useful feature if you’ve set up or are going to set up folder redirection on FA File and you would like the file data to be replicated asynchronously to a different array, whether that FlashArray hardware is at the same site or a different one. This feature is included with FlashArray.

This allows you to use your FlashArray for native block and file workloads that need the protection replication provides, while still benefiting from the great data reduction rates FlashArray is known for on those replicated file sets.

Now, if you lose a site or an array for some reason, the file workload you have hosted on FA File can be recovered natively on FlashArray easily and quickly.

There are some differences between file and block workloads when it comes to ActiveDR replication. You can read more in the ActiveDR for File Services section of this Pure KB.

vSphere Remote Plugin: .local vCenter Domains

Hello, Nelson Elam here! I’m a VMware Solutions Engineer at Pure Storage and wanted to make you aware of an issue we’ve seen crop up a couple of times recently with our vSphere Remote Plugin, and provide a quick explanation.

If your vCenter uses a .local domain (vcenter.purestorage.local is one example), you might have seen the following 3 errors in Pure’s vSphere Remote Plugin in vCenter:

  1. In the FlashArray list page, the error “Error retrieving array list. Please try again later.” is returned.
  2. When trying to import arrays via Pure1, the error “Authenticate with Pure1 to use this feature” is returned despite previously successful registration with Pure1 through the plugin.
  3. When adding an array manually, a “no permissions” error is returned.

Resolution:
To resolve this, follow step 14 from the Online Deployment Procedure for the remote plugin by running this command after customizing it to your environment:
pureuser@purestorage-vmware-appliance:~$ puredns setattr --search {your .local domain} --nameservers {ip or FQDN of DNS server}

So what’s going on here? When the OVA where you deployed the Remote vSphere Plugin tries to reach a vCenter with a .local domain suffix, it can’t resolve the DNS name unless you’ve provided the appropriate search domain for the OVA, and it will return different errors depending on where in vCenter you are trying to interact with it.
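
If you want to confirm that name resolution really is the culprit before changing anything, a quick check like the Python sketch below (run from the plugin appliance, or anywhere that shares its DNS settings) will tell you whether the .local FQDN resolves; the hostname is just the example from above.

  # Quick check: does the .local vCenter FQDN resolve from here?
  # The hostname is the example from this post; substitute your own vCenter.
  import socket

  fqdn = "vcenter.purestorage.local"
  try:
      print(f"{fqdn} resolves to {socket.gethostbyname(fqdn)}")
  except socket.gaierror as err:
      print(f"Could not resolve {fqdn}: {err}")
      print("Check the search domain and nameservers configured with puredns.")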

Luckily this is a simple fix despite the seemingly unrelated errors that pop up. Hopefully this was helpful!

NVMe-oF Multipath Configuration for Pure Storage Datastores

Hello- my name is Nelson Elam and I’m a Solutions Engineer at Pure Storage. I am guest writing this blog for use on Cody’s website. I hope you find it helpful!

With the introduction of Purity 6.1, Pure now supports NVMe-oF via Fibre Channel, otherwise known as NVMe/FC. For VMware configurations with multipathing, there are some important considerations. Please note that these multipathing recommendations apply to both NVMe-RoCE and NVMe/FC.

Continue reading “NVMe-oF Multipath Configuration for Pure Storage Datastores”

Digging into vSphere Workload Management Options

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

VMware Tanzu is a great way for VMware users to manage their virtual machine environments while in parallel coming up to speed with containers all under the same familiar pane of glass. In fact, that’s possibly the biggest value proposition that Tanzu gives us today: extending vSphere and all of its enterprise features and goodness to the realm of K8s in a recognizable context.

There’s a catch, though. Before one can start to use Tanzu, one has to get Tanzu set up. While it’s reasonably straightforward to do so, it is also important to note that there are multiple ways to enable Tanzu and to distinguish the pros/cons of each of them. It’s also key to understand what these underlying components are and how they interact, to help troubleshoot any potential problems. This post will focus on two of these methods: vCenter Network Option (HA-Proxy) and NSX-T (VMware Cloud Foundation). Cody covered the 3rd option of directly deploying Tanzu Kubernetes Grid to vSphere in an earlier post that can be found here.

It’s critical to note that you must be at a minimum of vSphere 7 before you can use either of the methods covered below. ESXi 6.7U3 and up is supported via the method shown in Cody’s post linked above. The two Workload Management enablement networking options are built right into the vSphere 7 UI under Workload Management when you add a cluster:

The other important prerequisite is that you will need one or more SPBM (Storage Policy Based Management) policies defined in order to get the Supervisor Cluster up and running. There are a couple of KB articles on our Platform Guide which show how to do this on the FlashArray and the differences between VMFS- and vVols-based policies.

So let’s lay out the key components and differences between the vCenter Server Network and NSX-T backed Workload Management options and provide some more information on how to enable each of them…

The vCenter Server Network option allows customers to get Workload Management working in their vSphere environment by deploying a single OVA (known as HA-Proxy) and does not require any other external products like NSX-T or SDDC Manager. This OVA and associated information can be found at this link on GitHub. The main benefit of this option is the relative simplicity of getting Tanzu running, since the only items you need to set up are a distributed switch portgroup to handle your Kubernetes ingress/egress ranges and the HA-Proxy OVA itself. The HA-Proxy will act as the load balancer for Kubernetes traffic and provide the Supervisor Cluster API endpoint. The downside is that the HA-Proxy VM represents a single point of failure, and larger Kubernetes deployments will at some point more than likely overwhelm the CPU/memory resources available to it. This option is best for those who want to look at Tanzu in a POC/exploratory type of setup.

Here’s a quick narrated technical video I created showing how to setup Workload Management with the HA-Proxy OVA:

After Workload Management was enabled, I created another quick demo video showing how to create Namespaces and deploy a Tanzu Kubernetes Guest Cluster:

NSX-T-based Workload Management is recommended to be backed/managed by a VMware Cloud Foundation deployment. This option gives you all of the enterprise-grade features, resilience and lifecycle management that come embedded with SDDC Manager. The con of this option is that there is more setup work and there are more moving pieces involved than with the other Tanzu deployment choices, and it requires additional licensing once the trial period expires. The key component that needs to be set up for Tanzu in particular is called an NSX-T Edge Cluster. An Edge Cluster is made up of one or more Edge VMs (really it should be at least two for resiliency), which help route and load-balance network traffic from your top-of-rack switch to the underlying Kubernetes deployments. The Edge Cluster deployment can be automated to a large extent from within SDDC Manager via a wizard.

In our lab, I went the route of manually deploying a 2-node Edge Cluster within NSX-T, as this gives a better ‘under the hood’ view of how everything works together. As most customers likely do not have top-of-rack switch access to set up BGP, I also decided to use the static routing option within NSX-T. Here’s a video showing how to set this up end-to-end:

With the Edge Cluster built, the next step is to enable Workload Management, which is shown in this video:

Hopefully this post has provided a bit of guidance and insight in terms of which Tanzu solution might be right for your environment. We are continuing to investigate and document how to best leverage the FlashArray with Kubernetes, so please check back here and our platform guide often for updates. Thanks for reading.

Automating FlashStack with SmartConfig and VMware Cloud Foundation

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

One of the (many) fun things we get to work on at Pure is researching and figuring out new ways to streamline things that are traditionally repetitive and time-consuming (read:  boring).  Recently, we looked at how we could go about automating the deployment of FlashStack™ end-to-end; since a traditional deployment absolutely includes some of these repetitive tasks.  Our goal is to start off with a completely greenfield FlashStack (racked, powered, cabled and otherwise completely unconfigured) and automate everything possible to end up with a fully-functional VMware environment ready for use.    After some thought, reading and discussion, we found that this goal was achievable with the combination of SmartConfig™ and VMware Cloud Foundation™. 

Automating a FlashStack deployment makes a ton of sense: from the moment new hardware is procured and delivered to a datacenter, the race is on for it to switch from a liability to a money-producing asset for the business. Further, using SmartConfig and Cloud Foundation together is really combining two blueprint-driven solutions: Cisco Validated Designs (CVDs) and VMware Validated Designs (VVDs). That does a lot to take the guesswork out of building the underlying infrastructure and hypervisor layers, since firmware, hardware and software versions have all been pre-validated and tested by Cisco, VMware and Pure Storage. In addition, these two tools also go through setting up these blueprints automatically via a customizable and repeatable framework.

Once we started working through this in the lab, the following automation workflow emerged:

Along with some introduction to the key technologies in play, we have divided the in-depth deployment guide into 3 core parts.  All of these sections, including product overviews and click-by-click instructions are publicly available here on the Pure Storage VMware Platform Guide.

  1. Deploy FlashStack with ESXi via SmartConfig. The input to this section will be factory-reset Cisco hardware, and the output will be a fully functional imaged/zoned/deployed UCS chassis with ESXi 7 installed and ready for use with VMware Cloud Foundation.
  2. Build VMware Cloud Foundation SDDC Manager on FlashStack. The primary input for CloudBuilder is, naturally, the output of the work in part 1: specifically, ESXi hosts and their underlying infrastructure, from which we will automatically deploy a Management Domain with CloudBuilder.
  3. The last section will show how to deploy a VMware Cloud Foundation Workload Domain with Pure Storage as both Principal Storage (VMFS on FC) and Supplemental Storage (vVols). Options such as iSCSI are covered in additional KB articles in the VMware Cloud Foundation section of the Pure Storage support site.

Post-deployment, customers will enjoy the benefits of single-click lifecycle management for the bulk of their UCS and VMware components and the ability to dynamically scale up or down their Workload Domain deployment resources independently or collectively based upon specific needs (e.g. compute/memory, network and/or storage) all from SDDC Manager.

For those who prefer a more interactive demo, I’ve recorded an in-depth overview video of this automation project followed by a four-part demo video series that shows click-by-click just how easy and fast it is to deploy a FlashStack with VMware from scratch. 

Craig Waters and I gave a Light Board session on this subject:

And this is an in-depth PowerPoint overview of the project:

Finally, this is a video series showing the end-to-end process in-depth broken into a few parts for brevity.

Revamped PowerShell Module for Pure and VMware

About 6 months ago, my esteemed colleague Barkz blogged about our path forward with PowerShell. We have an official PowerShell SDK for managing the FlashArray, but it is limited to exactly that: doing stuff to the FlashArray.

So, to add value and make managing the FlashArray easier within the context of the layers you actually manage your infrastructure from (VMware, Microsoft, etc.), we created some value-add PowerShell modules. Barkz talks about them here:

Continue reading “Revamped PowerShell Module for Pure and VMware”

Pure Storage Plugin v3 for vRealize Orchestrator

Today we released an updated plugin for vRO that is fully certified by VMware and is available on the VMware Marketplace:

Download it here.

What are the new features? Well, a lot. There are some bug fixes, but this release is mostly about new features:

  • ActiveCluster support
  • Enhanced protection group information
  • Throughput limits
  • Volume Groups
  • Pure1 REST API integration
  • Protocol Endpoints
  • Host Personality
Continue reading “Pure Storage Plugin v3 for vRealize Orchestrator”