Pure Storage Plugin for the vSphere Client 4.5.0 Release

Howdy doody folks. Lots of releases coming down the pipe in short order, and the latest is, well, the latest release of the Pure Storage Plugin for the vSphere Client. This may be our last release of it in this architecture (though we may have one or so more depending on things) in favor of the new preferred client-side architecture that VMware released in 6.7. Details on that here if you are curious.

Anyways, what’s new in this plugin?

The release notes are here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/Release_Notes_for_VMware_Solutions/Release_Notes%3A_Pure_Storage_Plugin_for_the_vSphere_Client#4.5.0_Release_Notes

But in short, five things:

  1. Improved protection group import wizard. This feature pulls in FlashArray protection groups and converts them into vVol storage policies. Previously this was rudimentary at best; it is now a full-blown, much more flexible wizard.
  2. Native performance charts. Previously, the performance charts for datastores (where we showed FlashArray performance stats in the vSphere Client) were actually an iframe pulled from our GUI. This was a poor decision. We have redone this entirely from the ground up: the stats now come from the REST API and are drawn natively using the Clarity UI, and there are far more stats shown.
  3. Datastore connectivity management. A few releases ago we added a feature to add an existing datastore to new compute, but it wasn't particularly flexible, wasn't helpful if there were connectivity issues, and didn't provide good insight into what was already connected. We now have an entirely new page that focuses on this.
  4. Host management. This has been entirely revamped. Initially, host management was laser-focused on one use case: connecting a cluster to a new FlashArray. There was no ability to add or remove a host or make adjustments, and, as above, no good insight into the current configuration. The host and cluster objects now have their own page with extensive controls.
  5. vVol Datastore Summary. This shows some basic information about the vVol datastore object.

First off, how do you install? The easiest method is PowerShell. See details (and other options) here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/User_Guides_for_VMware_Solutions/Using_the_Pure_Storage_Plugin_for_the_vSphere_Client/vSphere_Plugin_User_Guide%3A_Installing_the_vSphere_Plugin
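
If you already have the PureStorage.FlashArray.VMware module installed (more on that module in the next post), the install is only a couple of lines of PowerShell. A minimal sketch, assuming a plugin-install cmdlet along the lines of Install-PfavSpherePlugin; confirm the exact cmdlet and parameter names with Get-Command -Module PureStorage.FlashArray.VMware and the user guide above:

# Install the helper module if you do not already have it
Install-Module PureStorage.FlashArray.VMware

# Connect to the vCenter that should get the plugin
Connect-VIServer -Server vcenter.example.com -Credential (Get-Credential)

# Push the plugin to the connected vCenter (assumed cmdlet name; see the user guide for the supported options)
Install-PfavSpherePlugin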

Continue reading “Pure Storage Plugin for the vSphere Client 4.5.0 Release”

Managing vVol Storage Policies with PowerShell

I just posted about some new cmdlets here:

Also in that release are a few more cmdlets concerning storage policy creation, editing, and assignment. They were built to make the process easier. The original cmdlets are certainly still an option, and for very specific things you might want to do they might be necessary, but the vast majority of common operations can be achieved more easily with these.

As always, to install run:

Install-Module PureStorage.FlashArray.VMware

Or to upgrade:

Update-Module PureStorage.FlashArray.VMware
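
If you are not sure what you currently have, the standard module cmdlets will tell you:

# List every locally installed version of the module
Get-Module -ListAvailable PureStorage.FlashArray.VMware

# Check the newest version published in the PowerShell Gallery
Find-Module PureStorage.FlashArray.VMware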

These modules are open source, so if you just want to use my code or open an RFE or issue go here:

https://github.com/PureStorage-OpenConnect/PureStorage.FlashArray.VMware/

For detailed help on a cmdlet, run Get-Help followed by the cmdlet name.
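
For example, against one of the new policy cmdlets (every cmdlet in the module works the same way):

# Full help: syntax, parameter descriptions, and any examples
Get-Help New-PfaVvolStoragePolicy -Full

# Or show only the examples section
Get-Help New-PfaVvolStoragePolicy -Examples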

Continue reading “Managing vVol Storage Policies with PowerShell”

New vVol Replication PowerShell Cmdlets

Happy New Year everyone! Let’s work to make 2021 a better year.

In furtherance of that goal, I have put out a few new vVol-related PowerShell cmdlets! So baby steps I guess.

The following are the new cmdlets:

Basics:

  • Get-PfaVvolStorageArray

Replication:

  • Get-PfaVvolReplicationGroup
  • Get-PfaVvolReplicationGroupPartner
  • Get-PfaVvolFaultDomain

Storage Policy Management:

  • Build-PfaVvolStoragePolicyConfig
  • Edit-PfaVvolStoragePolicy
  • Get-PfaVvolStoragePolicy
  • New-PfaVvolStoragePolicy
  • Set-PfaVvolVmStoragePolicy

Now to walk through how to use them. This post will talk about the basics and the replication cmdlets. The next post will talk about the profile cmdlets.
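
To set expectations before the walkthrough, here is a rough sketch of the basics and replication cmdlets in use. The parameter names are illustrative only (most of these cmdlets also accept pipeline input), so check Get-Help for the real signatures; a PowerCLI connection to vCenter and a FlashArray connection are assumed prerequisites.

# Assumed prerequisite: a PowerCLI session to vCenter (plus a FlashArray connection via the module's connection cmdlets)
Connect-VIServer -Server vcenter.example.com

# Which FlashArray hosts this vVol VM? (parameter name is illustrative)
$vm = Get-VM -Name "vVol-VM-01"
Get-PfaVvolStorageArray -VM $vm

# Replication groups correspond to FlashArray protection groups; find the group for a VM,
# then its target-side partner group
$group = Get-PfaVvolReplicationGroup -VM $vm
Get-PfaVvolReplicationGroupPartner -ReplicationGroup $group

# Fault domains represent the arrays participating in replication
Get-PfaVvolFaultDomain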

Continue reading “New vVol Replication PowerShell Cmdlets”

Cloud Native Storage Part 1: Storage Policy Configuration

Previous series on Tanzu setup:

  1. Tanzu Kubernetes 1.2 Part 1: Deploying a Tanzu Kubernetes Management Cluster
  2. Tanzu Kubernetes 1.2 Part 2: Deploying a Tanzu Kubernetes Guest Cluster
  3. Tanzu Kubernetes 1.2 Part 3: Authenticating Tanzu Kubernetes Guest Clusters with Kubectl

The next step here is storage. I want to be able to provision persistent storage in Tanzu Kubernetes. Storage is generally managed and configured through a specification called the Container Storage Interface (CSI). CSI is a specification created to provide a consistent experience for storage provisioning and management in an orchestrated container environment. There are a ton of different storage types (SAN, NAS, DAS, SDS, cloud, etc.) and 100x that number of vendors. Management of and interaction with all of them is different. Many people deploying and managing containers are not experts in any of these, and have neither the time nor the interest to learn them. And if you change storage vendors, do you want to have to change your entire practice in k8s for it? Probably not.

So CSI takes some proprietary storage layer and provides an API mapping:

https://github.com/container-storage-interface/spec/blob/master/spec.md

Vendors can take that and build a CSI driver that manages their storage but provides a consistent experience above it.

At Pure Storage, for instance, we have our own CSI driver called Pure Service Orchestrator, which I will get to in a later series. For now, let's get into VMware's CSI driver. VMware's CSI driver is part of a whole offering called Cloud Native Storage.

https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md

https://blogs.vmware.com/virtualblocks/2019/08/14/introducing-cloud-native-storage-for-vsphere/

This has two parts: the CSI driver, which gets installed in the k8s nodes, and the CNS control plane within vSphere itself, which does the selecting and provisioning of storage. This requires vSphere 6.7 U3 or later. A benefit of using TKG is that the various CNS components come pre-installed.
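
To give a concrete flavor of the storage policy piece before the walkthrough, here is a minimal PowerCLI sketch of a policy CNS could consume. It uses a simple tag-based rule; the tag name, category, and policy name are hypothetical, and a policy built on array capabilities (for example FlashArray vVol capabilities) would follow the same New-SpbmRule/New-SpbmStoragePolicy pattern.

# Assumes PowerCLI is connected to vCenter and a tag named "flasharray" in category "storage" already exists (hypothetical names)
Connect-VIServer -Server vcenter.example.com

$tag = Get-Tag -Category "storage" -Name "flasharray"
$rule = New-SpbmRule -AnyOfTags $tag
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "cns-flasharray" -Description "Placement policy for CNS persistent volumes" -AnyOfRuleSets $ruleSet

A Kubernetes StorageClass can then reference that policy by name, and CNS takes care of placing the backing virtual disks accordingly.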

Continue reading “Cloud Native Storage Part 1: Storage Policy Configuration”

vVols, please report to the Principal’s Office! VCF 4.1 and vVols!

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

In VMware Cloud Foundation (VCF) version 4.1, vVols have taken center stage as a Principal Storage type available for Workload Domain deployments. This inclusion in one of VMware's premier products reinforces VMware's continued emphasis on vVols and all the benefits they enable. vVols with iSCSI is particularly exciting to us, as this is the first instance of the iSCSI protocol being supported as a Principal Storage type within VCF. We at Pure Storage are honored to have had a little bit of influence over this added functionality by serving as a design partner for this new feature, and we are confident you are going to like what you see!

Someone who is using VMFS datastores with VCF today might ask themselves 'why vVols?' This is a great question deserving of an expansive answer beyond this blog post. Fundamentally, though, using vVols enables you to fully use the FlashArray in the way it was intended. By leveraging VASA (vSphere APIs for Storage Awareness) you gain far more granular control and monitoring abilities over your individual VMs. Native FlashArray capabilities such as snapshots and replication are executed directly against the underlying array via policy-driven constructs. Further information on these and other benefits of vVols is available here.

Using vVols as Principal Storage is a lot like the methods VCF customers are used to for pre-existing Principal Storage options: image an ESXi host, apply a few prerequisites to it, commission it to SDDC Manager, and create Workload Domains. Deploying Workload Domains with VMware Cloud Foundation automates and takes all the guesswork out of deploying vCenter and NSX-T for modern use cases such as Kubernetes via Workload Management.

Stepping into some specifics for a moment, here's the process for using FlashArray iSCSI and vVols for VCF Workload Domains:

The most fundamental update to SDDC Manager to allow vVols is the capability to register a VASA Provider. In the screenshot below and the detailed steps that follow, we show an example of how you can add a FlashArray using another block protocol, Fibre Channel (a PowerCLI sketch of the equivalent registration directly against vCenter follows these steps):

  1. Provide a descriptive name for the VASA provider.  It is recommended to use the FlashArray name and append it with -ct0 or -ct1 to denote which controller the entry is associated with.
  2. Provide the URL for the VASA provider. This cannot be the management VIP of the array; instead, this field needs to be the management IP address associated with one of the controllers. The URL is also required to have the VASA port and version.xml appended to it. The format for the URL is: https://<IP of FlashArray controller>:8084/version.xml
  3. Provide a FlashArray username with the arrayadmin role. The procedure for how to create such a user can be found here. While the pureuser account can be used, we recommend creating and using a separate FlashArray user for VASA operations.
  4. Provide the password for the FlashArray username to be used.
  5. Container Name must be Vvol container.  Note that this value is case-sensitive.
  6. For Container Type, select FC from the drop-down menu to use Fibre Channel.
  7. Once all entries are completed, click Save.
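
For reference, the same pair of VASA providers can also be registered directly against a vCenter (outside of SDDC Manager) with PowerCLI. This is only a sketch with placeholder names and addresses; the same rules apply, so register each controller's management IP (not the VIP) and use a FlashArray user with the arrayadmin role.

# Placeholder controller IPs; the URL format matches step 2 above
$cred = Get-Credential   # FlashArray user with the arrayadmin role

New-VasaProvider -Name "flasharray-ct0" -Url "https://10.21.100.10:8084/version.xml" -Credential $cred
New-VasaProvider -Name "flasharray-ct1" -Url "https://10.21.100.11:8084/version.xml" -Credential $cred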

Obviously, there’s a lot more to share here so we will be expanding on this substantially in the very near future on our VMware Platform Guide site.

Rounding out this post, I’m happy to show a demo video of just how easy it is to deploy a FC+vVols-based Workload Domain with VMware Cloud Foundation.

Automating FlashStack with SmartConfig and VMware Cloud Foundation

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

One of the (many) fun things we get to work on at Pure is researching and figuring out new ways to streamline things that are traditionally repetitive and time-consuming (read: boring). Recently, we looked at how we could go about automating the deployment of FlashStack™ end-to-end, since a traditional deployment absolutely includes some of these repetitive tasks. Our goal is to start off with a completely greenfield FlashStack (racked, powered, cabled, and otherwise completely unconfigured) and automate everything possible to end up with a fully functional VMware environment ready for use. After some thought, reading, and discussion, we found that this goal was achievable with the combination of SmartConfig™ and VMware Cloud Foundation™.

Automating a FlashStack deployment makes a ton of sense: from the moment new hardware is procured and delivered to a datacenter, the race is on for it to switch from a liability to a money-producing asset for the business. Further, using SmartConfig and Cloud Foundation together really combines two blueprint-driven solutions: Cisco Validated Designs (CVDs) and VMware Validated Designs (VVDs). That does a lot to take the guesswork out of building the underlying infrastructure and hypervisor layers, since firmware, hardware, and software versions have all been pre-validated and tested by Cisco, VMware, and Pure Storage. In addition, these two tools set up those blueprints automatically via a customizable and repeatable framework.

Once we started working through this in the lab, the following automation workflow emerged:

Along with some introduction to the key technologies in play, we have divided the in-depth deployment guide into three core parts. All of these sections, including product overviews and click-by-click instructions, are publicly available here on the Pure Storage VMware Platform Guide.

  1. Deploy FlashStack with ESXi via SmartConfig. The input of this section will be factory-reset Cisco hardware, and the output will be a fully functional imaged/zoned/deployed UCS chassis with ESXi 7 installed and ready for use with VMware Cloud Foundation.
  2. Build VMware Cloud Foundation SDDC Manager on FlashStack.  The primary input for CloudBuilder is, not ironically, the output of the work in part 1.  Specifically, ESXi hosts and their underlying infrastructure, from which we will automatically deploy a Management Domain with CloudBuilder.
  3. The last section will show how to deploy a VMware Cloud Foundation Workload Domain with Pure Storage as both Principal Storage (VMFS on FC) and Supplemental Storage (vVols). Options such as iSCSI are covered in additional KB articles in the VMware Cloud Foundation section of the Pure Storage support site.

Post-deployment, customers will enjoy the benefits of single-click lifecycle management for the bulk of their UCS and VMware components, plus the ability to dynamically scale their Workload Domain resources up or down, independently or collectively, based upon specific needs (e.g. compute/memory, network, and/or storage), all from SDDC Manager.

For those who prefer a more interactive demo, I’ve recorded an in-depth overview video of this automation project followed by a four-part demo video series that shows click-by-click just how easy and fast it is to deploy a FlashStack with VMware from scratch. 

Craig Waters and I gave a Light Board session on this subject:

And this is an in-depth PowerPoint overview of the project:

Finally, this is a video series showing the end-to-end process in-depth broken into a few parts for brevity.

Extending vVols to VMware Cloud Foundation

Note: This is a guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

As we've covered in past posts, VMware Cloud Foundation (VCF) offers immense advantages to VMware users in terms of simplifying day 0 and day 1 activities and streamlining management operations within the vSphere ecosystem. Today, we dive into how to use Pure Storage's leading vVols implementation as Supplemental Storage with your Management and Workload Domains.

First, though, a brief description of the differences between Principal Storage and Supplemental Storage, and how they relate to VCF, is in order to set the table. Fortunately, it is very easy to distinguish between the two storage types:

Principal Storage is any storage type that you can connect to your Workload Domain as a part of the setup process within SDDC Manager.  Today, that’s comprised of vSAN, NFS and VMFS on Fibre Channel, pictured below.  We’ve shown how to use VMFS on FC previously.

Supplemental Storage simply means that you connect your storage system to a Workload Domain after it has been deployed.  Examples of this storage type today include iSCSI and the focus of this blog:  vVols.

Continue reading “Extending vVols to VMware Cloud Foundation”

Testing New SRA Release with a 2nd SRM Pair

At the time of writing this post, we are at work on the next release of our Storage Replication Adapter (SRA) for the FlashArray. In a discussion with a customer who needs the feature we are adding (what a nice coincidence!), the question came up: "what is the best way to test?" They want to test the SRA without fouling up their production SRM environment.

So a simple answer is, well, deploy two new vCenters and an SRM pair. But that requires hosts, a similar network configuration, authentication, and so on. So they wanted to use their existing vCenters but NOT their existing SRM servers.

SRM used to be a fairly rigid tool (for good reason; let's not break your DR). But in the past few years VMware has really opened it up: the tight coupling of vCenter version to SRM version has been loosened, and shared recovery sites and multiple SRM pairs per vCenter pair are now supported. This is where we come in.

Continue reading “Testing New SRA Release with a 2nd SRM Pair”

VMware Cloud Foundation and Pure Storage

A few weeks ago, VMware released the latest version of VMware Cloud Foundation, version 3.9. There were more than a few things in this release, but one enhancement was around Fibre Channel:

https://docs.vmware.com/en/VMware-Cloud-Foundation/3.9/rn/VMware-Cloud-Foundation-39-Release-Notes.html

The release notes mention:

Fibre Channel Storage as Principal Storage: Virtual Infrastructure (VI) workload domains now support Fibre Channel as a principal storage option in addition to VMware vSAN and NFS.

vCF 3.9 Release Notes

So this leads to three questions:

  • What does this mean?
  • What does this mean for non-FC storage?
  • What was the stance around FC storage BEFORE this release?

Let’s answer the first question first.

Continue reading “VMware Cloud Foundation and Pure Storage”

VMworld Europe 2019 and Pure Storage

It’s that time of the year again…again! Pure Storage is back at VMworld in Barcelona.

Before we get into what's happening at Barcelona, let's recap a bit of what happened at the US conference, as the best way to look to the future is sometimes to analyze the past.

Also check out this panel session I did with Rubrik at VMworld US.

  • VMware Cloud on AWS was once again a major topic. This is getting more and more attention and is something we are keeping a close eye on. The most important step around this for Pure is our new offering, now fully GA, called Cloud Block Store: our FlashArray software (Purity) now fully running in AWS. See my posts here and here on that.
  • VMware Cloud Foundation. This is the basis for pretty much all automated VMware stacks. SDDC Manager (the management point of VCF) allows you to deploy vCenter, NSX, vRealize (etc.) and manage their lifecycles. It provides the ability to create one "management" domain, which is where all of the VMware services are deployed, and one or more "workload" domains. Workload domains are basically a new vCenter server, which gets hooked in via ELM. When deploying a workload domain, the storage options used to be only vSAN or NFS; you could then add block storage after the fact. In the 3.9.0.0 release, you can now choose Fibre Channel storage as an option. Check out our KB here on it. I expect to hear more about this in Barcelona.
  • Containers, K8s, and more containers. VMware's work since the Heptio acquisition has not slowed down. I would be fairly comfortable saying that the announcements of Project Pacific and Project Tanzu were the talk of the town during, and certainly after, VMworld. I have no doubt this will bubble up more in Barcelona. The use case around First Class Disks and vVols is, I think, particularly intriguing.
  • vRealize Automation Cloud and vRealize Automation 8.0. vRealize Automation 8.0 is now GA. There are two major things here to unpack. First, VMware Cloud Automation Services was renamed to vRealize Automation Cloud. This in and of itself doesn't mean much (VMware loves to rename things), but what is actually important is the "traditional" vRealize Automation set: vRealize Automation 8 is now entirely based on the features, design, and architecture of vRealize Automation Cloud. Meaning that what vRAC offers is what vRA on-premises offers (same tools, integrations, features). This makes choosing between the two easier (one question to ask: do I want to host it, or do I want VMware to?). I expect details on this to be expanded in Barcelona.
  • vVols. Did you think I wouldn't bring this up?! Of course I would. vVols is coming back in a big way, in no small part due to VMware's renewed push on vVols in their products and with their partners. The automation, integration, and benefits of vVols are making more and more sense these days. VMware gets that, and so do the storage partners. A major topic around vVols is Site Recovery Manager support; expect to see more vendors talking about that as they furiously work on vVol replication support.
Continue reading “VMworld Europe 2019 and Pure Storage”