Cloud Native Storage Part 1: Storage Policy Configuration

Previous series on Tanzu setup:

  1. Tanzu Kubernetes 1.2 Part 1: Deploying a Tanzu Kubernetes Management Cluster
  2. Tanzu Kubernetes 1.2 Part 2: Deploying a Tanzu Kubernetes Guest Cluster
  3. Tanzu Kubernetes 1.2 Part 3: Authenticating Tanzu Kubernetes Guest Clusters with Kubectl

The next step here is storage. I want to configure the ability to provision persistent storage in Tanzu Kubernetes. Storage in Kubernetes is generally managed and configured through a specification called the Container Storage Interface (CSI), which was created to provide a consistent experience for storage provisioning and management in an orchestrated container environment. There are a ton of different storage types (SAN, NAS, DAS, SDS, cloud, etc.) and 100x that in vendors. Management of and interaction with all of them is different. Many people deploying and managing containers are not experts in any of these, and have neither the time nor the interest to learn them. And if you change storage vendors, do you want to have to change your entire practice in k8s for it? Probably not.

So CSI takes some proprietary storage layer and provides an API mapping:

https://github.com/container-storage-interface/spec/blob/master/spec.md

Vendors can take that and build a CSI driver that manages their storage but provides a consistent experience above it.

At Pure Storage, for instance, we have our own CSI driver called Pure Service Orchestrator, which I will get to in a later series. For now, let's get into VMware's CSI driver. VMware's CSI driver is part of a whole offering called Cloud Native Storage.

https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md

https://blogs.vmware.com/virtualblocks/2019/08/14/introducing-cloud-native-storage-for-vsphere/

This has two parts: the CSI driver, which gets installed in the k8s nodes, and the CNS control plane within vSphere itself, which does the selecting and provisioning of storage. This requires vSphere 6.7 U3 or later. A benefit of using TKG is that the various CNS components come pre-installed.
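To make that concrete, below is a minimal sketch of what consuming CNS looks like from the Kubernetes side: a StorageClass that points the vSphere CSI driver at a vSphere storage policy. The policy name "tanzu-gold" is just a placeholder for illustration; I am applying it from PowerShell here, but plain kubectl works the same.

```powershell
# Minimal sketch: a StorageClass that maps to a vSphere (SPBM) storage policy.
# "tanzu-gold" is a placeholder; use whatever policy you create in vCenter.
$storageClass = @"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tanzu-gold
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "tanzu-gold"
"@

# Apply it (assumes your kubeconfig already points at the guest cluster).
$storageClass | kubectl apply -f -

# Any PVC that requests this class now gets a CNS-provisioned volume.
kubectl get storageclass
```

The nice part is that the placement decision lives in the policy: a PVC just names the class, and CNS figures out which datastore satisfies that policy.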

Continue reading “Cloud Native Storage Part 1: Storage Policy Configuration”

vVols, please report to the Principal’s Office! VCF 4.1 and vVols!

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

In VMware Cloud Foundation (VCF) version 4.1, vVols have taken center stage as a Principal Storage type available for Workload Domain deployments.  This inclusion in one of VMware's premier products reinforces VMware's continued emphasis on vVols and all the benefits they enable.  vVols with iSCSI is particularly exciting to us, as this is the first instance of the iSCSI protocol being supported as a Principal Storage type within VCF.  We at Pure Storage are honored to have had a little bit of influence over this added functionality by serving as a design partner for this new feature, and we are confident you are going to like what you see!

Someone who is using VMFS datastores with VCF today might ask themselves 'why vVols?'. This is a great question, deserving of an expansive answer beyond this blog post.  Fundamentally, though, using vVols enables you to fully use the FlashArray in the way it was intended.  By leveraging VASA (VMware APIs for Storage Awareness) you gain far more granular control and monitoring abilities over your individual VMs.  Native FlashArray capabilities such as snapshots and replication are directly executed against the underlying array via policy-driven constructs.  Further information on these and other benefits of vVols is available here.

Using vVols as Principal Storage is a lot like the methods VCF customers are used to for pre-existing Principal Storage options: image an ESXi host, apply a few prerequisites to it, commission it to SDDC Manager and create Workload Domains.  Deploying Workload Domains with VMware Cloud Foundation automates deploying vCenter and NSX-T and takes all the guesswork out of enabling modern use cases such as Kubernetes via Workload Management.

Stepping into some specifics for a moment, here's the process for using FlashArray iSCSI and vVols for VCF Workload Domains:

The most fundamental update to SDDC Manager to allow vVols is the capability to register a VASA Provider.  In the below screenshot and the detailed steps that follow, we show an example of how you can add a FlashArray using another block protocol, Fibre Channel:

  1. Provide a descriptive name for the VASA provider.  It is recommended to use the FlashArray name and append it with -ct0 or -ct1 to denote which controller the entry is associated with.
  2. Provide the URL for the VASA provider.  This cannot be the management VIP of the array.  Instead, this field needs to be the management IP address associated with one of the controllers.  The URL is also required to have the VASA port and version.xml appended to it.  The format for the URL is:  https://<IP of FlashArray controller>:8084/version.xml (a quick way to sanity-check this endpoint is shown after this list).
  3. Provide a FlashArray username with the arrayadmin role.  The procedure for how to create such a user can be found here.  While the pureuser account can be used, we recommend creating and using a separate FlashArray user for VASA operations.
  4. Provide the password for the FlashArray username to be used.
  5. Container Name must be Vvol container.  Note that this value is case-sensitive.
  6. For Container Type, select FC from the drop-down menu to use Fibre Channel.
  7. Once all entries are completed, click Save.
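Before clicking Save, it can be worth confirming that the VASA URL from step 2 actually responds. Here's a quick sanity-check sketch; the IP is a placeholder for one controller's management address, and -SkipCertificateCheck assumes PowerShell 7 or later, since the array's VASA certificate is typically self-signed:

```powershell
# Sanity-check the VASA provider endpoint from step 2 before registering it.
# 10.0.0.10 is a placeholder for a controller management IP (not the array VIP).
$uri = "https://10.0.0.10:8084/version.xml"

# -SkipCertificateCheck requires PowerShell 7+; the VASA cert is usually self-signed.
$response = Invoke-WebRequest -Uri $uri -SkipCertificateCheck
$response.Content   # should return a small XML document of supported VASA versions
```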

Obviously, there's a lot more to share here, so we will be expanding on this substantially in the very near future on our VMware Platform Guide site.

Rounding out this post, I’m happy to show a demo video of just how easy it is to deploy a FC+vVols-based Workload Domain with VMware Cloud Foundation.

Automating FlashStack with SmartConfig and VMware Cloud Foundation

Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

One of the (many) fun things we get to work on at Pure is researching and figuring out new ways to streamline things that are traditionally repetitive and time-consuming (read: boring).  Recently, we looked at how we could go about automating the deployment of FlashStack™ end-to-end, since a traditional deployment absolutely includes some of these repetitive tasks.  Our goal was to start off with a completely greenfield FlashStack (racked, powered, cabled and otherwise completely unconfigured) and automate everything possible to end up with a fully-functional VMware environment ready for use.  After some thought, reading and discussion, we found that this goal was achievable with the combination of SmartConfig™ and VMware Cloud Foundation™.

Automating a FlashStack deployment makes a ton of sense: from the moment new hardware is procured and delivered to a datacenter, the race is on for it to switch from a liability to a money-producing asset for the business.  Further, using SmartConfig and Cloud Foundation together really combines two blueprint-driven solutions: Cisco Validated Designs (CVDs) and VMware Validated Designs (VVDs).  That does a lot to take the guesswork out of building the underlying infrastructure and hypervisor layers, since firmware, hardware and software versions have all been pre-validated and tested by Cisco, VMware and Pure Storage.  In addition, these two tools also go through setting up these blueprints automatically via a customizable and repeatable framework.

Once we started working through this in the lab, the following automation workflow emerged:

Along with some introduction to the key technologies in play, we have divided the in-depth deployment guide into 3 core parts.  All of these sections, including product overviews and click-by-click instructions, are publicly available here on the Pure Storage VMware Platform Guide.

  1. Deploy FlashStack with ESXi via SmartConfig.  The input of this section will be factory-reset Cisco hardware, and the output will be a fully functional imaged/zoned/deployed UCS chassis with ESXi 7 installed and ready for use with VMware Cloud Foundation.
  2. Build VMware Cloud Foundation SDDC Manager on FlashStack.  The primary input for CloudBuilder is, unsurprisingly, the output of the work in part 1: ESXi hosts and their underlying infrastructure, from which we will automatically deploy a Management Domain with CloudBuilder.
  3. The last section will show how to deploy a VMware Cloud Foundation Workload Domain with Pure Storage as both Principal Storage (VMFS on FC) and Supplemental Storage (vVols).  Options such as iSCSI are covered in additional KB articles in the VMware Cloud Foundation section of the Pure Storage support site.

Post-deployment, customers will enjoy the benefits of single-click lifecycle management for the bulk of their UCS and VMware components, and the ability to dynamically scale their Workload Domain resources up or down, independently or collectively, based upon specific needs (e.g. compute/memory, network and/or storage), all from SDDC Manager.

For those who prefer a more interactive demo, I’ve recorded an in-depth overview video of this automation project followed by a four-part demo video series that shows click-by-click just how easy and fast it is to deploy a FlashStack with VMware from scratch. 

Craig Waters and I gave a Light Board session on this subject:

And this is an in-depth PowerPoint overview of the project:

Finally, this is a video series showing the end-to-end process in-depth broken into a few parts for brevity.

Extending vVols to VMware Cloud Foundation

Note: This is a guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

As we've covered in past posts, VMware Cloud Foundation (VCF) offers immense advantages to VMware users in terms of simplifying day 0 and day 1 activities and streamlining management operations within the vSphere ecosystem.  Today, we dive into how to use Pure Storage's leading vVols implementation as Supplemental Storage with your Management and Workload Domains.

First, though, a brief description of the differences between Principal Storage and Supplemental Storage and how they relate to VCF is in order to set the table.  Fortunately, it is very easy to distinguish between the two storage types:

Principal Storage is any storage type that you can connect to your Workload Domain as a part of the setup process within SDDC Manager.  Today, that comprises vSAN, NFS and VMFS on Fibre Channel, pictured below.  We've shown how to use VMFS on FC previously.

Supplemental Storage simply means that you connect your storage system to a Workload Domain after it has been deployed.  Examples of this storage type today include iSCSI and the focus of this blog:  vVols.

Continue reading “Extending vVols to VMware Cloud Foundation”

Testing New SRA Release with a 2nd SRM Pair

At the time of writing this post, we are at work on the next release of our Storage Replication Adapter for the FlashArray. In a discussion with a customer who needs the feature we are adding (what a nice coincidence!), the question came up: "what is the best way to test?". They want to test the SRA without fouling up their production SRM environment.

So a simple answer is: well, deploy two new vCenters and an SRM pair. But that requires hosts, similar network configuration, authentication, etc. So they wanted to use their existing vCenters, but NOT their existing SRM servers.

SRM used to be a fairly rigid tool (for good reason; let's not break your DR). But in the past few years VMware has really opened it up: it loosened the tight vCenter-to-SRM version coupling, and added shared recovery sites and multiple SRM pairs per vCenter pair. This is where we come in.

Continue reading “Testing New SRA Release with a 2nd SRM Pair”

VMware Cloud Foundation and Pure Storage

A few weeks ago, VMware released the latest version of VMware Cloud Foundation, version 3.9. There were more than a few things in this release, but one enhancement was around Fibre Channel:

https://docs.vmware.com/en/VMware-Cloud-Foundation/3.9/rn/VMware-Cloud-Foundation-39-Release-Notes.html

The release notes mention:

Fibre Channel Storage as Principal Storage: Virtual Infrastructure (VI) workload domains now support Fibre Channel as a principal storage option in addition to VMware vSAN and NFS.

vCF 3.9 Release Notes

So this leads to three questions:

  • What does this mean?
  • What does this mean for non-FC storage?
  • What was the stance around FC storage BEFORE this release?

Let’s answer the first question first.

Continue reading “VMware Cloud Foundation and Pure Storage”

VMworld Europe 2019 and Pure Storage

It’s that time of the year again…again! Pure Storage is back at VMworld in Barcelona.

Before we get into what's happening at Barcelona, let's recap a bit of what happened at the US conference, as the best way to look to the future sometimes is to analyze the past.

Also check out this panel session I did with Rubrik at VMworld US.

  • VMware Cloud on AWS was once again a major topic–this is increasingly getting more attention and something we are keeping a close eye on. The most important step around this for Pure is our new offering that is now fully GA, called Cloud Block Store: our FlashArray software (Purity) now fully running in AWS. See my posts here and here on that.
  • VMware Cloud Foundation. This is the basis for pretty much all automated VMware stacks–SDDC Manager allows you to deploy vCenter, NSX, vRealize (etc.) and of course manage their lifecycles. SDDC Manager (the management point of VCF) provides the ability to create one "management" domain–this is where all of the VMware services are deployed–and also one or more "workload" domains. Workload domains are basically a new vCenter server, which gets hooked in via ELM. When deploying a workload domain, the storage options used to be only vSAN or NFS; you could then add block storage after the fact. In the 3.9.0.0 release, you can now choose Fibre Channel storage as an option. Check out our KB here on it. I expect to hear more about this in Barcelona.
  • Containers, K8s, and more containers. VMware's work since the Heptio acquisition has not slowed down. I would be fairly comfortable saying that the announcements of Project Pacific and Project Tanzu were the talk of the town during, and certainly after, VMworld. I have no doubt this will bubble up more in Barcelona. The use case around First Class Disks and vVols I think is particularly intriguing.
  • vRealize Automation Cloud and vRealize Automation 8.0. vRealize Automation 8.0 is now GA. There are two major things to unpack here. First, VMware Cloud Automation Services was renamed to vRealize Automation Cloud. This in and of itself doesn't mean anything (VMware loves to rename things), but what is actually important is what it means for the "traditional" vRealize Automation set: vRealize Automation 8 is now entirely based on the features/design/architecture of vRealize Automation Cloud. Meaning that what vRAC offers is what vRA on-premises offers (same tools, integrations, features). This makes choosing between the two easier (one question to ask: do I want to host it, or do I want VMware to?). I expect details on this to be expanded in Barcelona.
  • vVols. Did you think I wouldn't bring this up?! Of course I would. vVols is coming back in a big way, in no small part due to VMware's renewed push on vVols in their products and with their partners. The automation, integration, and benefits of vVols are making more and more sense these days. VMware gets that, and so do the storage partners. A major topic around vVols is Site Recovery Manager support. Expect to see more vendors talking about that as they furiously work on vVol replication support.

Continue reading “VMworld Europe 2019 and Pure Storage”

Installing the Pure Storage vSphere Plugin with vRealize Orchestrator

A few months back I published new PowerShell cmdlets for installing the Pure Storage vSphere Client Plugin: Get-PfavSpherePlugin and Install-PfavSpherePlugin. These work quite well and have seen a fair amount of use so far, but another place we are certainly investing in right now is vRealize Orchestrator, and we are continuing to enhance our plugin for it. Filling in any gaps around workflows and actions, especially for initializing an environment, is important.

One of those gaps was installing the vSphere Plugin. One common use case we have seen around vRO is day 0 configuration, while day 2 work (deploying VMs, datastores, etc.) is done in vCenter. So the vSphere Plugin comes in handy there. So how do I install it from vRO?
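For reference, this is roughly what the PowerShell route from that earlier post looks like. Treat it as a hedged sketch: the module name and the parameter-less invocations are from memory, so check Get-Help for the exact options in your version.

```powershell
# Hedged sketch of the PowerShell route; verify cmdlet parameters with Get-Help.
# The module name below is my recollection of the PowerShell Gallery package name.
Install-Module -Name PureStorage.FlashArray.VMware
Import-Module PureStorage.FlashArray.VMware

# Connect to the vCenter that should get the plugin (PowerCLI).
Connect-VIServer -Server vcenter.example.com

# See what plugin versions are available, then install into the connected vCenter.
Get-PfavSpherePlugin
Install-PfavSpherePlugin
```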

Continue reading “Installing the Pure Storage vSphere Plugin with vRealize Orchestrator”

Demystifying IO Operation Readouts in ESXi

This doesn't come up very often these days, but every once in a while it does, and every time it does, I look to see if we have documentation on it and there never seems to be any. After writing this post I did find a forum post where my friend Drew answers it too. Well anyways, let's quickly explain the situation.

Most block vendors these days tell customers to change the path switching policy for their storage in ESXi from the Round Robin default of 1,000 I/Os to 1. This makes ESXi switch logical paths for a given device after every I/O instead of after every 1,000. The reason I say this doesn't come up much anymore is that in modern versions of ESXi (6.0 express patch+, 6.5 U1+ and 6.7+) we (Pure) have rules in ESXi that make sure this is set by default without any user configuration. Many other vendors do as well.

Anyways, when using VMware tools to see if a device is configured properly, it can read out differently depending on how it was set.
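To ground that, here is a hedged PowerCLI sketch of reading the current value back and setting it for FlashArray devices; the hostname is a placeholder. The different readouts this setting can produce are what the rest of the post digs into.

```powershell
# Read back the Round Robin I/O limit for Pure devices on a host (PowerCLI).
$vmhost = Get-VMHost -Name "esxi01.example.com"   # placeholder hostname

Get-ScsiLun -VmHost $vmhost -LunType disk |
    Where-Object { $_.Vendor -eq "PURE" } |
    Select-Object CanonicalName, MultipathPolicy, CommandsToSwitchPath

# Set the limit to 1 I/O per path switch on any Pure device still at the default.
# Assumes the multipath policy on those devices is already Round Robin.
Get-ScsiLun -VmHost $vmhost -LunType disk |
    Where-Object { $_.Vendor -eq "PURE" -and $_.CommandsToSwitchPath -ne 1 } |
    Set-ScsiLun -CommandsToSwitchPath 1
```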

Continue reading “Demystifying IO Operation Readouts in ESXi”

FlashArray HTML-5 vSphere Client Plugin VVol Support

Not long ago I posted about the initial release of our vSphere Plugin that supports the HTML-5 UI–the main problem, though, was that it did not yet support the vVol features we put in the original Flash/Flex-based plugin.

So accordingly, the most common question I received was "when are you adding vVol support to this one?". And my response was "Soon! We are working on it".

Continue reading “FlashArray HTML-5 vSphere Client Plugin VVol Support”