Extending vVols to VMware Cloud Foundation

Note: This is a guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

As we've covered in past posts, VMware Cloud Foundation (VCF) offers immense advantages to VMware users in terms of simplifying day 0 and day 1 activities and streamlining management operations within the vSphere ecosystem. Today, we dive into how to use Pure Storage's leading vVols implementation as Supplemental Storage with your Management and Workload Domains.

First, though, a brief description of the differences between Principal Storage and Supplemental Storage, and how they relate to VCF, is in order to set the table. Fortunately, it is very easy to distinguish between the two storage types:

Principal Storage is any storage type that you can connect to your Workload Domain as part of the setup process within SDDC Manager. Today, that comprises vSAN, NFS, and VMFS on Fibre Channel. We've shown how to use VMFS on FC previously.

Supplemental Storage simply means that you connect your storage system to a Workload Domain after it has been deployed.  Examples of this storage type today include iSCSI and the focus of this blog:  vVols.

Continue reading “Extending vVols to VMware Cloud Foundation”

What’s New in vSphere 7.0 Storage Part I: vVols are all over the place!

Ah, it's time for another round of "what's new" with vSphere external storage. Before I get into the more traditional feature rundown of this series, I wanted to first note some important announcements around vVols.

So the first thing that's "new" in storage with vSphere 7.0 is that VMware is taking vVols extremely seriously now. For vVols, 2018 was about spreading the word on their value, 2019 was about getting vendors to dig in, and 2020 is about VMware and storage partners delivering on it. This is just the start.

Site Recovery Manager

This is, of course, the big one. You can check out the announcement here:

https://blogs.vmware.com/virtualblocks/2020/03/10/whats-new-srm-vr-83/

Since day 1 of SRM, array-based replication has been of primary importance. SRM was essentially built to provide a common orchestration tool for disaster recovery: it automated the VMware steps of recovering virtual machines while coordinating with the underlying replication on the array to make sure the data was at site B and ready to be used when needed. This coordination was handled through something called a Storage Replication Adapter (an SRA).

The fundamental problem with SRAs was that they were entirely an SRM "thing". Replication configuration and management had to be done elsewhere. It couldn't be done natively in vSphere. At best, there was a vSphere plugin that could help, but that only integrated the configuration of replication into the UI, not into vSphere itself, so managing changes wasn't scalable. Furthermore, every vendor did it differently (if they even had a plugin that could do it).

There was ZERO consistency beyond how SRM ran recovery plans. This is what vVol replication integration was designed to fix.

First off, it integrates directly with VM provisioning and policy-based management, so there is no need to install or use a plugin to manage replication protection for VMs. It is also built into vSphere itself, not just the UI. This allows it to be managed and configured however you manage vSphere (PowerCLI, vRO, vRA, Python, etc.) without additional plugins.
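As a rough illustration of what that looks like outside the UI, here is a minimal PowerCLI sketch of assigning a replication-type storage policy and a replication group to a VM. The policy name, VM name, and the choice of the first compatible replication group are placeholders, so treat this as a sketch rather than a definitive recipe.

  # Minimal PowerCLI sketch (names are placeholders); assumes an existing vCenter
  # connection and a replication-capable vVol storage policy.
  $policy = Get-SpbmStoragePolicy -Name "FlashArray-Replicated"

  # Replication groups the array advertises as compatible with this policy.
  $group = Get-SpbmReplicationGroup -StoragePolicy $policy | Select-Object -First 1

  # Apply the policy and replication group to the VM home object and its disks.
  $vm = Get-VM -Name "SQL-VM-01"
  Get-SpbmEntityConfiguration -VM $vm |
      Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $group
  Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) |
      Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $group

Once the VM lands in a replication group that belongs to an SRM protection group, it is picked up automatically, as described below.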

As vVols have REALLY picked up steam in the last year, VMware has refocused its efforts on fixing the lingering issues and gaps that were preventing further vVol adoption, which has been a common request from customers.

Let’s be clear here: the stated path for VMware storage of the future is vVols and vSAN. VMware is obviously finally committing to this ideal.

So now in SRM, you can create a protection group that discovers replicated VMs not via the SRA, but by querying the vSphere API directly for vVol replication groups.

So you add vVol replication groups directly to an SRM protection group, which is very similar in concept to how datastore groups are used with SRA-based protection groups.

When you choose an SPBM policy for a given VM, you then choose a replication group (if it is a replication-type policy). As you add VMs to (or remove them from) the replication group, they are automatically protected (or unprotected) by SRM, further integrating the process into SPBM.

Stay tuned for a lot more on this!

vRealize Operations Manager

vRealize Operations Manager (vROps) is a fantastic tool for datacenter trending, analysis, balancing, monitoring, and so on. Many vendors have what is called a management pack, which integrates their specific objects, metrics, and alerts into vROps so they can be associated with the various related VMware objects (and their metrics, alerts, and their own related objects).

When it came to vVols, there was a gap: vROps didn't quite know how to understand a vVol datastore, so it didn't know how to relate VMs and their disks, and therefore the vendor couldn't really relate them to their storage objects. So any vVol integration by vendors was at best half done.

So in vROps 8.1 the vVol datastore exists:

[Image: the vVol datastore object shown in vROps 8.1]

This opens up a whole new world of storage management packs! I’m very excited to build more onto our management pack to take advantage of this final connector we needed!

vSphere with Kubernetes

Project Pacific no more! There are a lot of places to get more information on this, though a great place to start is here:

In short, it tightly integrates Kubernetes into vSphere: manage and control your containers and K8s pods as first-class citizens, just like your VMs of yore.

Persistent storage is presented through the VMware CSI driver, called CNS (Cloud Native Storage). CNS uses existing storage options for provisioning, but in a new way. First, it is based on Storage Policy Based Management (your storage classes for CSI provisioning are based on policies). Furthermore, it uses First Class Disks (FCDs) instead of standard virtual disks, which I talk about here:

FCDs are just virtual disks, but in the API they are first-class objects: they can be created and exist independently of a VM. That makes sense for something that is not a VM (or, more to the point, something that might not be as persistent as a VM), like a container.

FCDs can be created, snapshotted, resized, etc., just like a virtual disk, but without a VM to own them. Sounds a lot like a persistent volume claim!
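For a sense of what that looks like in practice, here is a rough PowerCLI sketch of creating and listing an FCD. The datastore and disk names are placeholders, and FCD cmdlet coverage varies by PowerCLI version, so check Get-Help New-VDisk in your environment.

  # Rough sketch (names are placeholders): a First Class Disk is created against
  # a datastore directly, with no VM to own it.
  $ds = Get-Datastore -Name "FlashArray-vVol-DS"

  # Create a 20 GB disk that exists on its own.
  New-VDisk -Name "k8s-pv-demo" -CapacityGB 20 -Datastore $ds

  # List the First Class Disks on that datastore.
  Get-VDisk -Datastore $ds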

vVols plus FCDs make this story even better, because configuration is controlled in policies (get, set, check) and the volume is a first-class object on the array too. On the FlashArray, since vVols are just volumes, if that persistent volume claim (that volume) is in use in a non-VMware K8s environment, it should be easily imported into vSphere with Kubernetes through a vVol FCD. Look for more information as we build out documentation and tools around this.

Very excited about the future of this!

VMware Cloud Foundation

The mother of all VMware automation. I blogged about it a while ago here:

This is becoming more and more important, and VMware is improving it to have better storage integration into SDDC Manager, as covered above. VMware has announced partner support for vVols as supplemental storage (we will have documentation on that very soon), which is just the start.

This is just the start to vVols in 2020! Stay tuned!

Default FlashArray Connection With PowerShell

In the Pure Storage VMware PowerShell module (PureStorage.FlashArray.VMware), there is a default array connection stored in a global variable called $Global:DefaultFlashArray, and all connected FlashArrays are stored in $Global:AllFlashArrays. The module's cmdlets automatically use what is in the "default" variable.

The underlying "core" Pure Storage PowerShell module (PureStoragePowerShellSDK) does not yet take advantage of global connections, so for each cmdlet you run, you must pass in the "array" parameter. For example, to get all of the volumes from an array:
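Something along these lines; a minimal sketch where the array address is a placeholder and parameter names may differ slightly by module version:

  # Minimal sketch (array address is a placeholder). Connect with the VMware
  # module so the connection lands in the global default variable.
  Import-Module PureStorage.FlashArray.VMware
  New-PfaConnection -EndPoint "flasharray-x50-1.example.com" -Credentials (Get-Credential) -IgnoreCertificateError -DefaultArray

  # The core SDK cmdlet does not look at the global variable on its own, so the
  # connection must be passed in explicitly with -Array every time.
  Get-PfaVolumes -Array $Global:DefaultFlashArray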

Kind of annoying if you are interactively running commands and only have one array connection you care about (or one that you primarily care about).

Continue reading “Default FlashArray Connection With PowerShell”

Testing New SRA Release with a 2nd SRM Pair

At the time of writing this post, we are at work on the next release of our Storage Replication Adapter (SRA) for the FlashArray. In a discussion with a customer who needs the feature we are adding (what a nice coincidence!), the question came up: "What is the best way to test?" They want to test the SRA without fouling up their production SRM environment.

So a simple answer is: well, deploy two new vCenters and an SRM pair. But that requires certain hosts, similar network configuration, authentication, and so on. So they wanted to use their existing vCenters but NOT their existing SRM servers.

SRM used to be a fairly rigid tool (for good reason; let's not break your DR). But in the past few years VMware has really opened it up: a loosened vCenter-to-SRM version coupling, shared recovery sites, and multiple SRM pairs per vCenter pair. This is where we come in.

Continue reading “Testing New SRA Release with a 2nd SRM Pair”

Generating a Pure1 REST JWT with Python

I've written before about generating the JSON Web Token (JWT) for Pure1 REST API authentication, mostly around PowerShell. Of course, many may not want to use PowerShell and would prefer something like Python.

So here is the process.

We have a script posted on the support site here, but that script doesn't actually return the JWT; it creates a session, so it takes the next step after the JWT. If you just want to generate the JWT so something else can authenticate, it won't do the trick. So I made some modifications and threw it on GitHub as a gist. You can get it here:

https://gist.github.com/codyhosterman/697ebfd72c4f7f7276afc3b74e3b5e40

First off let’s review how to actually authenticate:

  1. Create a private/public key pair
  2. Enter the public key into Pure1
  3. Take the provided application ID and generate a JSON web token
  4. Send the JSON web token to Pure1 for an access token

I will walk through steps 1-3, using Python on Linux to generate the JWT.

Continue reading “Generating a Pure1 REST JWT with Python”

Persisting a Pure1 Certificate created by PowerShell

In a previous post, I talked about how to easily create the private/public key pair needed for Pure1:

So when I create a certificate in PowerShell, I store the reference in an object; in the case below, the object is called $cert:
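A generic example of what that can look like (the subject name and key parameters here are placeholders, not necessarily what the earlier post used):

  # Generic sketch (subject and key parameters are placeholders): create a
  # self-signed certificate and keep the reference in $cert for this session.
  $cert = New-SelfSignedCertificate -Subject "CN=Pure1-API" `
      -KeyAlgorithm RSA -KeyLength 2048 `
      -CertStoreLocation "Cert:\CurrentUser\My"

  # $cert now points at the certificate object, e.g. for exporting the public key.
  $cert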

But if I close that PowerShell window the object is removed. What happens to that certificate? What if I want to re-use it? Good question.

Continue reading “Persisting a Pure1 Certificate created by PowerShell”

Troubleshooting a Pure1 Connection with the vSphere Plugin

In the 4.2.0 release of the vSphere Plugin, we added Pure1 integration, which provides additional insight into your Pure Storage and vSphere environment. In order to use this, though, you need to connect the plugin with Pure1, of course. The authentication method is based on a process that involves something called a JSON Web Token. This is a secure option, but a bit more involved than a username and password. I made the process of generating it fairly easy, but if something goes wrong you get a fun error message like the one below:

Continue reading “Troubleshooting a Pure1 Connection with the vSphere Plugin”

Deploying the Pure Storage OVA to vCenter 6.0

I will start this off with the usual rant: why are you on vCenter 6.0?! It goes End of Support on March 12th, 2020! https://kb.vmware.com/s/article/66977

But you likely know that, probably have your reasons, and didn't come here for a lecture; you came here for an answer! Can I deploy the OVA to vCenter 6.0? Is it supported?

Let's first clarify a few things. There are two different aspects to vCenter support with the collector: what it can be deployed TO and what it can collect FROM.

Let's start off with what versions of vCenter it can collect from. We support collecting back to vCenter 5.5. The ESXi host versions we support for collection in a given vCenter line up with whatever versions of ESXi that vCenter supports. We support collection from versions up to the latest release of vCenter at the time of this writing, vCenter 6.7 U3. So collection support spans 5.5 through 6.7.x. As new vSphere releases come out, we will add them at that time.

Continue reading “Deploying the Pure Storage OVA to vCenter 6.0”

Automating the Setup and Configuration of the Pure Storage OVA with PowerShell

A few weeks back we introduced the Pure Storage OVA, which currently focuses on the VM Analytics Collector, and I blogged about deploying it with PowerShell:

https://www.codyhosterman.com/2019/10/deploying-the-pure1-vm-analytics-collector-ova-with-powershell/

This was only deploying the OVA, not configuring it. Once deployed, you need to (or might want to):

  • Change the default password
  • Add vCenters
  • Remove vCenters
  • Import configuration
  • Test phone home

Wouldn't it be nice to do all of that from PowerShell instead of SSH or the VM console? Of course it would! So I got to work on it. I have now updated my cmdlet Deploy-PfaAppliance to be able to reset the default password upon deployment, and added a new cmdlet called Get-PfaAppliance to retrieve appliances and then configure them.
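I won't repeat the full parameter sets here; a safe way to explore the updated cmdlets is to let the module describe them itself:

  # Quick sketch: discover the appliance cmdlets and their parameters from the
  # module itself rather than assuming signatures.
  Import-Module PureStorage.FlashArray.VMware
  Get-Command -Name Deploy-PfaAppliance, Get-PfaAppliance
  Get-Help Get-PfaAppliance -Examples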

Continue reading “Automating the Setup and Configuration of the Pure Storage OVA with PowerShell”

Pure Storage Plugin 4.2.0 for the HTML-5 vSphere Client

Another quarter, another vSphere Plugin release from Pure! This is the release I have been really looking forward to as it sets the stage for a lot of the future work I want to build into the plugin. To recap:

  • 4.0.0 was the initial release of our plugin, with only basic configuration support and VMFS management.
  • 4.1.0 was the 2nd release, which added vVol support back into the plugin.
  • 4.2.0 enhances the plugin with more vVol functionality as well as Pure1 integration! So we are finally to the point where we are adding features that were never in the previous Flash plugin. Yay!

So what are the new features?

  • Pure1 authentication
  • FlashArray fleet registration
  • Load meter integration
  • Pure1 tag integration
  • Intelligent provisioning
  • Full VM-undelete

Continue reading “Pure Storage Plugin 4.2.0 for the HTML-5 vSphere Client”