Extending vVols to VMware Cloud Foundation

Note: This is a guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

As we’ve covered in past posts, VMware Cloud Foundation (VCF) offers immense advantages to VMware users in terms of simplifying day 0 and day 1 activities and streamlining management operations within the vSphere ecosystem. Today, we dive into how to use Pure Storage's leading vVols implementation as Supplemental storage with your Management and Workload Domains.

First, though, a brief description of the differences between Principal Storage and Supplemental Storage, and how they relate to VCF, is in order to set the table. Fortunately, it is very easy to distinguish between the two storage types:

Principal Storage is any storage type that you can connect to your Workload Domain as part of the setup process within SDDC Manager. Today, that consists of vSAN, NFS, and VMFS on Fibre Channel, pictured below. We’ve shown how to use VMFS on FC previously.

Supplemental Storage simply means storage that you connect to a Workload Domain after it has been deployed. Examples of this storage type today include iSCSI and the focus of this blog: vVols.

Continue reading “Extending vVols to VMware Cloud Foundation”

Testing New SRA Release with a 2nd SRM Pair

At the time of writing this post, we are at work on the next release of our Storage Replication Adapter (SRA) for the FlashArray. In a discussion with a customer who needs the feature we are adding (what a nice coincidence!), the question came up: “what is the best way to test?” They want to test the SRA without fouling up their production SRM environment.

A simple answer is: well, deploy two new vCenters and an SRM pair. But that requires spare hosts, similar network configuration, authentication, and so on. So they wanted to use their existing vCenters but NOT their existing SRM servers.

SRM used to be a fairly rigid tool (for good reason; let’s not break your DR). But in the past few years VMware has really opened it up: it loosened the tight vCenter-to-SRM version coupling, and added shared recovery sites and multiple SRM pairs per vCenter pair. This is where we come in.

Continue reading “Testing New SRA Release with a 2nd SRM Pair”

VMware Cloud Foundation and Pure Storage

A few weeks ago, VMware released the latest version of VMware Cloud Foundation, version 3.9. There were more than a few things in this release, but one enhancement was around Fibre Channel:

https://docs.vmware.com/en/VMware-Cloud-Foundation/3.9/rn/VMware-Cloud-Foundation-39-Release-Notes.html

The release notes mention:

Fibre Channel Storage as Principal Storage: Virtual Infrastructure (VI) workload domains now support Fibre Channel as a principal storage option in addition to VMware vSAN and NFS.

vCF 3.9 Release Notes

So this leads to three questions:

  • What does this mean?
  • What does this mean for non-FC storage?
  • What was the stance around FC storage BEFORE this release?

Let’s answer the first question first.

Continue reading “VMware Cloud Foundation and Pure Storage”

VMworld Europe 2019 and Pure Storage

It’s that time of the year again…again! Pure Storage is back at VMworld in Barcelona.

Before we get into what’s happening at Barcelona, let’s recap a bit of what happened at the US conference, as sometimes the best way to look to the future is to analyze the past.

Also check out this panel session I did with Rubrik at VMworld US.

  • VMware Cloud on AWS was once again a major topic. It is getting more and more attention and is something we are keeping a close eye on. The most important step around this for Pure is our new offering, now fully GA, called Cloud Block Store: our FlashArray software (Purity) now fully running in AWS. See my posts here and here on that.
  • VMware Cloud Foundation. This is the basis for pretty much all automated VMware stacks: SDDC Manager allows you to deploy vCenter, NSX, vRealize (etc.) and of course manage their lifecycles. SDDC Manager (the management point of VCF) provides the ability to create one “management” domain, where all of the VMware services are deployed, as well as one or more “workload” domains. A workload domain is basically a new vCenter server, which gets hooked in via ELM. When deploying a workload domain, the storage options used to be only vSAN or NFS; you could then add block storage after the fact. In the 3.9.0.0 release, you can now choose Fibre Channel storage as an option. Check out our KB here on it. I expect to hear more about this in Barcelona.
  • Containers, K8s, and more containers. VMware’s work since the Heptio acquisition has not slowed down. I would be fairly comfortable saying that the announcements of Project Pacific and Project Tanzu were the talk of the town during, and certainly after, VMworld. I have no doubt this will bubble up more in Barcelona. The use case around First Class Disks and vVols is, I think, particularly intriguing.
  • vRealize Automation Cloud and vRealize Automation 8.0. vRealize Automation 8.0 is now GA. There are two major things here to unpack. First, VMware Cloud Automation Services was renamed to vRealize Automation Cloud. That in and of itself doesn’t mean much (VMware loves to rename things), but what is actually important is the effect on the “traditional” vRealize Automation set: vRealize Automation 8 is now entirely based on the features/design/architecture of vRealize Automation Cloud. Meaning that what vRAC offers is what vRA on-premises offers (same tools, integrations, features). This makes choosing between the two easier (one question to ask: do I want to host it, or do I want VMware to?). I expect details on this to be expanded in Barcelona.
  • vVols. Did you think I wouldn’t bring this up?! Of course I would. vVols is coming back in a big way, and in no small part due to VMware’s renewed push on vVols in their products and with their partners. The automation, integration, and benefits of vVols are making more and more sense these days. VMware gets that, and so do the storage partners. A major topic around vVols is Site Recovery Manager support. Expect to see more vendors talking about that as they furiously work on vVol replication support.
Continue reading “VMworld Europe 2019 and Pure Storage”

Installing the Pure Storage vSphere Plugin with vRealize Orchestrator

A few months back I published new PowerShell cmdlets for installing the Pure Storage vSphere Client Plugin: Get-PfavSpherePlugin and Install-PfavSpherePlugin. They work quite well and we’ve seen a fair amount of use so far, but another place we are certainly investing in right now is vRealize Orchestrator and continuing to enhance our plugin there. Filling in any gaps around workflows and actions, especially around initializing an environment, is important.

One of those gaps was installing the vSphere Plugin. A common use case we have seen around vRO is day 0 configuration, while day 2 work (deploying VMs, datastores, etc.) is done in vCenter, so the vSphere Plugin comes in handy there. So how do you install it from vRO?
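For comparison, the PowerShell route looks roughly like the sketch below. The cmdlet names come from the post above, but the FlashArray connection cmdlet and all parameter names are my assumptions, so check Get-Help against your installed module version.

```powershell
# Rough sketch only: plugin cmdlet names per the post above; parameter names are assumptions.
# Assumes the Pure Storage PowerShell SDK/module and VMware PowerCLI are installed.
Connect-VIServer -Server "vcenter.example.com" -Credential (Get-Credential)
$fa = New-PfaArray -EndPoint "flasharray.example.com" -Credentials (Get-Credential) -IgnoreCertificateError

# List the plugin versions available
Get-PfavSpherePlugin

# Register (or upgrade) the plugin on the connected vCenter
Install-PfavSpherePlugin -Flasharray $fa
```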

Continue reading “Installing the Pure Storage vSphere Plugin with vRealize Orchestrator”

Demystifying IO Operation Readouts in ESXi

This doesn’t come up very often these days, but every once in a while it does, and every time it does I look to see if we have documentation on it, and there never seems to be any. After writing this post I did find a forum post where my friend Drew answers it as well. Anyway, let’s quickly explain the situation.

Most block vendors these days tell customers to change the path switching setting for their storage in ESXi from the Round Robin default of 1,000 I/Os down to 1. This makes ESXi switch logical paths for a given device after every I/O instead of after every 1,000. The reason I say this doesn’t come up much anymore is that in modern versions of ESXi (6.0 express patch+, 6.5 U1+, and 6.7+) we (Pure) have rules in ESXi that make sure this is set by default without any user configuration. Many other vendors do as well.

Anyway, when using VMware tools to check whether a device is configured properly, the value can read out differently depending on how it was set.
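As a quick illustration (my own sketch, not from the post itself), here is one way to check the per-device value with PowerCLI. The naa.624a9370 prefix identifies FlashArray devices, and CommandsToSwitchPath is the I/O operations limit discussed above; the host name and device serial are placeholders.

```powershell
# Assumes an existing Connect-VIServer session.
$esx = Get-VMHost -Name "esxi01.example.com"   # hypothetical host name

# FlashArray devices carry the naa.624a9370 prefix; CommandsToSwitchPath is the
# I/O operations limit this post talks about (1,000 by default, 1 recommended).
Get-ScsiLun -VmHost $esx -LunType disk |
    Where-Object { $_.CanonicalName -like "naa.624a9370*" } |
    Select-Object CanonicalName, MultipathPolicy, CommandsToSwitchPath

# A device that is not yet at 1 can be adjusted per device, for example:
# Get-ScsiLun -VmHost $esx -CanonicalName "naa.624a9370<rest-of-serial>" |
#     Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1
```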

Continue reading “Demystifying IO Operation Readouts in ESXi”

FlashArray HTML-5 vSphere Client Plugin VVol Support

Not long ago I posted about the initial release of our vSphere Plugin that supports the HTML-5 UI. The main problem, though, is that it did not yet support the VVol features we put in the original Flash/Flex-based plugin.

Accordingly, the most common question I received was “when are you adding VVol support to this one?” And my response was “Soon! We are working on it.”

Continue reading “FlashArray HTML-5 vSphere Client Plugin VVol Support”

Retrieving Storage Policy of a VM with vRO

I recently saw a post on Reddit about pulling the storage policy of a VM using vRO, where it was stated that this was not possible, supposedly confirmed by VMware support.

Now I don’t know when they asked VMware support, and if it was two years or so ago, then that was true. But it is certainly not true now. Though I will admit, it is not super intuitive to figure out unless you know where to look. Here is how you do it.

By the way, I only tested this with VVol storage policies, but it really should not matter at all.
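The post itself walks through the vRO route; purely as a point of comparison (and not what the post demonstrates), the PowerCLI equivalent is a couple of cmdlets from the SPBM module. The VM name below is a placeholder.

```powershell
# Not the vRO approach from the post above, just the PowerCLI equivalent for comparison.
# Requires VMware.VimAutomation.Storage (included with current PowerCLI releases).
$vm = Get-VM -Name "sql-vm-01"   # hypothetical VM name

# Storage policy associated with the VM home object
Get-SpbmEntityConfiguration -VM $vm

# Storage policies associated with each of the VM's virtual disks
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm)
```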

Continue reading “Retrieving Storage Policy of a VM with vRO”

Generating the default VVol Storage Container ID

A VVol datastore is not a file system, so it is not a traditional datastore; it is just a capacity quota. So when you “mount” a VVol datastore, you aren’t really performing a traditional mounting operation, as there is no underlying physical storage to address during the mount. Instead of mounting some storage device, you are mounting what is called a storage container. This is the metadata object that represents a certain amount of capacity that can be provisioned from a given array. An array can have more than one storage container, for reasons of multi-tenancy or the like.

In a VMFS world, when you go to create a new datastore, you pass it the serial number of the storage device you want to format with VMFS. You know that serial because, well, you created the storage device. When you “mount” a VVol datastore, instead of a device serial, you supply the storage container UUID. It comes in the form of vvol:e0ad83893ead3681-b1b7f56a45ff64f1. Of course the exact characters will vary.
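As a side note (my own addition, not something this post states): if a VVol datastore backed by the container is already mounted somewhere, the container UUID can be read back through the vSphere API with PowerCLI. The datastore name below is a placeholder, and the property path assumes the datastore's Info object is the API's VvolDatastoreInfo type.

```powershell
# Minimal sketch: read the storage container ID from an already-mounted VVol datastore.
$ds = Get-Datastore -Name "FlashArray-VVolDS"   # hypothetical datastore name

if ($ds.Type -eq "VVOL") {
    # For VVol datastores, Info is a VvolDatastoreInfo object; ScId is the storage container UUID.
    $ds.ExtensionData.Info.VvolDS.ScId
}
```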

Continue reading “Generating the default VVol Storage Container ID”

Mounting a VVol Datastore with PowerCLI

I’ve been making a lot of updates to my PowerShell module around VVols recently, and this was the last “table stakes” cmdlet I wanted to add. There are certainly more to come, but now we definitely have the basics. In the 1.2.2.1 release of the PowerShell module I added a cmdlet called Mount-PfaVvolDatastore.

As of today we support a single VVol datastore, though we are working on adding support for more than one.
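Usage looks roughly like the sketch below. The cmdlet name is from the post, but the parameter names and the FlashArray connection cmdlet are assumptions on my part, so verify with Get-Help Mount-PfaVvolDatastore -Full against your module version.

```powershell
# Hedged usage sketch; parameter names are assumptions, check Get-Help for the real ones.
Connect-VIServer -Server "vcenter.example.com" -Credential (Get-Credential)
$fa      = New-PfaArray -EndPoint "flasharray.example.com" -Credentials (Get-Credential) -IgnoreCertificateError
$cluster = Get-Cluster -Name "WorkloadCluster"   # hypothetical cluster name

# Mounts the array's VVol datastore (its storage container) to every host in the cluster.
Mount-PfaVvolDatastore -Flasharray $fa -Cluster $cluster -DatastoreName "FlashArray-VVolDS"
```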

Continue reading “Mounting a VVol Datastore with PowerCLI”