It’s that time of the year again…again! Pure Storage is back at VMworld in Barcelona.
Before we get into what’s happening in Barcelona, let’s recap what happened at the US conference, since sometimes the best way to look to the future is to analyze the past.
Also check out this panel session I did with Rubrik at VMworld US.
VMware Cloud on AWS was once again a major topic. It is getting more and more attention and is something we are keeping a close eye on. The most important step here for Pure is our new offering, now fully GA, called Cloud Block Store: our FlashArray software (Purity) now runs fully in AWS. See my posts here and here on that.
VMware Cloud Foundation. This is the basis for pretty much all automated VMware stacks. SDDC Manager (the management point of VCF) allows you to deploy vCenter, NSX, vRealize (etc. etc.) and manage their lifecycles. It provides the ability to create one “management” domain, where all of the VMware services are deployed, as well as one or more “workload” domains. A workload domain is basically a new vCenter Server, which gets hooked in via Enhanced Linked Mode (ELM). When deploying a workload domain, the storage options used to be only vSAN or NFS; you could then add block storage after the fact. In the latest release, you can now choose Fibre Channel storage as an option. Check out our KB here on it. I expect to hear more about this in Barcelona.
Containers, K8s, and more containers. VMware’s work since the Heptio acquisition has not slowed down. I would be fairly comfortable saying that the announcements of Project Pacific and Project Tanzu were the talk of the town during, and certainly after, VMworld. I have no doubt this will bubble up more in Barcelona. I think the use case around First Class Disks and vVols is particularly intriguing.
vRealize Automation Cloud and vRealize Automation 8.0. vRealize Automation 8.0 is now GA. There are two major things to unpack here. First, VMware Cloud Automation Services was renamed to vRealize Automation Cloud. This in and of itself doesn’t mean much (VMware loves to rename things). Second, and what is actually important: vRealize Automation 8 is now entirely based on the features, design, and architecture of vRealize Automation Cloud, meaning that what vRA Cloud offers is what vRA on-premises offers (same tools, integrations, and features). This makes choosing between the two easier; the main question to ask is, do I want to host it, or do I want VMware to? I expect details on this to be expanded in Barcelona.
vVols. Did you think I wouldn’t bring this up?! Of course I would. vVols is coming back in a big way, in no small part due to VMware’s renewed push on vVols in their products and with their partners. The automation, integration, and benefits of vVols make more and more sense these days. VMware gets that, and so do the storage partners. A major topic around vVols is Site Recovery Manager support; expect to see more vendors talking about that as they furiously work on vVol replication support.
A few months back I published new PowerShell cmdlets for installing the Pure Storage vSphere Client Plugin: Get-PfavSpherePlugin and Install-PfavSpherePlugin. These work quite well and have seen a fair amount of use so far. Another area we are certainly investing in right now is vRealize Orchestrator, continuing to enhance our plugin there and filling in any gaps around workflows and actions, especially for initializing an environment.
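If you haven’t tried them, usage is straightforward. Here’s a minimal sketch (the vCenter name is a placeholder, and invocation details may vary by module version):

```powershell
# Connect to the target vCenter first (standard PowerCLI).
Connect-VIServer -Server "vcenter.example.com"

# List the plugin versions available to install:
Get-PfavSpherePlugin

# Install (or upgrade) the plugin in the connected vCenter:
Install-PfavSpherePlugin
```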
One of those gaps was installing the vSphere Plugin. A common pattern we have seen is that day-0 configuration is done through vRO, but day-2 work (deploying VMs, datastores, etc.) is done in vCenter, so the vSphere Plugin comes in handy there. So how do I install it from vRO?
This doesn’t come up very often these days, but every once in a while it does, and every time it does I look to see if we have documentation on it, and there never seems to be. After writing this post I did find a forum post where my friend Drew answers it too. Anyway, let’s quickly explain the situation.
Most block vendors these days tell customers to change the Round Robin path switching frequency for their storage in ESXi from the default of 1,000 down to 1. This makes ESXi switch logical paths for a given device after every I/O instead of after every 1,000. The reason I say this doesn’t come up much anymore is that in modern versions of ESXi (6.0 express patch+, 6.5 U1+, and 6.7+) we (Pure) have rules in ESXi that make sure this is set by default without any user configuration. Many other vendors do as well.
Anyway, when using VMware tooling to check whether a device is configured properly, the setting can read out differently depending on how it was set.
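For example, here is one way to read the current setting with PowerCLI’s Get-EsxCli interface. This is a hedged sketch; the host name and device NAA below are placeholders:

```powershell
# Grab the esxcli (V2) interface for a host; names here are placeholders.
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# Read the Round Robin configuration for a single device.
# An I/O operations limit of 1 means paths switch after every I/O.
$esxcli.storage.nmp.psp.roundrobin.deviceconfig.get.Invoke(
    @{device = "naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx"}
)
```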
Not long ago I posted about the initial release of our vSphere Plugin that supports the HTML5 UI. The main problem, though, was that it did not yet support the VVol features we put in the original Flash/Flex-based plugin.
Accordingly, the most common question I received was “when are you adding VVol support to this one?” And my response was “Soon! We are working on it.”
I recently saw a post on Reddit about pulling a VM storage policy from a VM using vRO. The poster stated that it was not possible, and said that VMware support had confirmed as much.
Now I don’t know when they asked VMware support; if it was two years or so ago, that was true. But it is certainly not true now. Though I will admit, it is not super intuitive to figure out unless you know where to look. Here is how you do it.
Btw, I only tested this with VVol storage policies, but it really should not matter at all.
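As an aside, if you just need the policy and aren’t wedded to vRO, PowerCLI can return it directly. A minimal sketch (the VM name is a placeholder):

```powershell
# Returns the SPBM storage policy configuration associated with a VM:
Get-SpbmEntityConfiguration -VM (Get-VM -Name "MyVVolVM")

# The same cmdlet works at the virtual disk level too:
Get-HardDisk -VM (Get-VM -Name "MyVVolVM") | Get-SpbmEntityConfiguration
```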
A VVol datastore is not a file system, so it is not a traditional datastore; it is just a capacity quota. So when you “mount” a VVol datastore, you aren’t really performing a traditional mount operation, because there is no underlying physical storage to address during the mount. Instead of mounting a storage device, you are mounting what is called a storage container: the metadata object that represents a certain amount of capacity that can be provisioned from a given array. An array can have more than one storage container, for multi-tenancy or other reasons.
In a VMFS world, when you go to create a new datastore, you pass it the serial number of the storage device you want to format with VMFS. You know that serial because, well, you created the storage device. When you “mount” a VVol datastore, instead of a device serial, you supply the storage container UUID. It comes in the form of vvol:e0ad83893ead3681-b1b7f56a45ff64f1 (the characters, of course, will vary).
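In vanilla PowerCLI, that container ID is exactly what you hand to New-Datastore with the -Vvol switch. A hedged sketch (the host name is a placeholder, and the SC ID is the example from above; check the New-Datastore docs for your PowerCLI version):

```powershell
# Mount a VVol datastore by pointing a host at a storage container ID.
New-Datastore -VMHost (Get-VMHost "esxi01.example.com") `
              -Name "FlashArray-vVolDS" `
              -Vvol `
              -ScId "vvol:e0ad83893ead3681-b1b7f56a45ff64f1"
```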
I’ve been making a lot of updates to my PowerShell module around VVols recently, and this was the last “table stakes” cmdlet I wanted to add. There are certainly more to come, but now we definitely have the basics. In the latest release of the PowerShell module I added a cmdlet called Mount-PfaVvolDatastore.
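Usage looks something like the following. The parameter names here are assumptions for illustration, so verify them with Get-Help Mount-PfaVvolDatastore -Full for your module version:

```powershell
# Hedged sketch; parameter names are assumptions, verify with Get-Help.
# Connect to the FlashArray with the module's connection cmdlet:
$fa = New-PfaConnection -EndPoint "flasharray.example.com" `
                        -Credentials (Get-Credential) -DefaultArray

# Mount the array's VVol datastore to a cluster:
Mount-PfaVvolDatastore -Flasharray $fa -Cluster (Get-Cluster "Prod")
```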
As of today we support a single VVol datastore–though we are working on adding support for more than one.
Registering VASA providers is the first step in setting up VVols for a given vCenter, so automating this process is something that might be of interest to folks. We currently have this process in our vSphere Plugin, as well as in our vRO plugin, and of course you can do it manually. What about PowerShell? Well, we have that too!
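PowerCLI has a native cmdlet for this, New-VasaProvider. A minimal sketch (the array address is a placeholder, and the port assumes Pure’s VASA providers listen on 8084; verify for your environment):

```powershell
# Register an array's VASA provider with the connected vCenter.
# URL and port are assumptions for illustration; check your array's docs.
$creds = Get-Credential   # array credentials for VASA registration
New-VasaProvider -Name "FlashArray-CT0" `
                 -Credential $creds `
                 -Url "https://flasharray-ct0.example.com:8084"
```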
One of the major advantages we have seen with VVols is that they make a virtual disk a first class citizen on the array. We can restore, copy, and replicate them (and their VMs) the way storage objects were meant to be restored, copied, and replicated.
One thing about virtual disks, though, is that by default they are not first class citizens in vSphere, VVols or otherwise. To create one, it has to be associated with a VM.
To retrieve one in PowerCLI, for example, Get-HardDisk requires a datastore or a VM to return a result:
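For example (the VM and datastore names below are placeholders):

```powershell
# Get-HardDisk must be scoped to a VM or a datastore:
Get-HardDisk -VM (Get-VM -Name "SQL-VM-01")                   # disks attached to one VM
Get-HardDisk -Datastore (Get-Datastore "FlashArray-vVolDS")   # disks on one datastore
```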