Note: This is a guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.
As we’ve covered in past posts, VMware Cloud Foundation (VCF) offers immense advantages to VMware users in terms of simplifying day 0 and day 1 activities and streamlining management operations within the vSphere ecosystem. Today, we dive into how to use Pure Storage’s leading vVols implementation as Supplemental storage with your Management and Workload Domains.
First though, to set the table, a brief description of the differences between Principal Storage and Supplemental Storage and how they relate to VCF is in order. Fortunately, it is very easy to distinguish between the two storage types:
Principal Storage is any storage type that you can connect to your Workload Domain as part of the setup process within SDDC Manager. Today, that consists of vSAN, NFS, and VMFS on Fibre Channel, pictured below. We’ve shown how to use VMFS on FC previously.
Supplemental Storage simply means that you connect your storage system to a Workload Domain after it has been deployed. Examples of this storage type today include iSCSI and the focus of this blog: vVols.
Not long ago I posted about the initial release of our vSphere Plugin that supports the HTML5 UI. The main problem, though, was that it did not yet support the vVol functionality we built into the original Flash/Flex-based plugin.
Accordingly, the most common question I received was “when are you adding vVol support to this one?” And my response was “Soon! We are working on it.”
I recently saw a post on Reddit about pulling a VM storage policy from a VM using vRO; the poster stated that it was not possible and that VMware support had confirmed as much.
Now, I don’t know when they asked VMware support, and if it was two years or so ago, then that was true. But it is certainly not true now. Though I will admit, it is not super intuitive to figure out unless you know where to look. Here is how you do it.
Btw, I only tested this with vVol storage policies, but it really should not matter at all.
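The vRO workflow itself is what the Reddit thread was about, but for quick reference, here is a minimal PowerCLI sketch of the same lookup using the built-in SPBM cmdlets (the VM name is a placeholder):

```powershell
# Minimal PowerCLI sketch: retrieve the storage policy assigned to a VM via SPBM.
# Assumes an existing Connect-VIServer session; "my-vvol-vm" is a placeholder name.
$vm = Get-VM -Name "my-vvol-vm"

# Policy assigned to the VM home object
Get-SpbmEntityConfiguration -VM $vm | Select-Object Entity, StoragePolicy

# Policy assigned to each of the VM's virtual disks
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) | Select-Object Entity, StoragePolicy
```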
A vVol datastore is not a file system, so it is not a traditional datastore; it is just a capacity quota. So when you “mount” a vVol datastore, you are not really performing a traditional mount operation, because there is no underlying physical storage to address during the mount. Instead of mounting a storage device, you are mounting what is called a storage container. This is the metadata object that represents the amount of capacity that can be provisioned from a given array. An array can have more than one storage container, for reasons of multi-tenancy or whatever your design calls for.
In the VMFS world, when you go to create a new datastore, you pass it the serial number of the storage device you want to format with VMFS. You know that serial because, well, you created the storage device. When you “mount” a vVol datastore, instead of a device serial you supply the storage container UUID. It comes in the form of vvol:e0ad83893ead3681-b1b7f56a45ff64f1 (the characters will of course vary).
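To make that concrete, here is a hedged sketch of the underlying vSphere API call, driven from PowerCLI. The host name, datastore name, and container UUID are placeholders; the Pure PowerShell module (and the vSphere Client) wraps this same call for you:

```powershell
# Sketch: "mount" a storage container as a vVol datastore on one host via the vSphere API.
# Host name, datastore name, and container UUID below are placeholders.
$esxi     = Get-VMHost -Name "esxi-01.example.com"
$dsSystem = Get-View -Id $esxi.ExtensionData.ConfigManager.DatastoreSystem

# The spec takes a datastore name plus the storage container UUID instead of a device serial
$spec      = New-Object VMware.Vim.HostDatastoreSystemVvolDatastoreSpec
$spec.Name = "MyVvolDatastore"
$spec.ScId = "vvol:e0ad83893ead3681-b1b7f56a45ff64f1"   # as reported by the array's VASA provider

# Creates ("mounts") the vVol datastore on this host
$dsSystem.CreateVvolDatastore($spec)
```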
I’ve been making a lot of updates to my PowerShell module around vVols recently, and this was the last “table stakes” cmdlet I wanted to add. There are certainly more to come, but now we definitely have the basics. In a recent release of the PowerShell module I added a cmdlet called Mount-PfaVvolDatastore.
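I won’t repeat the full syntax here since it can change between module releases; the safest way to see the current parameters is to ask PowerShell itself:

```powershell
# Check the syntax and examples shipped with your installed module version
Get-Command Mount-PfaVvolDatastore -Syntax
Get-Help Mount-PfaVvolDatastore -Examples
```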
As of today we support a single vVol datastore, though we are working on adding support for more than one.
Registering VASA providers is the first step in setting up vVols for a given vCenter, so automating this process might be of interest to folks. We currently have this process in our vSphere Plugin, as well as in our vRO plugin, and of course you can do it manually. What about PowerShell? Well, we have that too!
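As a hedged sketch, PowerCLI’s built-in SPBM cmdlets can register the providers directly. The names and URLs below are placeholders; for a FlashArray you register a VASA provider for each controller:

```powershell
# Register the array's VASA providers with vCenter (one per controller).
# Assumes an existing Connect-VIServer session; names and URLs are placeholders.
$cred = Get-Credential   # an array user authorized for VASA

New-VasaProvider -Name "flasharray-ct0" -Url "https://10.0.0.10:8084" -Credential $cred
New-VasaProvider -Name "flasharray-ct1" -Url "https://10.0.0.11:8084" -Credential $cred

# Verify what vCenter now sees
Get-VasaProvider
```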
One of the major advantages we have seen with vVols is that they make a virtual disk a first-class citizen on the array. We can restore, copy, and replicate virtual disks (and their VMs) as storage objects were meant to be restored, copied, and replicated.
One thing about virtual disks, though, is that by default they are not first-class citizens in vSphere, vVols or otherwise. To create one, it has to be associated with a VM.
To retrieve one in PowerCLI (for example), Get-HardDisk requires a datastore or a VM to return a result.
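A quick illustration, with placeholder VM and datastore names:

```powershell
# Get-HardDisk needs a container object to scope the search
Get-HardDisk -VM (Get-VM -Name "my-vvol-vm")                     # disks attached to one VM
Get-HardDisk -Datastore (Get-Datastore -Name "MyVvolDatastore")  # disks that live on one datastore

# There is no parameter-less call that simply returns every virtual disk in the environment.
```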
This post is somewhat specific to Pure Storage; the cmdlets of course are universal, but the behaviors may not correlate to your storage array. So if you are using vVols on a non-Pure array, certainly consult your vendor.
Furthermore, this is certainly specific to PowerCLI when it comes to the commands. That being said, the fundamentals of how this works with Pure are common across orchestration tools, so you should be able to apply this information elsewhere, though of course the commands and syntax will differ.
The FlashArray implementation of Virtual Volumes surfaces VMs on the FlashArray as standard volume groups, with each volume group named after its virtual machine. Each vVol is then added to or removed from the volume group as it is provisioned or deleted. These objects, though, are fairly flexible; we do not use the volume group as the unique identifier of the virtual machine. Internally we use key/value tags for that.
The benefit of that design is that you can delete the volume groups, rename them, or add and remove other volumes, giving you the flexibility to group related VMs or rearrange things for whatever your use case might be, without breaking our vVol implementation.
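If you want to correlate a given virtual disk with its volume on the array yourself, one way to start from the vSphere side is the vVol UUID reported in the disk backing. This is only a sketch: the VM name is a placeholder, and the BackingObjectId property is only populated for vVol-backed disks:

```powershell
# Sketch: list a VM's disks with the vVol UUID from the disk backing, which can then be
# matched against the corresponding volume on the FlashArray. "my-vvol-vm" is a placeholder.
$vm = Get-VM -Name "my-vvol-vm"
Get-HardDisk -VM $vm |
    Select-Object Name, @{ Name = "VvolUuid"; Expression = { $_.ExtensionData.Backing.BackingObjectId } }
```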