This is certainly a post that has been a long time coming. As many customers are probably aware, we only supported one vVol datastore per FlashArray from the inception of our support. Unlike with VMFS, this limit doesn't hinder you as much as one might think: the datastore can be huge (up to 8 PB), features are granular to the vVol (virtual disk), and a lot of the adoption was driven by the VMware team, who often didn't really need multiple datastores.
Sure.
Before you start arguing: of course there are reasons for multiple datastores, and it is something we needed to do. But as with our overall design of how we implement vVols on FlashArray (and, well, any feature), we wanted to think through our approach and how it might affect later development. We quickly came to the conclusion that leveraging pods as storage containers made the most sense. They serve a similar purpose to a vVol datastore: providing feature control, a namespace, capacity tracking, and more as we continue to develop them. Repurposing these existing constructs on the array keeps array management simpler: fewer custom objects, less repeated work, and so on.
This is going to be broken up into two parts: first, a live migration, where no VMs get powered off during the migration; second, a migration where you temporarily power off the VMs attached to the SCSI datastore.
Why would you want to do it one way or another?
Pros of live migration:
No VM downtime
Simpler configuration changes and overlap, with less to go wrong or mess up
Pros of powering off VMs:
The total migration time will be significantly shorter because no data has to be moved. With a live migration, every block must be copied by the host, since VMware currently doesn't support XCOPY (even on the same array) for NVMe-oF
Great, you’ve decided on a live migration for your VMs because you don’t care about how long it takes; you just want to minimize downtime of your VMs as much as possible. If you haven’t already, you’ll need to follow the guides Pure Storage has for setting up NVMe-oF in your environment.
Once you’ve configured NVMe-oF in your environment, you’ll need to create the namespace (volume), connect it to the appropriate host group, create the NVMe-oF datastore in vCenter and finally storage vMotion your VMs from the SCSI datastore to the NVMe datastore.
Create the Volume
From a FlashArray perspective, this is identical to SCSI except for slightly different terms and labels. Cody wrote a nice article explaining the differences. Log into your FlashArray, select (1) Storage, then (2) Volumes, then click the (3) + on the right-hand side of the GUI.
In the window that pops up, populate a (1) Name for the namespace (volume), give it a (2) Provisioned Size, then click (3) Create.
Note the volume serial number by going to (1) Storage, then (2) Volumes, finding your (3) Volume in the list, then (4) clicking on its hyperlinked name.
On the next window, note the Serial of the volume. We will use this later in vCenter to validate that we are connecting the right namespace.
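If you'd rather script this step, the create-and-note-the-serial flow is only a few lines with the Pure Storage PowerShell SDK. A minimal sketch, assuming the PureStoragePowerShellSDK2 module; the array endpoint, volume name, and size are illustrative, not from this walkthrough:

# Sketch only: endpoint, volume name, and size are illustrative.
Import-Module PureStoragePowerShellSDK2

# Connect to the FlashArray (prompts for credentials)
$FlashArray = Connect-Pfa2Array -Endpoint 'flasharray.example.com' -Credential (Get-Credential) -IgnoreCertificateError

# Create the namespace (volume); Provisioned is in bytes (4TB expands to bytes)
New-Pfa2Volume -Array $FlashArray -Name 'nvme-datastore-01' -Provisioned 4TB

# Note the serial so it can be matched in vCenter later
(Get-Pfa2Volume -Array $FlashArray -Name 'nvme-datastore-01').Serial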
Connect The Volume To the Appropriate Host Group
Still in the FlashArray GUI, go back to (1) Storage, select (2) Hosts, then select the (3) Host Group you have created for your NVMe-oF hosts. In this case, I am setting this up for NVMe-FC but the steps will be the same for NVMe-RoCE after you have followed the previously linked KB articles.
Next, click the three vertical dots (this menu is usually called a kebab, not a hamburger) and select Connect.
For the last step in the FlashArray GUI, select the (1) Namespace (volume) you created before then click (2) Connect.
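This step can be scripted too; a small sketch under the same assumptions as before (the host group name is illustrative, and the SDK2 parameter names here are to the best of my recollection):

# Connect the namespace (volume) to the NVMe-oF host group
New-Pfa2Connection -Array $FlashArray -HostGroupNames 'nvme-hostgroup' -VolumeNames 'nvme-datastore-01'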
Create The NVMe-oF Datastore
Switching over to vCenter, we'll first want to create a datastore from the namespace that we've just presented to our host group. This process is easier than with SCSI datastores because you do not have to rescan the storage adapters; all you need to do is create a datastore on top of the NVMe namespace that is already present.
(1) Right click on the vSphere cluster you’ve presented the namespace to, hover over (2) Storage, then click (3) New Datastore.
Specify a (1) Name for your datastore, (2) select a host that the namespace was presented to, select the (3) namespace from the list, and click (4) Next. Validate that the serial shown in the Name column matches the serial number of the namespace (volume) you noted earlier in the FlashArray GUI.
Validate that the hosts are connected to your newly created NVMe-oF datastore by going to the (1) Storage tab, selecting the (2) Datastore Name, and clicking on the (3) Hosts tab. If anything looks incorrect here (not all hosts from the cluster are connected, etc.), please review your NVMe-oF configuration for issues.
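You can also check this from PowerCLI rather than the GUI; a small sketch, where the host and datastore names are illustrative:

# NVMe namespaces surface with eui.* canonical names in ESXi; you can eyeball
# these against the volume serial you noted earlier in the FlashArray GUI
Get-VMHost 'esxi-01.example.com' | Get-ScsiLun -LunType disk |
    Where-Object { $_.CanonicalName -like 'eui.*' } |
    Select-Object CanonicalName, CapacityGB

# Confirm every host in the cluster sees the new datastore
Get-VMHost -Datastore (Get-Datastore 'NVMe-Datastore-01') | Select-Object Name, ConnectionState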
Storage vMotion the VMs from SCSI-backed Datastore(s) to NVMe-backed Datastore(s)
Staying in the vCenter GUI, select the (1) Hosts and Clusters tab, right click on the (2) VM you want to migrate from SCSI to NVMe then select (3) Migrate… from the list that pops up.
Select (1) Change storage only from the window that pops up and click (2) Next.
Select the (1) NVMe datastore you created before, then click (2) Next. Optionally, you can modify the storage policies for the VM and the virtual disk format.
Finally, verify the details of the migration and click (1) Finish.
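If you have a pile of VMs to move, the same migration can be scripted instead of clicking through the wizard per VM. A minimal PowerCLI sketch, with illustrative datastore names:

# Storage vMotion every VM on the SCSI datastore to the NVMe-oF datastore
$source = Get-Datastore 'SCSI-Datastore-01'
$target = Get-Datastore 'NVMe-Datastore-01'
Get-VM -Datastore $source | ForEach-Object { Move-VM -VM $_ -Datastore $target }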
And now wait until the VM has migrated to the NVMe-oF datastore. Migrations in general can be daunting, but luckily with NVMe-oF they can be extremely simple. Hopefully you found this helpful.
We are excited to announce the launch of the latest version of Pure Storage's remote vSphere plugin, 5.1.0. It includes a number of bug fixes PLUS a highly sought-after feature: vVol VM point-in-time (PiT) recovery!
Why am I excited about this feature?
With vVol PiT VM recovery, you can now easily recover an entire VM that was accidentally deleted (and eradicated), or restore the state of a VM back to a point in time captured by a snapshot you took directly from vCenter with Pure's vSphere plugin.
The requirements are Pure's vSphere remote plugin 5.1.0, plus Purity™ 6.2.6 or higher for PiT revert and for PiT VM undelete of a vVol VM whose FlashArray™ volumes have been eradicated from the FlashArray itself. If you're undeleting a vVol VM that has not been eradicated yet, that functionality is also present for Purity versions 6.1 and lower.
For PiT VM revert, you will also need to make sure that you have snapshots of all of the volumes associated with the vVol VM except swap: at least one data volume and one configuration volume.
For VM undelete before the volumes have been eradicated, you will need a snapshot of the vVol VM’s configuration volume.
For VM undelete after the vVol-backed VM has been eradicated, you’ll need a FlashArray protection group snapshot of all the VM’s data volumes, managed snapshots and configuration volumes.
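If you want to make sure that protection group snapshot exists ahead of need, you can take one on demand. A hedged sketch using the Pure Storage PowerShell SDK, reusing the $FlashArray connection from earlier; the protection group name is illustrative, and the group must already contain the VM's volumes:

# Take an on-demand snapshot of the protection group holding the vVol VM's volumes
New-Pfa2ProtectionGroupSnapshot -Array $FlashArray -SourceNames 'vvol-vm-pgroup'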
Rather than rehash what my teammate Alex Carver has put a lot of work into, I’m just going to link to the KB and videos he created:
Download the new plugin (part of Pure’s OVA), read the release notes and test out vVol PiT recovery today! Like a lot of things, it’s better to have some understanding of what’s happening and why before needing something that might be part of your recovery process. Please note that you can also upgrade in-place from 5.0.0 to 5.1.0 (and future remote plugin releases) by following this guide.
Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.
VMware Tanzu is a game-changing piece of technology for numerous reasons, but probably the most transformational piece of it is also the most apparent: it provides the capability for the vCenter admin to give resources to both consumers of traditional virtual machines and Kubernetes/DevOps users from the same set of compute hosts and storage. This consolidation means that the vCenter admin can more easily see what is being allocated where, as well as gain insight into which application(s) might be candidates to move from a virtual machine into a container-based environment.
A Tanzu deployment is composed of quite a few moving pieces, and a central one is durable storage made possible by persistent volumes. While container nodes and pods are ephemeral by nature (which is one of their major advantages), the data that they consume, produce, and manipulate must be performant, portable, and, often, saved. So there is obviously a different set of things we care about for persistent data versus the Kubernetes nodes that Tanzu runs in unison with here. For the remainder of this post we will show a couple of quick and easy ways you can change your persistent volumes to suit your application needs. There's a bit of work and some choices to be made around getting a Tanzu environment up and running in vSphere, and I'd encourage you to check out the VMware Tanzu User Guide on our Pure Storage support site or Cody's blog series for additional information.
With that being said, when a persistent volume is created via either dynamic or static provisioning, one of the first things the application developer needs to decide is what will happen to that volume and its data when the application that uses it is no longer needed. The default behavior for an SPBM policy/StorageClass assigned to a vSphere Namespace is to delete the volume, but with a simple kubectl patch command, the persistent volume can be retained for future use.
To make this change, first get the persistent volume name that you want to Retain/save:
$ kubectl get pv
NAME           CAPACITY   RECLAIM POLICY   STATUS
pvc-f37c39fd   5Gi        Delete           Bound
Next, apply this kubectl command line to it to switch the reclaim policy from Delete to Retain:
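$ kubectl patch pv pvc-f37c39fd -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'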
When we run the kubectl get pv command again, we can see it is set to Retain, so we are all set:
$ kubectl get pv
NAME           CAPACITY   RECLAIM POLICY   STATUS
pvc-f37c39fd   5Gi        Retain           Bound
If there is anything close to a certainty in the storage world, it is that the longer a volume exists, the more full of data it will become. This becomes even more of a certainty if a persistent volume is retained and reused across multiple application instances for increasing amounts of time. In the vSphere and Supervisor Cluster 7.0U2 release, VMware introduced the capability for online volume expansion. In previous versions, users had to unbind their persistent volume claim from a pod or node prior to resizing it (otherwise known as offline volume expansion); now they are able to accomplish the same operation without that step. This is a huge advantage, as offline expansion effectively required that the volume be taken out of service while additional space was added to it, which could lead to application downtime. With the online volume expansion enhancement, that annoyance goes away completely.
The online volume expansion operation is really simple to do. This time we find the persistent volume claim (which is basically the glue between the persistent volume and the application) that we need to expand:
$ kubectl get pvc
NAME              STATUS   VOLUME         CAPACITY
pvc-vvols-mysql   Bound    pvc-f37c39fd   5Gi
Now we run the following patch command against the PVC name we found above so that it knows to request additional storage for the persistent volume that it is bound to. In this case, we will ask to expand from 5Gi to 6Gi:
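$ kubectl patch pvc pvc-vvols-mysql -p '{"spec":{"resources":{"requests":{"storage":"6Gi"}}}}'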
After waiting a few moments for the expansion to complete, we look at the PVC to confirm that the additional space we asked for has been added:
$ kubectl get pvc
NAME              STATUS   VOLUME         CAPACITY
pvc-vvols-mysql   Bound    pvc-f37c39fd   6Gi
Taking a closer look at the PVC via the describe command shows, under the Events section, that the PV size was indeed increased while it remained mounted to the mysql-deployment pod:
$ kubectl describe pvc
Name:          pvc-vvols-mysql
Namespace:     default
StorageClass:  cns-vvols
Status:        Bound
Volume:        pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
               volumehealth.storage.kubernetes.io/health: accessible
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      6Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    mysql-deployment-5d8574cb78-xhhq5
Events:
  Type     Reason                      Age  From                                             Message
  ----     ------                      ---- ----                                             -------
  Warning  ExternalExpanding           52s  volume_expand                                    Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
  Normal   Resizing                    52s  external-resizer csi.vsphere.vmware.com          External resizer is resizing volume pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c
  Normal   FileSystemResizeRequired    51s  external-resizer csi.vsphere.vmware.com          Require file system resize of volume on node
  Normal   FileSystemResizeSuccessful  40s  kubelet, tkc-120-workers-mbws2-68d7869b97-sdkgh  MountVolume.NodeExpandVolume succeeded for volume "pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c"
Those are just a couple of the ways we can update our persistent volumes to do what we need them to do within a Tanzu deployment, and we have really just scratched the surface with these few examples. To see how to do more advanced operations, like migrating a persistent volume to a different Tanzu Kubernetes Cluster, please head over to our new Tanzu User Guide. Of course, it is also very important to mention that Portworx combined with Tanzu gives us even more features and functionality, like RBAC, automated backup and recovery, and a whole lot more. Getting deeper into how Portworx interoperates with Tanzu is what I'm working on next, so please stay tuned for some more cool stuff.
Happy New Year everyone! Let’s work to make 2021 a better year.
In furtherance of that goal, I have put out a few new vVol-related PowerShell cmdlets! So baby steps, I guess.
The following are the new cmdlets:
Basics:
Get-PfaVvolStorageArray
Replication:
Get-PfaVvolReplicationGroup
Get-PfaVvolReplicationGroupPartner
Get-PfaVvolFaultDomain
Storage Policy Management:
Build-PfaVvolStoragePolicyConfig
Edit-PfaVvolStoragePolicy
Get-PfaVvolStoragePolicy
New-PfaVvolStoragePolicy
Set-PfaVvolVmStoragePolicy
Now to walk through how to use them. This post will talk about the basics and the replication cmdlets. The next post will talk about the storage policy cmdlets.
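As a quick taste before the deeper walkthrough, here is roughly what getting started looks like. This is a hedged sketch: it assumes the PureStorage.FlashArray.VMware module and an existing vCenter session, and several of these cmdlets accept filtering parameters (datastore, array, VM) that I'm omitting here:

# Sketch only: module and session assumptions as noted above
Import-Module PureStorage.FlashArray.VMware
Connect-VIServer -Server 'vcenter.example.com'

# Basics: return the FlashArray(s) backing your vVol datastores
Get-PfaVvolStorageArray

# Replication: enumerate replication groups and fault domains
Get-PfaVvolReplicationGroup
Get-PfaVvolFaultDomain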
Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.
In VMware Cloud Foundation (VCF) version 4.1, vVols have taken center stage as a Principal Storage type available for Workload Domain deployments. This inclusion in one of VMware's premier products reinforces VMware's continued emphasis on vVols and all the benefits they enable. vVols with iSCSI is particularly exciting to us, as this is the first instance of the iSCSI protocol being supported as a Principal Storage type within VCF. We at Pure Storage are honored to have had a little bit of influence over this added functionality by serving as a design partner for this new feature, and we are confident you are going to like what you see!
Someone who is using VMFS datastores with VCF today might ask themselves 'why vVols?'. This is a great question, deserving of an expansive answer beyond this blog post. Fundamentally, though, using vVols enables you to fully use the FlashArray in the way it was intended. By leveraging VASA (vSphere APIs for Storage Awareness), you gain far more granular control and monitoring abilities over your individual VMs. Native FlashArray capabilities such as snapshots and replication are executed directly against the underlying array via policy-driven constructs. Further information on these and other benefits of vVols is available here.
Using vVols as Principal Storage is a lot like the methods VCF customers are used to for pre-existing Principal Storage options: image an ESXi host, apply a few prerequisites to it, commission it to SDDC Manager, and create Workload Domains. Deploying Workload Domains with VMware Cloud Foundation automates the process and takes all the guesswork out of deploying vCenter and NSX-T for modern use cases such as Kubernetes via Workload Management.
Stepping into some specifics for a moment, here's the process for using FlashArray iSCSI and vVols for VCF Workload Domains:
The most fundamental update to SDDC Manager to allow vVols is the capability to register a VASA Provider. In the screenshot below and the detailed steps that follow, we show an example of how you can add a FlashArray using another block protocol, Fibre Channel:
Provide a descriptive name for the VASA provider. It is recommended to use the FlashArray name and append it with -ct0 or -ct1 to denote which controller the entry is associated with.
Provide the URL for the VASA provider. This cannot be the management VIP of the array; instead, this field needs to be the management IP address associated with one of the controllers. The URL is also required to have the VASA port and version.xml appended to it. The format for the URL is: https://<IP of FlashArray controller>:8084/version.xml (see the quick reachability check after these steps).
Provide a FlashArray username with the arrayadmin role. The procedure for how to create such a user can be found here. While the pureuser account can be used, we recommend creating and using a separate FlashArray user for VASA operations.
Provide the password for the FlashArray username to be used.
Container Name must be Vvol container. Note that this value is case-sensitive.
For Container Type, select FC from the drop-down menu to use Fibre Channel.
Once all entries are completed, click Save.
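Before clicking Save, it can be worth confirming that the VASA endpoint actually answers at that URL. A quick hedged check from PowerShell (the controller IP here is illustrative, and -SkipCertificateCheck requires PowerShell 7 or later):

# Should return the provider's version.xml if the endpoint is reachable.
# 10.21.88.2 stands in for one controller's management IP (not the VIP).
(Invoke-WebRequest -Uri 'https://10.21.88.2:8084/version.xml' -SkipCertificateCheck).Content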
Obviously, there’s a lot more to share here so we will be expanding on this substantially in the very near future on our VMware Platform Guide site.
Rounding out this post, I'm happy to show a demo video of just how easy it is to deploy an FC+vVols-based Workload Domain with VMware Cloud Foundation.
Note: This is a guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.
As we've covered in past posts, VMware Cloud Foundation (VCF) offers immense advantages to VMware users in terms of simplifying day 0 and day 1 activities and streamlining management operations within the vSphere ecosystem. Today, we dive into how to use Pure Storage's leading vVols implementation as Supplemental Storage with your Management and Workload Domains.
First, though, to set the table, a brief description of the differences between Principal Storage and Supplemental Storage and how they relate to VCF is in order. Fortunately, it is very easy to distinguish between the two storage types:
Principal Storage is any storage type that you can connect to your Workload Domain as part of the setup process within SDDC Manager. Today, that consists of vSAN, NFS, and VMFS on Fibre Channel, pictured below. We've shown how to use VMFS on FC previously.
Supplemental Storage simply means that you connect your storage system to a Workload Domain after it has been deployed. Examples of this storage type today include iSCSI and the focus of this blog: vVols.
Not long ago I posted about the initial release of our vSphere Plugin that supports the HTML-5 UI. The main problem, though, was that it did not yet support the vVol functionality we put in the original Flash/Flex-based plugin.
Accordingly, the most common question I received was "when are you adding VVol support to this one?". And my response was "Soon! We are working on it".
I recently saw a post on Reddit about pulling a VM storage policy from a VM using vRO, in which it was stated that this was not possible, reportedly confirmed by VMware support.
Now, I don't know when they asked VMware support; if it was two years or so ago, then that was true. But it is certainly not true now. Though I will admit, it is not super intuitive to figure out unless you know where to look. Here is how you do it.
Btw, I only tested this with VVol storage policies, but it really should not matter at all.
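As an aside, if you just want this information interactively outside of vRO, PowerCLI's SPBM cmdlets expose the same thing, which makes a handy cross-check. A minimal sketch, with an illustrative VM name:

# Storage policy associated with the VM home object
Get-VM 'my-vvol-vm' | Get-SpbmEntityConfiguration

# Per-virtual-disk policies, if you need that granularity
Get-VM 'my-vvol-vm' | Get-HardDisk | Get-SpbmEntityConfiguration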
A VVol datastore is not a file system, so it is not a traditional datastore; it is just a capacity quota. When you "mount" a VVol datastore, you aren't really performing a traditional mounting operation, as there is no underlying physical storage to address during the mount. Instead of mounting some storage device, you are mounting what is called a storage container. This is the metadata object that represents the amount of capacity that can be provisioned from a given array. An array can have more than one storage container, for reasons of multi-tenancy or whatever.
In a VMFS world, when you go to create a new datastore, you pass it the serial number of the storage device you want to format with VMFS. You know that serial because, well, you created the storage device. When you "mount" a VVol datastore, instead of a device serial you supply the storage container UUID. It comes in the form of vvol:e0ad83893ead3681-b1b7f56a45ff64f1. Of course, the characters will vary a bit.
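If you're curious which storage containers an ESXi host already sees, and what their UUIDs are, esxcli will tell you. A hedged PowerCLI sketch (the host name is illustrative, and the exact output fields may vary by ESXi release):

# List the vVol storage containers visible to a host, with their UUIDs
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi-01.example.com') -V2
$esxcli.storage.vvol.storagecontainer.list.Invoke() | Select-Object Name, UUID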