A few months back I was reviewing our VMware training for our field and (after some direct feedback) realized it wasn’t really doing what our field needed. It was too nuts-and-bolts technical, which isn’t what most of the field actually needed. There was more of a desire to understand the value of the VMware product, the value of the integration, and the value that we as Pure can bring to it.
The ones that wanted/needed more technical training could get that as needed.
In short, what they wanted was to be able to handle the “I’m staffing a booth at a conference and someone asks me about vRealize Orchestrator” scenario. Not being an expert in the product, how do I quickly understand the value, so I know whether I am chasing the right product/solution and should inquire further?
I am working on my PowerShell module for Pure/VMware operations, and one of the cmdlets I am writing is for growing a VMFS. When perusing the internet, I could not find a lot of direct information on how to actually do this; there is no default cmdlet for it.
The illustrious Luc Dekens talks about this problem here and even provides a great module for doing this:
If you just want to run a quick script, you can use that. If you want to write it yourself, here is a quick overview of what you need to do. I am talking about a specific use case:
I have a datastore on one extent and that extent exists on a LUN (or device or volume or whatever you want to call it) on an array. That LUN has been grown on the array.
I want to grow the VMFS to use the new capacity and not create a new extent, just grow it.
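For that use case, a minimal PowerCLI sketch of the flow might look like the one below. It assumes you are already connected with Connect-VIServer and that the underlying device has already been grown on the array; $datastoreName is a placeholder, and the heavy lifting is done by the vSphere API methods QueryVmfsDatastoreExpandOptions and ExpandVmfsDatastore.

```powershell
# Minimal sketch: grow an existing VMFS extent in place (no new extent).
$ds  = Get-Datastore -Name $datastoreName
$esx = Get-VMHost -Datastore $ds | Select-Object -First 1

# Rescan so the host sees the new size of the underlying device
Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null

# Ask the host how the datastore can be expanded, then grow the existing extent
$dsSystem      = Get-View -Id $esx.ExtensionData.ConfigManager.DatastoreSystem
$expandOptions = $dsSystem.QueryVmfsDatastoreExpandOptions($ds.ExtensionData.MoRef)
if ($expandOptions) {
    $dsSystem.ExpandVmfsDatastore($ds.ExtensionData.MoRef, $expandOptions[0].Spec) | Out-Null
}
else {
    Write-Warning "No expandable free space found behind $($ds.Name)"
}
```

Luc’s module handles all of this far more robustly; the above is just the bare-bones version of the steps.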
My last post in this series was about getting a VVol UUID and figuring out what volume on a FlashArray it is. But what about the step before that? If I have a guest OS file system how do I even figure out what VMDK it is?
There is a basic option, which can potentially be used: correlating the bus ID and the unit ID of the device in the guest and matching them to what VMware displays for the virtual disks.
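To make that concrete, a rough sketch of the correlation is below. The Win32_DiskDrive query runs inside the Windows guest and the PowerCLI portion runs against vCenter; $vmName is a placeholder, and the idea is simply to line up the guest’s SCSI bus/target IDs with the virtual SCSI controller bus number and disk unit number that VMware reports.

```powershell
# Inside the Windows guest: how the guest numbers the disks it sees
Get-CimInstance Win32_DiskDrive |
    Select-Object Index, SCSIBus, SCSITargetId, @{N='SizeGB';E={[math]::Round($_.Size / 1GB)}}

# From PowerCLI: the controller bus number and unit number of each virtual disk
$vm = Get-VM -Name $vmName
foreach ($hd in Get-HardDisk -VM $vm) {
    $ctrl = $vm.ExtensionData.Config.Hardware.Device |
        Where-Object { $_.Key -eq $hd.ExtensionData.ControllerKey }
    [pscustomobject]@{
        Disk       = $hd.Name
        BusNumber  = $ctrl.BusNumber
        UnitNumber = $hd.ExtensionData.UnitNumber
        CapacityGB = $hd.CapacityGB
    }
}
```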
But that always felt somewhat inexact to me. What if you accidentally look at the wrong VM object and then do something to a volume you do not mean to? Or the opposite?
Not ideal. Luckily there is a more exact approach. I will focus this particular post on Windows. I will look at Linux in an upcoming one.
One of the first technical benefits users can enjoy around VVols is the use of snapshotting. VMware-created snapshots of VMs have always been a point of contention, which has severely limited their usability (see a post I did around the performance impact of them here).
With VVols, when you right-click on a VM and choose take snapshot, VMware does not create the performance-impacting delta VMDK files that were traditionally used, but instead VMware entirely offloads this process to the array. So the array creates the snapshots and VMware just tracks them.
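From the VMware side nothing about the workflow changes; a plain PowerCLI snapshot call like the sketch below still works, the array just ends up holding the snapshot data instead of a delta VMDK chain ($vmName and the snapshot name are placeholders).

```powershell
# Same managed-snapshot workflow as ever; on a VVol datastore the array does the work.
$vm = Get-VM -Name $vmName
New-Snapshot -VM $vm -Name 'before-patching'

# Removing it is also just the normal call; no delta VMDK to consolidate on the VMware side.
Get-Snapshot -VM $vm -Name 'before-patching' | Remove-Snapshot -Confirm:$false
```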
But since VMs are now a collection of individual volumes on the array (a VVol is just an array volume), you can also snapshot and restore individual virtual disks directly on the array.
About four years ago, we (Pure Storage) released support for our asynchronous replication and Site Recovery Manager by releasing our storage replication adapter. In late 2017, we released our support for active-active synchronous replication called ActiveCluster.
Until SRM 6.1, SRM only supported active-passive replication, so a test failover or a failover would take a copy of the source VMFS (or RDM) on the target array and present it, rescan the ESXi environment, resignature the datastore(s), then register and power on the VMs in accordance with the SRM recovery plan.
The downside to this of course is that the failover is disruptive–even if there was not actually a disaster that was the impetus for the failover. But this is the nature of active-passive replication.
In short, VMware only supported one mechanism of LUN ID addressing, which is called “peripheral”. The SAM (SCSI Architecture Model) generally encourages a different mechanism called “flat” addressing, especially for larger LUN IDs (256 and above). If a storage array used flat addressing, ESXi would not see LUNs from that target. This is often why ESXi could not see LUN IDs greater than 255, as arrays would typically use flat addressing for LUN IDs at or above that number.
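Purely as an illustration of the addressing difference (this is the SAM encoding of the 16-bit first-level LUN field, not anything ESXi-specific), a sketch like the following shows why peripheral addressing tops out at 255 while flat addressing does not:

```powershell
function Get-PeripheralLun([int]$LunId) {
    # Peripheral addressing (address method 00b): the LUN number lives in the low byte, so 0-255 only
    if ($LunId -gt 255) { throw 'Peripheral addressing only encodes LUN IDs 0-255' }
    '0x{0:X4}' -f $LunId
}
function Get-FlatLun([int]$LunId) {
    # Flat space addressing (address method 01b): a 14-bit LUN number beneath the method bits
    '0x{0:X4}' -f (0x4000 -bor $LunId)
}

Get-PeripheralLun 5    # 0x0005
Get-FlatLun 5          # 0x4005
Get-FlatLun 256        # 0x4100 - representable in flat addressing, not in peripheral
```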
VVols have been gaining quite a bit of traction of late, which has been great to see. I truly believe it solves a lot of problems that were traditionally faced in VMware environments and infrastructures in general. With that being said, as things get adopted at scale, a few people inevitably run into some problems setting it up.
The main issues have revolved around the fact that VVols are presented and configured in a different way than VMFS, so when someone runs into an issue, they often do not know exactly where to start.
The issues usually come down to one of the following places:
One of the great benefits of VVols is the fact that virtual disks are just volumes on your array. This means if you want to do some data management with a virtual disk, you just need to work directly on the volume that corresponds to it.
The question is what virtual disk corresponds to what volume on what array?
Well, some of that question is very array-dependent (are you using Pure Storage or something else?), but the first steps are always the same. Let’s start there for the good of the order.
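As a minimal sketch of those first, array-agnostic steps (assuming PowerCLI and a VM whose disks live on a VVol datastore; $vmName is a placeholder), the VVol UUID of each virtual disk can be read from the disk’s backing:

```powershell
# BackingObjectId is only populated for VVol-backed virtual disks.
$vm = Get-VM -Name $vmName
Get-HardDisk -VM $vm |
    Select-Object Name, Filename,
        @{Name = 'VvolUuid'; Expression = { $_.ExtensionData.Backing.BackingObjectId }}
```

Mapping that UUID back to an actual volume name is where the array-specific part begins.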