The FlashArray implementation of Virtual Volumes surfaces VMs on the FlashArray as standard volume groups, with each volume group named after its virtual machine. Each VVol is added to or removed from the volume group as it is provisioned or deleted. These objects are fairly flexible, though: we do not use the volume group as a unique identifier for the virtual machine; internally we use key/value tags for that.
The benefit of that design is that you can delete a volume group, rename it, or add and remove other volumes from it. That gives you the flexibility to group related VMs (or move things around for whatever your use case may be) without breaking our VVol implementation.
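Since renaming is safe, one way to do it programmatically is through the FlashArray REST API. This is a hedged sketch only: the API version, the `vgroup` endpoint behavior, and the array address, token, and volume group names below are all assumptions/placeholders, not verified against your array.

```powershell
# SKETCH: rename a FlashArray volume group over REST 1.x.
# $array, the API token, and both vgroup names are placeholders.
$array = 'flasharray.example.com'
$body  = @{ api_token = '<your-api-token>' } | ConvertTo-Json

# Authenticate and keep the session cookie in $fa
Invoke-RestMethod -Method Post -Uri "https://$array/api/1.12/auth/session" `
    -Body $body -ContentType 'application/json' -SessionVariable fa | Out-Null

# Rename the volume group; the VVols inside it are unaffected
Invoke-RestMethod -Method Put -Uri "https://$array/api/1.12/vgroup/vvol-MyVM-vg" `
    -Body (@{ name = 'vvol-NewName-vg' } | ConvertTo-Json) `
    -ContentType 'application/json' -WebSession $fa
```

The same rename can of course be done in the FlashArray GUI or CLI; the point is only that nothing on the VMware side breaks when you do it.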
Continue reading Updating a volume group name on the FlashArray for VVols
In this post I will talk about using PowerCLI to run a test failover for VVol-based virtual machines. One of the many nice things about VVols is that in the VASA 3.0 API this process is largely automated for you. The SRM-like workflow of a test failover is built in, so the amount of storage-related PowerShell you have to write manually is fairly minimal.
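At a high level, the flow uses PowerCLI's SPBM replication cmdlets. A minimal sketch, assuming you are already connected to the recovery-site vCenter with Connect-VIServer and have exactly one target replication group (real environments will need filtering):

```powershell
# SKETCH: VVol test failover via SPBM replication cmdlets.
# Pick the recovery-side (target) replication group
$rg = Get-SpbmReplicationGroup | Where-Object { $_.State -eq 'Target' }

# Run the test failover; this returns the datastore paths of the
# recovered config VVols (.vmx files) you can then register and power on
$vmxPaths = Start-SpbmReplicationTestFailover -ReplicationGroup $rg

# ...register/power on test VMs from $vmxPaths, validate, then clean up:
Stop-SpbmReplicationTestFailover -ReplicationGroup $rg
```

The registration and power-on of the test VMs in between is ordinary PowerCLI (New-VM -VMFilePath, Start-VM), which is why the storage-specific portion stays so small.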
Continue reading PowerCLI and VVols Part VI: Running a Test Failover
I have released a new version of the VMware/Pure PowerShell module which can be automatically installed from the PowerShell Gallery.
Pure Storage PowerShell VMware Module
Updates in this release are focused on VVols: creating VVol snapshots, copying them, creating new disks from them, retrieving them, etc.
I wrote a blog post below on using some of the new cmdlets:
PowerCLI and VVols Part V: Array Snapshots and VVols
Continue reading Release of the Pure Storage VMware PowerShell Module
Another post in my series on VVols and PowerCLI, for previous posts see these:
This post will be about managing one-off snapshots with VVols on the FlashArray with PowerCLI.
One of the still semi-valid reasons I have seen DBAs say “I don’t want to virtualize because…” is that they have simple snapshot/recovery scripts for their physical servers that allow them to quickly restore DBs from snapshots. Doing this on VMFS requires A LOT of coordination with the VMware layer.
So they tell the VMware team, “Okay, I will virtualize, but I want RDMs.” Well, the VMware team says, “We’d rather die.”
…and around in circles we go…
VVols provide this benefit (easy snapshot management) while keeping the benefits of VMware features (vMotion, Storage vMotion, cloning, etc.) and avoiding the downsides of RDMs.
So let’s walk through that process.
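Because each VVol is just a FlashArray volume, the array side of a one-off snapshot is a single SDK call. A minimal sketch using the Pure Storage PowerShell SDK, assuming its 1.x cmdlets are installed and that the array address, token, and volume name are placeholders:

```powershell
# SKETCH: snapshot and restore a VVol directly on the FlashArray.
Import-Module PureStoragePowerShellSDK

# Connect to the array (placeholders for endpoint and token)
$fa = New-PfaArray -EndPoint 'flasharray.example.com' `
    -ApiToken '<your-api-token>' -IgnoreCertificateError

# Take a snapshot of the data VVol behind a virtual disk
New-PfaVolumeSnapshots -Array $fa -Sources 'vvol-MyVM-vg/Data-abc123' -Suffix 'pre-upgrade'

# Later, roll the volume back by overwriting it from the snapshot
New-PfaVolume -Array $fa -VolumeName 'vvol-MyVM-vg/Data-abc123' `
    -Source 'vvol-MyVM-vg/Data-abc123.pre-upgrade' -Overwrite
```

The DBA-friendly part is that the overwrite happens entirely on the array; the guest just needs the file system offline/online around it, with no VMware-layer snapshot coordination.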
Continue reading PowerCLI and VVols Part V: Array Snapshots and VVols
This is my first (but certainly not last) post on the new path selection policy option in vSphere 6.7 Update 1. In reality, this option was introduced in the initial release of 6.7, but it was not officially supported until Update 1.
So what is it? Well first off, see the official words from my colleague Jason Massae at VMware here:
Why was this PSP option introduced? Well, the most common path selection policy is NMP Round Robin, VMware’s built-in path selection policy for arrays that offer multiple paths. Round Robin is a great way to leverage the full performance of your array by actively using all of the paths simultaneously. Well…almost simultaneously.
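For reference, the new latency-based sub-policy is enabled per device through esxcli on the host. A sketch, with the device identifier truncated as a placeholder:

```
# SKETCH: switch a device's Round Robin sub-policy to latency mode (6.7 U1+)
esxcli storage nmp psp roundrobin deviceconfig set --type=latency --device=naa.624a9370<rest-of-id>

# Verify the current Round Robin configuration for that device
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.624a9370<rest-of-id>
```

The device must already be claimed by the Round Robin PSP for the `set` to apply.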
Continue reading Latency Round Robin PSP in ESXi 6.7 Update 1
I am working on my PowerShell module for Pure/VMware operations, and one of the cmdlets I am writing grows a VMFS. When perusing the internet, I could not find much direct information on how to actually do this, and there is no default cmdlet for it.
The illustrious Luc Dekens talks about this problem here and even provides a great module for doing this:
If you just want to run a quick script, you can use that. If you want to write it yourself, here is a quick overview of what you need to do. I am talking about a specific use case:
- I have a datastore on one extent, and that extent exists on a LUN (or device, or volume, or whatever you want to call it) on an array. That LUN has been grown on the array.
- I want to grow the VMFS to use the new capacity without creating a new extent; just grow it.
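The steps above can be sketched against the vSphere API's HostDatastoreSystem, which is what the lack of a native cmdlet forces you into. A hedged sketch, assuming a live vCenter connection and a placeholder datastore name, and skipping the error handling a real cmdlet would need:

```powershell
# SKETCH: expand a single-extent VMFS after its backing LUN was grown.
$ds     = Get-Datastore 'ds01'                       # placeholder name
$vmhost = $ds | Get-VMHost | Select-Object -First 1

# Rescan so the host sees the device's new size
$vmhost | Get-VMHostStorage -RescanAllHba | Out-Null

# Ask the host's datastore system how this datastore can be expanded
$dsSys = Get-View $vmhost.ExtensionData.ConfigManager.DatastoreSystem
$opts  = $dsSys.QueryVmfsDatastoreExpandOptions($ds.ExtensionData.MoRef)

# If an expand option exists, apply the first one (grows the existing extent)
if ($opts) {
    $dsSys.ExpandVmfsDatastore($ds.ExtensionData.MoRef, $opts[0].Spec) | Out-Null
}
```

QueryVmfsDatastoreExpandOptions returning nothing is the signal that the host does not yet see any free capacity on the device, which is usually a missing rescan.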
Continue reading Growing a VMFS datastore with PowerCLI
My last post in this series was about getting a VVol UUID and figuring out what volume on a FlashArray it is. But what about the step before that? If I have a guest OS file system how do I even figure out what VMDK it is?
One basic option is correlating the bus ID and unit ID of the device in the guest with what VMware displays for the virtual disks.
But that always felt somewhat inexact to me. What if you accidentally look at the wrong VM object and then do something to a volume you did not mean to? Or the opposite?
Not ideal. Luckily, there is a more exact approach. I will focus this particular post on Windows and look at Linux in an upcoming one.
Continue reading PowerCLI and VVols Part IV: Correlating a Windows NTFS to a VMDK
One of the great benefits of VVols is the fact that virtual disks are just volumes on your array. So if you want to do some data management with your virtual disks, you just need to work directly on the volume that corresponds to each one.
The question is what virtual disk corresponds to what volume on what array?
Well, part of that question is very array dependent (are you using Pure Storage or something else?). But the first steps are always the same. Let’s start there, for the good of the order.
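The array-independent first step is reading the VVol UUID off each virtual disk's backing object. A minimal sketch, assuming a live vCenter connection and a placeholder VM name:

```powershell
# SKETCH: list the VVol UUID behind each virtual disk of a VM.
# BackingObjectId is populated for VVol-backed disks; it is empty on VMFS.
Get-VM 'MyVM' | Get-HardDisk | Select-Object Name, Filename,
    @{ Name = 'VVolUUID'; Expression = { $_.ExtensionData.Backing.BackingObjectId } }
```

From there, mapping that UUID to a specific array volume is the array-dependent part.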
Continue reading PowerCLI and VVols Part II: Finding VVol UUIDs
There are a variety of ways to assign and set a SPBM policy on a VM. I recently put out a vRO workflow package for everything VVols and Pure:
vRealize Orchestrator VVol Workflow Package
I also specifically blogged about assigning a policy to a VM with vRO:
Assigning a VVol VM Storage Policy with vRO
How do you do this with PowerCLI?
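For a quick preview, the core of it is the SPBM entity-configuration cmdlets. A sketch, assuming a connected vCenter and placeholder policy and VM names:

```powershell
# SKETCH: assign a VM Storage Policy to a VM home object and its disks.
$policy = Get-SpbmStoragePolicy -Name 'FlashArray-VVol-Policy'   # placeholder
$vm     = Get-VM 'MyVM'                                          # placeholder

# Apply the policy to the VM home object
Get-SpbmEntityConfiguration $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy

# Apply the same policy to every virtual disk on the VM
$vm | Get-HardDisk | Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
```

Note that the VM home and its disks are separate SPBM entities, which is why both pipelines are needed.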
Continue reading PowerCLI and VVols Part I: Assigning a SPBM Policy
A new ESXi 6.5 patch came out today:
And I wanted to upgrade my whole lab environment to it, but I haven’t set up Auto Deploy or Update Manager yet (I plan to, which will make all of this much easier to manage). So I wrote a quick and dirty PowerCLI script that updates each host to the latest patch and, if the host doesn’t have any VMs on it, puts it into maintenance mode and reboots it. I will reboot the other ones as needed.
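The maintenance-mode-and-reboot half of that logic looks roughly like this. This is a sketch in the spirit of the script, not the script itself; it assumes a connected vCenter and leaves the patch installation to whatever mechanism you use:

```powershell
# SKETCH: put empty hosts in maintenance mode and reboot them after patching.
foreach ($vmhost in Get-VMHost) {
    # Only touch hosts with no powered-on VMs
    $running = ($vmhost | Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' }).Count
    if ($running -eq 0) {
        Set-VMHost -VMHost $vmhost -State Maintenance -Confirm:$false | Out-Null
        Restart-VMHost -VMHost $vmhost -Confirm:$false | Out-Null
    }
}
```

Hosts still running VMs are simply skipped, matching the "reboot the other ones as needed" approach above.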
It is so short it is not really even worth throwing on GitHub, but I might make it cleaner and smarter at some point and put it there.
Continue reading Upgrading ESXi environment with PowerCLI