Tag Archives: VVol

Generating the default VVol Storage Container ID

A VVol datastore is not a file system, so it is not a traditional datastore. It is just a capacity quota. So when you “mount” a VVol datastore, you aren’t really performing a traditional mount operation, as there is no underlying physical storage to address during the mount. Instead of mounting some storage device, you are mounting what is called a storage container. This is the metadata object that represents a certain amount of capacity that can be provisioned from a given array. An array can have more than one storage container, for multi-tenancy or other reasons.

In a VMFS world, when you go to create a new datastore, you pass it the serial number of the storage device you want to format with VMFS. You know that serial because, well, you created the storage device. When you “mount” a VVol datastore, instead of a device serial, you supply the storage container UUID. It comes in the form of vvol:e0ad83893ead3681-b1b7f56a45ff64f1, though of course the characters will vary.
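
To illustrate, here is a minimal sketch of that generation, assuming (as the example above suggests) that the default container UUID is simply the array ID with its dashes stripped and a dash inserted at the halfway point. The array ID value below is illustrative:

  # Minimal sketch: derive the default storage container UUID from an array ID.
  # Assumption: the default container UUID is the array ID with dashes removed,
  # split into two 16-character halves.
  $arrayId = "e0ad8389-3ead-3681-b1b7-f56a45ff64f1"
  $hex = $arrayId -replace "-", ""
  "vvol:{0}-{1}" -f $hex.Substring(0, 16), $hex.Substring(16, 16)
  # Returns: vvol:e0ad83893ead3681-b1b7f56a45ff64f1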

Continue reading Generating the default VVol Storage Container ID

Mounting a VVol Datastore with PowerCLI

I’ve been making a lot of updates to my PowerShell module around VVols recently, and this was the last “table stakes” cmdlet I wanted to add. There are certainly more to come, but now we definitely have the basics. In the 1.2.2.1 release of the PowerShell module, I added a cmdlet called Mount-PfaVvolDatastore.

As of today we support a single VVol datastore–though we are working on adding support for more than one.
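
A rough sketch of what using it looks like follows. The parameter names here are illustrative rather than authoritative; check Get-Help Mount-PfaVvolDatastore in the module for the actual signature:

  # Illustrative sketch only; parameter names may differ from the shipped cmdlet.
  $fa = New-PfaConnection -EndPoint flasharray.example.com -Credentials (Get-Credential) -DefaultArray
  $cluster = Get-Cluster -Name "ProdCluster"
  Mount-PfaVvolDatastore -FlashArray $fa -Cluster $cluster -DatastoreName "FlashArrayVVolDS"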

Continue reading Mounting a VVol Datastore with PowerCLI

PowerCLI and VVols Part VII: Synchronizing a Replication Group

In this post, I will walk through how to synchronize a VVol-based replication group with PowerCLI. See my previous posts in this series for more context.

This post is somewhat specific to Pure Storage–the cmdlets of course are universal, but behaviors may not correlate to your storage array. So if you are using VVols on a non-Pure array, certainly consult your vendor.

Furthermore, the commands themselves are certainly specific to PowerCLI. That being said, the fundamentals of how this works with Pure are common to all orchestration tools, so you should be able to apply this information elsewhere, though of course the cmdlets and syntax will differ.
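
As a preview, the core of it with PowerCLI looks something like the sketch below: find the source replication group for a VM, look up its target group, and synchronize that. The VM name and point-in-time replica name are illustrative:

  # Sketch: synchronize the target side of a VVol replication group.
  $vm = Get-VM -Name "MyVM"
  $sourceGroup = Get-SpbmReplicationGroup -VM $vm
  # Sync-SpbmReplicationGroup is run against the target replication group.
  $targetGroup = (Get-SpbmReplicationPair -Source $sourceGroup).Target
  Sync-SpbmReplicationGroup -ReplicationGroup $targetGroup -PointInTimeReplicaName "demo-sync"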

Continue reading PowerCLI and VVols Part VII: Synchronizing a Replication Group

Updating a volume group name on the FlashArray for VVols

The FlashArray implementation of Virtual Volumes surfaces VMs on the FlashArray as standard volume groups, with each volume group named after its virtual machine. Each VVol is then added to or removed from the volume group as it is provisioned or deleted. These objects, though, are fairly flexible: we do not use the volume group as a unique identifier of the virtual machine; internally we use key/value tags for that.

The benefit of that design is that you can delete the volume groups, rename them, or add and remove other volumes. This gives you some flexibility to group related VMs, or whatever your use case might be, without breaking our VVol implementation.
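
For illustration, a rename is just a standard object rename on the array. A minimal sketch against the FlashArray REST API follows; the /vgroup endpoint path, the API version, and the pre-established $session are all assumptions here, so check your array’s REST documentation:

  # Hypothetical sketch: rename a volume group via the FlashArray REST API.
  # Assumes $session holds an authenticated web session to the array.
  $array = "flasharray.example.com"
  $body = @{ name = "new-vm-name-vg" } | ConvertTo-Json
  Invoke-RestMethod -Method Put -Uri "https://$array/api/1.19/vgroup/old-vm-name-vg" `
      -Body $body -ContentType "application/json" -WebSession $session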

Continue reading Updating a volume group name on the FlashArray for VVols

PowerCLI and VVols Part V: Array Snapshots and VVols

Another post in my series on VVols and PowerCLI. See my previous posts in the series for background.

This post will be about managing one-off snapshots with VVols on the FlashArray with PowerCLI.

One of the still semi-valid reasons I have seen DBAs give for “I don’t want to virtualize” is that they have simple snapshot/recovery scripts for their physical servers that allow them to quickly restore databases from snapshots. Doing this on VMFS requires a lot of coordination with the VMware layer.

So they tell the VMware team, “Okay, I will virtualize, but I want RDMs.” And the VMware team says, “We’d rather die.”

…and around in circles we go…

VVols provide this benefit (easy array snapshots) while keeping the benefits of VMware features (vMotion, Storage vMotion, cloning, etc.) without the downsides of RDMs.

So let’s walk through that process.
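
To set expectations, the array-side piece ends up being a one-liner with the Pure Storage PowerShell SDK once you know which FlashArray volume backs the VMDK (correlating that is covered earlier in this series). The connection details and volume name below are illustrative:

  # Sketch: take a one-off array snapshot of the volume backing a data VVol.
  $fa = New-PfaArray -EndPoint flasharray.example.com -Credentials (Get-Credential) -IgnoreCertificateError
  # Volume name is illustrative; resolve it from the VVol UUID first.
  New-PfaVolumeSnapshots -Array $fa -Sources "vm-name-vg/data-volume" -Suffix "manual-snap"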

Continue reading PowerCLI and VVols Part V: Array Snapshots and VVols

PowerCLI and VVols Part IV: Correlating a Windows NTFS to a VMDK

My last post in this series was about taking a VVol UUID and figuring out which volume on a FlashArray it is. But what about the step before that? If I have a guest OS file system, how do I even figure out which VMDK it is?

There is a basic option: correlating the bus ID and unit ID of the device in the guest with what VMware displays for the virtual disks.

But that always felt somewhat inexact to me. What if you accidentally look at the wrong VM object and then do something to a volume you did not mean to? Or the opposite?

Not ideal. Luckily there is a more exact approach. I will focus this particular post on Windows. I will look at Linux in an upcoming one.
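
The gist of the exact approach: on the FlashArray, the disk serial number visible inside Windows matches the array volume’s serial, and each VVol-backed virtual disk exposes a backing object ID, so you can match the two through the array. A sketch of the two lookups (drive letter and VM name are illustrative):

  # In the guest: get the disk serial number behind an NTFS drive letter.
  $partition = Get-Partition -DriveLetter E
  (Get-Disk -Number $partition.DiskNumber).SerialNumber

  # From PowerCLI: list each virtual disk and its VVol backing object ID.
  $vm = Get-VM -Name "MyVM"
  $vm | Get-HardDisk | Select-Object Name,
      @{N = "VvolId"; E = { $_.ExtensionData.Backing.BackingObjectId }}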

Continue reading PowerCLI and VVols Part IV: Correlating a Windows NTFS to a VMDK

vSpeaking Podcast Appearance: VVols

A few weeks ago I had the chance to stop by VMware HQ and catch up with some good friends at VMware. The highlight of the visit was recording an episode of the vSpeaking Podcast, hosted by Pete Flecha and John Nicholson, alongside VMware VVol engineering manager Patrick Dirks.

Honored to be on the podcast for the third time, thanks again for having me on Pete and John!

Topic? VVols of course!

Continue reading vSpeaking Podcast Appearance: VVols

Getting Started with vRealize Orchestrator and VVols

Over the past few weeks, I have been working on writing a vRealize Orchestrator workflow package for Virtual Volumes and the FlashArray. While that is not quite ready to go out, I think some basics for starting to use vRO and VVols are worth noting.

There are three main parts of using VVols with vRO:

  1. Core vCenter SDK–this is what you use to create VMs, datastores, etc.
  2. SMS–this is the service that manages storage providers (VASA) and replication for VVols.
  3. PBM–this is the service that you use for storage policy based features.
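
For reference, these map to three SOAP endpoints on the vCenter server that vRO (or any client) talks to. The paths below are the standard vSphere ones, but verify them against your vCenter version:

  # Illustrative: the three SDK endpoints behind the parts listed above.
  $vcenter = "vcenter.example.com"
  "https://$vcenter/sdk"       # core vCenter (VIM) SDK
  "https://$vcenter/sms/sdk"   # SMS: storage providers (VASA) and replication
  "https://$vcenter/pbm/sdk"   # PBM: storage policy based management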

Continue reading Getting Started with vRealize Orchestrator and VVols

VVol Data Mobility: Data from Virtual to Physical

One of the most strategic benefits of Virtual Volumes is how they open up your data mobility. Because there is no more VMDK encapsulation, VVols are just block volumes with whatever file system the guest OS in the VM puts on them. A VVol is really just a volume hosting NTFS, or XFS, or whatever. So if a target can read that file system, it can use that VVol. It does not have to be a VMware VM.

Let me start out with: YES our VVols deployment will be GA VERY soon. I am sorry (but not really) for continuing to tease VVols here.

This is one of the reasons we do not treat VVols on the FlashArray any differently than any other volume: because they aren’t different! So there is no reason you can’t move the data around. So why block it?

Some possibilities this capability opens up:

  1. Take an RDM and make it a VVol.
  2. Take a VVol and present it to an older VMware environment as an RDM.
  3. Take a VVol and present it, or a copy of it, to a physical server (see the sketch after this list).
  4. On the FlashArray we are also introducing something called CloudSnap, which will let you take snapshots of volumes (including VVols) and send them to NFS or S3, to be brought up as an EBS volume for an EC2 instance.
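
As a sketch of the third possibility above, presenting a copy of a VVol to a physical server with the Pure Storage PowerShell SDK could look like this. Volume and host names are illustrative, and the physical host is assumed to already be registered on the array:

  # Hypothetical sketch: copy a data VVol's volume and present it to a physical host.
  $fa = New-PfaArray -EndPoint flasharray.example.com -Credentials (Get-Credential) -IgnoreCertificateError
  New-PfaVolume -Array $fa -VolumeName "physical-copy" -Source "vm-name-vg/data-volume"
  New-PfaHostVolumeConnection -Array $fa -VolumeName "physical-copy" -HostName "physical-host-01"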

Continue reading VVol Data Mobility: Data from Virtual to Physical

Moving from an RDM to a VVol

Migrating VMDKs or virtual mode RDMs to VVols is easy: Storage vMotion. No downtime, no pre-creating of volumes. Simple and fast. But physical mode RDMs are a bit different.
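
For VMDKs and virtual mode RDMs, that really is the whole story. A minimal PowerCLI example (names are illustrative):

  # Storage vMotion a VM to a VVol datastore. Specifying a disk format is what
  # converts a virtual mode RDM during the move (an assumption worth verifying
  # in your environment).
  Move-VM -VM (Get-VM -Name "MyVM") -Datastore (Get-Datastore -Name "FlashArrayVVolDS") -DiskStorageFormat Thin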

As we all begrudgingly admit, there are still more than a few Raw Device Mappings out there in VMware environments. There are two primary use cases:

  • Microsoft Clustering. Virtual disks can only be used for Failover Clustering if all of the VMs are on the same ESXi host, which feels a bit like defeating the purpose. So most opt for RDMs so they can split the VMs up.
  • Physical to virtual. Sharing copies of data between physical and virtual environments (or another hypervisor) is the most common reason I see these days, mostly around database dev/test scenarios. The encapsulation of a VMDK can keep your data from being easily shared, so RDMs provide a workaround (see the sketch after this list).
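
For physical mode RDMs, since a FlashArray VVol is just a volume, the move can be an array-side copy: create a new VMDK of the same size on the VVol datastore, then overwrite its backing volume with the RDM’s volume. A hypothetical sketch of that last step (names are illustrative, and the copy should be coordinated with the VM powered off or the data quiesced):

  # Hypothetical sketch: overwrite a freshly created data VVol's volume with
  # the contents of the RDM's volume. $fa is an existing FlashArray connection.
  New-PfaVolume -Array $fa -VolumeName "vm-name-vg/data-volume" -Source "rdm-volume" -Overwrite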

Continue reading Moving from an RDM to a VVol