Refreshing a VM configuration from a vVol Snapshot

A lot of the time when we talk about vVols and snapshots, we talk about restoring the virtual disks (the data vVols). This of course is a huge benefit of vVols–the virtual disks are 1:1 to a volume on the array, so the snapshots (and other array features) can be used at the level of an individual virtual disk. Need to restore a database on virtual disk B (the E:\ drive or whatever)? Just use the snapshot restore to instantly refresh the entire disk. No need to mount a copied datastore, resignature, remove the old disk, etc. Just copy from the snapshot to the vVol volume and re-mount the file system in the guest. Fast and easy.

VMware snapshots exist with vVols too–they create array-based copies. But when you restore from them, you restore the whole VM. And their existence complicates the VM configuration: extra pointers, extra files, and so on. So a common vVol approach is to use VMware snapshots only temporarily, for backup procedures or for one-off protection of a VM while I run an upgrade, and then delete them once everything works.

What if I want to refresh the VM configuration from a snapshot? Keep the data on the disk as is, but refresh the VM config files (VMX mainly) from a snapshot?

This is possible with VMFS, but it is quite complex. For a vVol VM it is really simple. The process?

  1. Shut down the VM.
  2. Copy from the snapshot to the config volume.
  3. Reload the VMX.
  4. Power on the VM.
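
A minimal PowerCLI sketch of steps 1, 3, and 4 (the VM name is a placeholder, and an existing vCenter connection is assumed). Step 2 is array-specific (on a FlashArray it is a volume copy from the config vVol snapshot onto the live config vVol, using your array tooling of choice), so it is left as a comment here:

# Step 1: shut down the VM gracefully and wait for it to power off
$vm = Get-VM -Name "MyVvolVM"
Shutdown-VMGuest -VM $vm -Confirm:$false
while ((Get-VM -Name "MyVvolVM").PowerState -ne "PoweredOff") { Start-Sleep -Seconds 5 }

# Step 2: <copy the config vVol snapshot over the live config volume on the array>

# Step 3: reload the VMX so ESXi re-reads the refreshed configuration
($vm | Get-View).Reload()

# Step 4: power the VM back on
Start-VM -VM $vm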

So for some background: in a vVol world, the VM directory (which houses the VMX file, some logs, virtual disk pointers, and some other frivolities) looks like a folder. But in reality it is a logical pointer to a volume on the array. This volume is called a config vVol, and each “directory” in the vVol datastore maps to one. This config vVol is actually a mini VMFS. See more details here.

Since this is a volume, you can of course take snapshots of it. There are a few ways to do this: either create one-off snapshots of it or use protection policies.

Continue reading “Refreshing a VM configuration from a vVol Snapshot”

Creating a File Repository on a vVol Datastore with PowerShell

I wrote about a new feature in vSphere 7.0 U2 here:

What’s New in vSphere 7.0 U2 Storage: Creating a File Repository on a vVol Datastore

This allows you to create a large config vVol in a vVol datastore to store large files, for use cases like ISO repositories.

At launch this was only available via the direct API, but as of PowerCLI 12.3 it is available in PowerCLI as well.

Pretty simple to do.

Log in to vCenter:

Connect-VIServer -Server vcenter-70-siteb

Get the datastore manager object from the service instance of vCenter:
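
A rough sketch of that step and the create call that follows (names and sizes are placeholders, and the CreateDirectory() size argument added in 7.0 U2, including its units, should be confirmed against the vSphere API reference):

$serviceInstance = Get-View ServiceInstance
$datastoreManager = Get-View $serviceInstance.Content.DatastoreNamespaceManager

# Target vVol datastore (placeholder name)
$datastore = Get-Datastore -Name "MyVvolDatastore"

# Create the file repository (a larger config vVol) in that datastore.
# The final argument is the requested size (assumed here to be in MB).
$datastoreManager.CreateDirectory($datastore.ExtensionData.MoRef, "ISO-Repository", $null, 512000)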

Continue reading “Creating a File Repository on a vVol Datastore with PowerShell”

Pure Storage and Equinix Metal-as-a-Service

A few weeks back Pure Storage and Equinix introduced a joint offering creatively entitled Pure Storage on Equinix Metal.


You can view more details on the offering here.

https://metal.equinix.com/solutions/pure-storage/
Continue reading “Pure Storage and Equinix Metal-as-a-Service”

What’s New in vSphere 7.0 U2 Storage: Creating a File Repository on a vVol Datastore

This feature has many names. Creating a larger config vVol. Creating a sub-vVol datastore. Creating an ISO repository. Etc.

In 7.0 U2, VMware added a new feature that supports creating a custom-sized config vVol–while this was technically possible in earlier releases, it was not supported. Also, I should note that this is not supported by all vVol vendors, so of course speak to your vendor first.

First, to review what a config vVol is, check out this post:

What is a Config VVol Anyways?

In short, it is a mini VMFS that gets created when you create a directory in a vVol datastore (most commonly by creating a new VM). It defaults to 4 GB in size, which is enough to store the general VM files: some logs, VMDK pointers, the VMX file, and some other frivolities.

The issue, though, is that this was not large enough to store bigger things like ISOs or VIB files. So if you tried to upload something like that to a folder on a vVol datastore, it would fail with an out-of-space error. And you cannot upload to the root of a vVol datastore, because a vVol datastore is not itself a file system. So you had to use VMFS or NFS to store those objects.
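
As a hypothetical illustration of that old behavior (the datastore, folder, and file names below are placeholders), an upload like this would fail against a default 4 GB config vVol prior to 7.0 U2:

# Map the vVol datastore as a PowerCLI datastore drive and copy an ISO into
# a VM folder on it. Before 7.0 U2 this would fail with an out-of-space error.
$ds = Get-Datastore -Name "MyVvolDatastore"
New-PSDrive -Name "vvolds" -PSProvider VimDatastore -Root "\" -Datastore $ds
Copy-DatastoreItem -Item "C:\ISOs\Windows2019.iso" -Destination "vvolds:\SomeVMFolder\"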

This is no longer the case.

Continue reading “What’s New in vSphere 7.0 U2 Storage: Creating a File Repository on a vVol Datastore”

What’s New in vSphere 7.0 U2 Storage: Increased iSCSI Path Limit

A continuation of the series I started here.

This is a simple but important one: iSCSI path limits. Since the dawn of man, ESXi has had a disparity in path limits between iSCSI and Fibre Channel–32 paths for FC and 8 (8!) paths for iSCSI.
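
For context, a quick way to see how many paths each device currently has on a host (a simple PowerCLI sketch; the host name is a placeholder):

# List each disk device on the host with its current path count
Get-ScsiLun -VmHost (Get-VMHost -Name "esxi-01") -LunType disk |
    ForEach-Object {
        [pscustomobject]@{
            Device    = $_.CanonicalName
            PathCount = ($_ | Get-ScsiLunPath).Count
        }
    }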

Continue reading “What’s New in vSphere 7.0 U2 Storage: Increased iSCSI Path Limit”

What’s New in vSphere 7.0 U2 Storage: Multiple SPBM Configurations

The vSphere “update” releases are much more significant than they used to be–traditionally, most of the new features came in the major releases (6.5, 6.7, etc.). vSphere 7.0 U2 just released, and there are quite a few storage-related features in it.

One of the features discussed here:

https://blogs.vmware.com/virtualblocks/2021/03/09/vsphere-7-u2-core-storage/

…is called “SPBM Multiple Snapshot Rule Enhancements”.

What does that mean?

Continue reading “What’s New in vSphere 7.0 U2 Storage: Multiple SPBM Configurations”

After 7 Years at Pure Storage, a New Role

In a few weeks, I will hit 7 years at Pure Storage. It has been REALLY fun. Helping to build our VMware integration ecosystem. Pushing vVol adoption, use cases, and efforts forward. Building what I believe to be a world-class solutions team that I manage.

I’ve hit my seven-year itch. What is next? What should I tackle? Where can I make a big(ger) impact? What is something that is uncomfortable for me but will allow me to grow?

Clearly, public cloud is a thing. Like, duh. Our customers see that, the industry sees that, and of course Pure Storage sees that. I think the potential there is really just starting to be realized, and tapping that potential is really going to accelerate efforts on-premises too. I think some of the work we are doing with Equinix Metal is proof of that.

I’ve focused on VMware, specifically VMware storage, for my entire career. My first job out of university was just that. The VMware ecosystem, though, is not going away, and in fact it is doing some really cool stuff too. Tanzu. VCF. VMware Cloud. vVols. NVMe-oF. A lot of exciting and differentiating work in that realm. One could easily remain there and do super satisfying and impactful work. So continuing my focus there is definitely a great option.

Continue reading “After 7 Years at Pure Storage, a New Role”

How does Pure Storage integrate with vSAN?

I get this question quite a bit–in fact I got it just today while discussing ways to grow the Pure Storage/VMware business. To folks who are close to the VMware storage ecosystem it might seem like an odd question, but it’s a good one!

VMware puts a lot of energy and time into vSAN. Lots of technical information, lots of marketing, lots of webinars, lots of great people at VMware discussing its ins and outs. So if vSAN is such a big storage focus at VMware, how does Pure support vSAN? What do we do with it? How can you deploy vSAN to use Pure Storage FlashArray storage?

Well, let’s first take a look at what VMware is really offering with storage. At the highest level, VMware is offering Storage Policy Based Management (SPBM). In a strong and steady movement away from caring about specific datastores, it is much more about the features and protections you want to apply to your VMs, or more specifically their disks: I want it replicated, I want it encrypted, I want it fast. Etc.
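
As a quick illustration of the SPBM model in PowerCLI (a minimal sketch; the policy and VM names are placeholders, and the available capabilities depend on your environment and VASA provider):

# List the storage policies vCenter knows about (vSAN, vVols, host-based, etc.)
Get-SpbmStoragePolicy | Select-Object Name, Description

# Apply a policy to a VM's disks (placeholder VM and policy names)
$policy = Get-SpbmStoragePolicy -Name "FlashArray-Replicated"
$diskConfig = Get-VM -Name "MyVM" | Get-HardDisk | Get-SpbmEntityConfiguration
Set-SpbmEntityConfiguration -Configuration $diskConfig -StoragePolicy $policy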

Continue reading “How does Pure Storage integrate with vSAN?”

What’s New in Purity 6.1: ESXi NVMe-oF Fibre Channel Boot from SAN

One of the initial limitations around NVMe-oF was the (in)ability to boot from SAN. That is no longer the case, though you do need some fairly new drivers across the board to do it. As far as I am aware (as of the publication of this post), boot from SAN via NVMe is currently only supported via Fibre Channel, not RoCEv2. But I will keep an eye on that. You also need NVMe-oF/FC-capable HBAs–a list of them can be found here:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io&details=1&pFeatures=361&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc

I am using an Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter in my server, so I will go through the Emulex instructions. The ESXi side of things will be similar for other vendors, but the HBA driver version/configuration will be different.

First off, want to learn more about NVMe? Check out this from SNIA:

https://www.snia.org/educational-library?search=nvme&field_edu_content_type_tid=All&field_assoc_event_name_tid=All&field_release_date_value_2%5Bvalue%5D%5Byear%5D=&field_focus_areas_tid=All&field_author_tid=&field_author_company_value=&field_release_date_value=All&items_per_page=20

In order to boot from SAN via NVMe/FC you need a few other things:

  1. ESXi 7.0 Update 1 (or later).
  2. The 12.8.x release (or later) of the lpfc and brcmnvmefc drivers from Broadcom/Emulex.
  3. A FlashArray//X or //C with Purity 6.1 or later (for Pure customers, of course) and FC ports.
  4. The latest release of the HBA firmware.
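
As one hedged example of the Emulex-side configuration involved, FC-NVMe support on the lpfc driver is commonly enabled through a module parameter. Here it is set via PowerCLI's esxcli interface (the parameter name and value should be confirmed against the Broadcom/Emulex documentation for your driver release, and the host name is a placeholder):

# Hedged sketch: enable FC-NVMe on the Emulex lpfc driver. A host reboot is
# required for the module parameter to take effect.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi-01") -V2
$arguments = $esxcli.system.module.parameters.set.CreateArgs()
$arguments.module = "lpfc"
$arguments.parameterstring = "lpfc_enable_fc4_type=3"
$esxcli.system.module.parameters.set.Invoke($arguments)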
Continue reading “What’s New in Purity 6.1: ESXi NVMe-oF Fibre Channel Boot from SAN”

What’s New in Purity 6.1: NVMe-oF/Fibre Channel

NVMe. A continued march to rid ourselves of the vestigial SCSI standard. As I have said in the past, SCSI was designed for spinning disk–where performance and density are not friendly to one another. NVMe, however, was built for flash. The FlashArray was built for, well, flash. Shocking, I know.

Putting SCSI in front of flash, at any layer, constricts the performance density that can be offered. It isn't just about latency–it is about throughput/IOPS per GB. A spinning disk can get larger, but it really doesn't get faster. Flash performance, however, scales much better with capacity, so larger flash drives don't get slower per GB. But this really requires the hardware and the software to take advantage of it. SCSI has bottlenecks–queue limits that are low. NVMe has fantastically larger queues. It opens up the full performance, and specifically the performance density, of your flash, and in turn, your array.

We added NVMe to our NVRAM, then to the internal flash in the chassis, then NVMe-oF to our expansion shelves, then NVMe-oF to our front end from the host. The next step is to work with our partners to enable NVMe in their stacks. We worked with VMware to release it in ESXi 7.0. More info on all of this in the following posts:

Continue reading “What’s New in Purity 6.1: NVMe-oF/Fibre Channel”