Let me start out with: YES, our VVols deployment will be GA VERY soon. I am sorry (but not really) for continuing to tease VVols here.
One of the most strategic benefits of Virtual Volumes is the data mobility they open up. Because there is no more VMDK encapsulation, VVols are just block volumes with whatever file system the guest OS in the VM puts on them. So a VVol is really just a volume hosting NTFS, or XFS, or whatever. If a target can read that file system, it can use that VVol; it does not have to be a VMware VM.
This is one of the reasons we do not treat VVols on the FlashArray any differently than any other volume: because they aren't different! So there is no reason you can't move the data around. Why block it?
Some possibilities this functionality opens up:
- Take an RDM and make it a VVol
- Take a VVol and present it to an older VMware environment as an RDM
- Take a VVol and present it, or a copy of it, to a physical server (see the sketch after this list).
- On the FlashArray we are also introducing something called CloudSnap, which will let you take snapshots of volumes (aka VVols) and send them to NFS or S3, to be brought up as an EBS volume for an EC2 instance.
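To make the physical-server case concrete, here is a minimal sketch using the FlashArray Python SDK (the purestorage module); the array address, API token, host, and volume names are all hypothetical, and the exact workflow will vary by environment:

```python
# A rough sketch: copy a snapshot of a VVol-backed volume to a new volume
# and present that copy to a physical host. All names below are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# Copy a point-in-time snapshot of the data VVol to a brand-new volume.
array.copy_volume("vvol-sql-vm-data.nightly-snap", "sql-physical-copy")

# Connect the copy to a host object already created on the array for the
# physical server; from there the server just sees a block volume with NTFS,
# XFS, or whatever the guest put on it.
array.connect_host("physical-sql-host", "sql-physical-copy")
```

Since the VVol is a plain block volume, once the physical server rescans its SCSI bus it can simply mount the file system.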
Continue reading VVol Data Mobility: Data from Virtual to Physical
I have blogged a decent amount recently about VVols, and in many of those posts I mention config VVols. When using vSphere Virtual Volumes, VMs have one, some, or all of the following VVol types:
- Data VVol–every virtual disk you add creates a data VVol on your array
- Swap VVol–when you power on a VVol-based VM, a swap VVol is created; when you power it off, it is deleted
- Memory VVol–when you create a snapshot that stores the memory state, or when you suspend a VM, a memory VVol is created
- Config VVol–represents a folder on a VVol datastore
That last statement about config VVols deserves a bit more attention, I think. What does it really mean? Understanding config VVols is important when it comes to recovery and the like, so let's dig into this.
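Before digging in, it helps to see how the data VVols show up through the vSphere API. Here is a quick, hypothetical pyVmomi sketch (connection details and the VM name are placeholders) that lists a VM's virtual disks; on a VVol datastore each one is its own volume, with its own backing object ID:

```python
# List a VM's virtual disks and their backings. On a VVol datastore each
# virtual disk is a data VVol with its own backing object ID.
# Hostname, credentials, and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vvol-test-vm")
view.DestroyView()

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        # backingObjectId is populated for object-backed disks (VVol/vSAN)
        print(dev.deviceInfo.label, dev.backing.fileName,
              getattr(dev.backing, "backingObjectId", None))

Disconnect(si)
```

Notice that the fileName path still points at a folder on the VVol datastore; that folder is the config VVol.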
Continue reading What is a Config VVol Anyways?
Quick post. I put some lightboard videos together on vSphere Virtual Volumes. Lightboard videos are pretty fun to do; the unfortunate part is that I have horrible handwriting. So I immediately apologize for that.
A common question I get with these videos is how I write backwards. I don't. I am nowhere near that skilled; as you can see, I can barely write forwards. I write normally, which appears backwards, and the video team mirrors the video.
This is a three-part series; the entire playlist can be found here:
Continue reading VVol Lightboard Videos
Migrating VMDKs or virtual mode RDMs to VVols is easy: Storage vMotion. No downtime, no pre-creating of volumes. Simple and fast. But physical mode RDMs are a bit different.
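For the easy case, the Storage vMotion can even be kicked off via the API. A minimal, hypothetical pyVmomi sketch (vCenter, VM, and datastore names are placeholders):

```python
# Storage vMotion a VM to a VVol datastore: a RelocateVM_Task with only the
# target datastore set. vCenter, VM, and datastore names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
vm = next(o for o in view.view
          if isinstance(o, vim.VirtualMachine) and o.name == "sql-vm-01")
ds = next(o for o in view.view
          if isinstance(o, vim.Datastore) and o.name == "vvol-datastore")
view.DestroyView()

task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds))
# Wait on the task however you normally do; the VM stays online throughout.

Disconnect(si)
```

A Storage vMotion only moves a physical mode RDM's mapping file, though, not its data, which is where the rest of this post comes in.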
As we all begrudgingly admit, there are still more than a few Raw Device Mappings out there in VMware environments. Two primary use cases:
- Microsoft Clustering. Virtual disks can only be used for Failover Clustering if all of the VMs are on the same ESXi host, which feels a bit like defeating the purpose. So most opt for RDMs so they can split the VMs up across hosts.
- Physical to virtual. Sharing copies of data between physical servers and virtual machines (or some other hypervisor) is the most common reason I see these days, mostly around database dev/test scenarios. The VMDK encapsulation can keep your data from being easily shared, so RDMs provide a workaround.
Continue reading Moving from an RDM to a VVol
Yes. Any questions?
Ahem, I suppose I should prove it out. The real answer is: well, maybe. It depends on the array.
Debates have raged for quite some time about the performance of virtual disk types, and while the difference has diminished drastically over the years, eagerzeroedthick has always outperformed thin. Therefore, many users opted not to use thin virtual disks.
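For context, the classic disk types differ only in how allocation and zeroing are handled, which is visible right in the API. A hypothetical pyVmomi fragment showing how each type maps to backing flags when building a new virtual disk spec:

```python
# How the classic virtual disk types map to VMDK backing flags in pyVmomi.
# This fragment would be used inside a VirtualDeviceConfigSpec when adding
# a disk; it is illustrative, not a complete disk-creation workflow.
from pyVmomi import vim

def disk_backing(disk_type: str) -> vim.vm.device.VirtualDisk.FlatVer2BackingInfo:
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(diskMode="persistent")
    if disk_type == "thin":
        backing.thinProvisioned = True   # space allocated and zeroed on demand
    elif disk_type == "eagerzeroedthick":
        backing.eagerlyScrub = True      # fully allocated and pre-zeroed up front
    # default ("zeroedthick"): allocated up front, zeroed on first write
    return backing
```

Whether that zeroing behavior still matters on an array like the FlashArray is the question at hand.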
So first off, why the difference?
Continue reading Do thin VVols perform better than thin VMDKs?
I have been talking a lot about Virtual Volumes (VVols) lately with customers, and when I describe what they are, a frequent response is “oh, so basically RDMs then?”…
…ugh sorry I just threw up in my mouth a bit…
The answer to that is an unequivocal “no” of course, but the question deserves a thorough response.
So first let’s look at how they are the same, then let’s look at their differences. And not just how they compare to RDMs, but also VMDKs as you traditionally know them.
Continue reading Comparing VVols to VMDKs and RDMs
Virtual Volumes change quite a lot of things. One of them is how your storage volumes are actually connected. This change is necessary for two reasons:
- Scale. ESXi traditionally limits how many SCSI devices can be seen at once: 256 in 6.0 and earlier, and 512 in 6.5. That is still not enough when every virtual disk is its own volume.
- Performance. Virtual Volumes are provisioned, de-provisioned, moved, and accessed constantly. If every one of these operations required a SCSI rescan, we would see rescan storms unlike anything this world has ever witnessed.
So VMware changed how this is done. Continue reading Virtual Volumes: VVol Bindings Explained
Virtual Volumes provide a great many benefits, some large, some small. Depending on the VM, recovering a deleted one could be either of those.
With traditional VMFS, once you have selected “delete from disk,” restoring that VM could be quite a process: either restoring from backup or hoping you had a snapshot of the VMFS volume on the array. Either way, you are probably going to incur data loss, as the last backup or snapshot is unlikely to be from the moment right before the deletion.
Let me be VERY clear here. Regardless of the rest of this post, I am not saying that once you move to VVols you no longer need backup! You absolutely still do. VVols just give you a nice way to do an immediate recovery of the latest point-in-time without losing anything, assuming your array supports it.
“Wait, did you say delete VM “AD” or VM “80”?”
“Um… definitely not AD that’s our active directory…”
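On the FlashArray specifically, a deleted volume sits in a destroyed (but recoverable) state for 24 hours before it is eradicated, so pulling the deleted VM's VVols back can be a one-liner per volume. A hypothetical sketch with the purestorage Python SDK (array address, token, and volume names are made up):

```python
# Recover a deleted VM's VVols from the FlashArray's destroyed state.
# Destroyed volumes linger for 24 hours before eradication, so if you act
# quickly nothing is lost. All names below are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# The config and data VVols belonging to the deleted VM (hypothetical names).
for vol in ("vvol-ad-vm-config", "vvol-ad-vm-data-0"):
    array.recover_volume(vol)
```

Re-registering the recovered VM back into vCenter is a separate step, but the data itself comes back exactly as it was at deletion.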
Continue reading Recovering a Deleted Virtual Machine with VVols
With VMFS-6, space reclamation is now an automatic, but asynchronous, process. This is great because, well, you don't have to worry about running UNMAP anymore. But since it is asynchronous (and I mean like 12-24 hours later asynchronous), you lose the instant gratification of reclamation.
So you find yourself wondering: did it actually reclaim anything?
Besides looking at the array and seeing the space reclaimed there, how can I tell from ESXi whether my space was reclaimed?
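A sanity check before you go hunting for reclaim counters is confirming that automatic UNMAP is even enabled on the datastore. A hypothetical pyVmomi sketch (connection details are placeholders); on VMFS-6 an unmapPriority of "low" means automatic UNMAP is on, "none" means it is off:

```python
# Report the automatic UNMAP setting for each VMFS-6 datastore.
# Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    info = ds.info
    if isinstance(info, vim.host.VmfsDatastoreInfo) and info.vmfs.majorVersion >= 6:
        print(ds.name, "VMFS", info.vmfs.version,
              "unmapPriority:", info.vmfs.unmapPriority)
view.DestroyView()
Disconnect(si)
```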
Continue reading Monitoring Automatic VMFS-6 UNMAP in ESXi
EnableBlockDelete is a setting in ESXi that has been around since ESXi 5.0 P3, I believe. It was initially introduced as a way to turn on and off the automatic VMFS UNMAP feature introduced in 5.0, a feature that was eventually canned in 5.0 U1.
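For reference, the setting can be read or flipped like any other advanced option. A hypothetical pyVmomi sketch, connected directly to a host (address and credentials are placeholders):

```python
# Query and enable the VMFS3.EnableBlockDelete advanced setting on an ESXi
# host. Host address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="<password>", sslContext=ctx)

# When connected directly to a host there is exactly one HostSystem.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
opt_mgr = host.configManager.advancedOption

current = opt_mgr.QueryOptions("VMFS3.EnableBlockDelete")[0]
print("EnableBlockDelete:", current.value)

# 1 enables it; 0 disables it.
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="VMFS3.EnableBlockDelete", value=1)])

Disconnect(si)
```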
The description of the setting back in 5.0 was “Enable VMFS block delete.” The setting was then hidden and made defunct (it did nothing whether you turned it off or on) until ESXi 6.0, when the description changed to “Enable VMFS block delete when UNMAP is issued from guest OS.” Continue reading In-Guest UNMAP, EnableBlockDelete and VMFS-6