VVols: A Whole New World for SQL Server Virtualization

Ah, VVols. The VMFS Datastore killer. And very soon, the RDM killer.

Virtual Volumes (VVols) is a spec from VMware that allows storage vendors to implement virtual disks as they see fit. On FlashArray, we’ve implemented VVols virtual disks as just regular volumes on the array.

Think about what that means for a second.

It means that you now get virtual disk granularity not only for data services on the array, like snapshots and clones, but also for replication. You’re no longer forced to snapshot/clone/replicate entire datastores, or to deal with pesky, slow SCSI bus rescans and even more painful datastore resignature operations.

For a technical introduction to VVols, go ahead and watch this playlist of videos from my coworker and VMware extraordinaire, Cody Hosterman. Then keep reading for a discussion of running SQL Server workloads on VVols.

Done with the videos? Cool. If not, make sure you watch them later – they have the best explanation of how VVols work that I’ve seen out there. And even if you don’t use FlashArray, the explanation and concepts apply (mostly) the same.

SQL Server on VVols: The Good, The Bad, and The Ugly

Did I scare you with the title? Sorry, that was meant to throw you off.


With VVols, you maintain the performance of an RDM with the agility of a VMDK. No more worrying about thin vs. eager-zeroed thick provisioning or any of that nonsense!

Moving between physical volumes, RDMs, and VVols is extremely straightforward on FlashArray. A move from VMFS to VVols basically entails a simple storage vMotion, which on FlashArray is offloaded via VAAI XCOPY; since FlashArray is a data-reducing array, that XCOPY is a deferred, metadata-only operation. Your new “copy” will only consume a little bit of space for the metadata, and nothing else. An RDM-to-VVols migration is also pretty straightforward, and there’s no VMFS datastore “container” to worry about. The layout on the array is exactly the same whether you go physical, RDM, or VVols. And you’re not “stuck” with VVols once you decide to go down that route for a given virtual disk. If you need to go back to VMFS, just storage vMotion it back, simple as that.

When Can I NOT Use VVols Then?

There is only one scenario where you can’t quite use VVols for SQL Server: shared storage between SQL Server instances, as is the case with Failover Cluster Instances (FCIs). You probably just call those “clusters,” while VMware keeps calling them “MSCS” (for Microsoft Cluster Server), nomenclature that dates from 2003, feels like 1989, and that no one should be using anymore.

However, there’s good news on that front! VMware will support SCSI-3 reservations on VVols starting with version 6.7, which is in beta right now. Go ahead and read Eric Siebert’s blog post on VVols futures for more information. At that point, I don’t want any of you even mentioning VMFS to me, unless you’re running a version of vSphere prior to 6.5 (our minimum requirement for VVols). But quite honestly, VVols is a pretty compelling reason to upgrade to 6.5, or to 6.7 when it comes out!

So What About Cloning?

My coworker and former (should I say recovering?) Oracle DBA Somu Rajarathinam wrote a great post on Oracle database cloning with VVols and FlashArray. Rather than translating that post into SQL Server lingo, I’ll just link to it here and let you enjoy. The concepts for SQL Server are extremely similar, so just picture in your head what that would look like.

That’s it for this post, but let’s be sure to continue the conversation on Twitter by mentioning me (@DBArgenis), Cody (@codyhosterman) or just using the hashtag #PureStorage.

Thanks for reading,



16 Replies to “VVols: A Whole New World for SQL Server Virtualization”

  1. Thank you for the release timeframe for vSphere 6.7 (“slated for release early next year”). I had not seen that published anywhere yet. VMware must have released people from the NDA.

      1. Well with Pure Storage re-blogging this article, I’ll take it as somewhat official 🙂 They’re one of those fancy VMware Elite Technology Alliance Partners after all.

  2. I’m bummed! I was running down the VVols path, then learned the replication service we were going to use doesn’t support them yet! Is that more on the replication vendor or the VMware side? Last I saw, SRM wasn’t supported either.

    1. 2nd that. We were trying to implement a new project with Pure and VVols, only to find out that SRM is not on the supported list.

      1. I am doing what I can to push VMware to do this. I think we are getting close, but please open up requests with VMware for SRM and VVol support. This is not something storage vendors can add support for; it is entirely up to VMware to get SRM to communicate with VASA.

  3. Have you seen a script to automate snap cloning of prod MS SQL data and log drives to dev servers leveraging VVols? We use this functionality heavily today with in-guest mounted iSCSI volumes and would like to leverage VVols.

  4. Hi,

    I am setting up a Microsoft Failover Cluster with the following parameters:

    – VMware vSphere 6.7
    – Pure Storage FlashArray M10R2
    – Cisco UCS Blade B200 M3
    – Windows Server 2016
    – All disks are on VVols (OS and data disks are on separate VMware Paravirtual SCSI controllers)

    When I try to validate the configuration, it throws an error message about SCSI-3 Persistent Reservations.

    Is there any additional step I must take on the storage or VMware side to make SCSI-3 PR available to my VMs?

    Hope you can help me with this.

    1. This will currently not work on Pure. In our testing with VMware, we identified an issue that prevents MSCS from working with VVols on Pure. VMware has created a patch but has not yet released it, and I do not yet know the ETA. Until then, you will (unfortunately) have to continue to use RDMs if you want to do failover clustering.

      1. Thanks for your response Mr. Hosterman!!

        So I did reconfigure my nodes with RDMs, but when I validate the configuration for MSCS it still gives me this really annoying “SCSI-3 PR” message.

        Maybe I am missing something in between; it seems not to support SCSI-3 commands, but the M10 Pure array is fully compatible as far as I know.

        Is there some additional config I should do on the VMware or Windows side?

        Best regards.

        P.S.: Keep up the good work!! All the research and development you have done on the integration between VMware and Pure is totally awesome.

        1. Hi Jaroslav,

          Alternatively, you can try PSP_MRU or PSP_FIXED. I have heard from the VMware team that the issue is only with PSP_RR.


  5. Hopefully, Cody will have good news for us that the Pure array can now work with SCSI-3 reservations on VMware, since it was over a year ago that they discussed this. I can’t find any info online.

    If so, how do you enable it?
