Storage capacity reporting seems like a pretty straightforward topic: how much storage am I using? But once you introduce multiple levels of thin provisioning AND data reduction, not all usage is equal (does it compress well? does it dedupe well? is it just zeroes?).
This multi-part series will break it down in the following sections:
- VMFS and thin virtual disks
- VMFS and thick virtual disks
- Thoughts on VMFS Capacity Reporting
- VVols and capacity reporting
- VVols and UNMAP
Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.
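To make the "how much am I using?" question concrete, here is a minimal sketch (using pyVmomi; the vCenter hostname and credentials are placeholders) that pulls the numbers vSphere itself reports for each datastore: capacity, free space, and uncommitted (promised-but-not-yet-written thin space). Note that none of these reflect what the array does with compression, deduplication, or zero removal, which is exactly where the reporting gets interesting.

```python
# Minimal sketch with pyVmomi: what vSphere itself reports per datastore.
# Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)

GB = 1024 ** 3
for ds in view.view:
    s = ds.summary
    used = s.capacity - s.freeSpace            # what VMFS says is consumed
    provisioned = used + (s.uncommitted or 0)  # plus thin-provisioned promises
    print(f"{s.name}: capacity={s.capacity/GB:.0f}G "
          f"used={used/GB:.0f}G provisioned={provisioned/GB:.0f}G")
    # None of these reflect array-side data reduction (compression/dedupe/zeros).

Disconnect(si)
```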
Continue reading VMware Storage Capacity Reporting Part I: VMFS and Thin Virtual Disks
Yes. Any questions?
Ahem, I suppose I will prove it out. The real answer is, well, maybe. It depends on the array.
Debates have raged for quite some time around the performance of the different virtual disk types, and while the difference has diminished drastically over the years, eagerzeroedthick has always outperformed thin. Because of that, many users have opted not to use thin virtual disks.
So first off, why the difference?
Continue reading Do thin VVols perform better than thin VMDKs?
I was having a best practices conversation the other day with a customer and the usual topic came up: any recommendations when it comes to virtual disk type? We had the usual thin-versus-thick conversation and went through the ins and outs of the two. In the end it doesn’t matter too much, especially with some recent improvements in ESXi 6.0. Then the further question came up: what about zeroedthick versus eagerzeroedthick? My initial reaction was that it doesn’t matter for the most part. But we had just been talking about Space Reclamation (UNMAP) and I realized that, actually, I did have a big preference, and it was EZT. Let me explain why.
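As background for that preference, the three disk types come down to two booleans on the virtual disk backing when a disk is created through the vSphere API. Here is a minimal sketch of that mapping with pyVmomi; the helper name is just for illustration, and the surrounding VM-reconfigure plumbing is omitted.

```python
# Sketch: how thin / zeroedthick / eagerzeroedthick map onto the vSphere API's
# disk backing flags (pyVmomi). The reconfigure-VM plumbing around this is omitted.
from pyVmomi import vim

def disk_backing(disk_type: str) -> vim.vm.device.VirtualDisk.FlatVer2BackingInfo:
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    if disk_type == "thin":
        backing.thinProvisioned = True   # allocated and zeroed on first write
    elif disk_type == "zeroedthick":
        backing.thinProvisioned = False  # space reserved up front...
        backing.eagerlyScrub = False     # ...but blocks still zeroed on first write
    elif disk_type == "eagerzeroedthick":
        backing.thinProvisioned = False
        backing.eagerlyScrub = True      # every block zeroed at creation time
    else:
        raise ValueError(disk_type)
    return backing
```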
Continue reading ZeroedThick or EagerZeroedThick? That is the question.
Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into the exact design changes, so what follows is what I observed and my educated guess as to how it was done. Look for more details as time goes on.
***This blog post turned out much longer than I expected; it probably should have been a two-parter, so I apologize for the length.***
As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handled XCOPY. XCOPY, as you may be aware, is the SCSI command ESXi uses to clone, Storage vMotion, or deploy VMs from template on a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered on three things:
- Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e., how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle copy sizes up to 256 MB (though the ODX implementation is a bit different).
- Allow ESXi to query the Maximum Segment Length via the Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells ESXi what to use as a maximum transfer size, which would spare the end user the hassle of making manual transfer size changes.
- Allow for thin virtual disks to leverage a larger transfer size than 1 MB.
The first two are currently supported by VMware in a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
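For reference, the transfer size in question is governed by the DataMover.MaxHWTransferSize advanced setting on each host (the value is in KB; 4 MB is the default and 16 MB the ceiling mentioned above, and it is the same knob esxcli exposes as /DataMover/MaxHWTransferSize). Below is a minimal pyVmomi sketch for checking and raising it; the ESXi hostname and credentials are placeholders, and it assumes a direct connection to a standalone host.

```python
# Sketch: query/raise the XCOPY transfer size on an ESXi host via the
# advanced-settings OptionManager (pyVmomi). Value is in KB; 16384 = 16 MB.
# Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root", pwd="password", sslContext=ctx)

# Direct-to-host connection: single datacenter -> compute resource -> host
content = si.RetrieveContent()
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
opt_mgr = host.configManager.advancedOption

current = opt_mgr.QueryOptions("DataMover.MaxHWTransferSize")[0]
print("MaxHWTransferSize (KB):", current.value)

# Raise it to the 16 MB maximum referenced in the post.
opt_mgr.UpdateOptions(changedValue=[vim.option.OptionValue(
    key="DataMover.MaxHWTransferSize", value=16384)])

Disconnect(si)
```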
Continue reading XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks