FlashArray XCOPY Data Reduction Reporting Enhancement

Recently the Purity Operating Environment 4.1.1 release came out with quite a few enhancements: many are for replication, there are some new GUI features, and the new vSphere Web Client Plugin is included. What I want to talk about here is a space reporting enhancement that was made concerning VAAI XCOPY (Full Copy). First, some history…

First, a quick refresher on XCOPY. XCOPY is a VAAI feature that offloads virtual disk copying/migration within a single array–so operations like Storage vMotion, cloning, or Deploy from Template. Telling an array to move data from location A to location B is much faster than having ESXi issue tons of reads and writes over the SAN, and it also reduces CPU overhead on the ESXi host and traffic on the SAN. Faster cloning/migration and less overhead–yay! This lets ESXi focus on what it does best (managing and running virtual machines) while letting the array do what it does best (managing and moving data).

How is it done on the Pure Storage FlashArray? Well, we don’t believe in duplicating data inside an array (that would be a waste) or moving data (which doesn’t even make sense when it is one big storage pool and the volumes are just abstractions); instead we just manage metadata. So if a virtual disk (or VM) is moved from one volume to another, or is cloned, we simply move or copy metadata pointers. We don’t relocate or copy data–we just add or move pointers to it. This makes XCOPY on the FlashArray extraordinarily fast. See more about that here.
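As a toy model of that idea (entirely mine–this is not how Purity is actually implemented), a metadata-only clone amounts to copying a pointer table:

```python
# A toy model (entirely illustrative) of metadata-only cloning: a volume is
# just a map from logical addresses to shared physical blocks, so a "copy"
# duplicates pointers rather than data.
physical_blocks = {0xA: b"virtual disk contents..."}  # the single stored copy

source_volume = {0: 0xA}             # LBA 0 points at physical block 0xA
clone_volume = dict(source_volume)   # XCOPY-style clone: copy the pointers

# Both volumes now reference the same physical block; no data was moved.
assert clone_volume[0] == source_volume[0] == 0xA
```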

[Image: xcopy]

Second, a quick overview of what data reduction is. Data reduction on the FlashArray comes from compression, deduplication and pattern removal (this does NOT include thin provisioning–including that would be pointless). The amount of space that a host actually wrote (or rather, thinks it wrote) is called “virtual space”. The “physical space” is how much is actually sitting on the SSDs. So let’s say a host wrote 100 GB: that is 100 GB of virtual space. Through the data reduction techniques just mentioned, only 25 GB is actually written to the SSDs, so 25 GB of physical space. Divide the virtual space by the physical space and you have a reduction of 4:1. A simplistic view, but that’s the gist.
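To make that arithmetic concrete, here is a minimal sketch in Python (the function name and sample values are mine for illustration, not anything from Purity):

```python
def data_reduction_ratio(virtual_gb: float, physical_gb: float) -> float:
    """Data reduction = virtual (host-written) space / physical (on-SSD) space."""
    return virtual_gb / physical_gb

# The example above: the host wrote 100 GB, but only 25 GB lands on the SSDs
# after compression, deduplication and pattern removal.
print(f"{data_reduction_ratio(100, 25):.1f}:1")  # prints 4.0:1
```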

Enough of that. What did we change?

XCOPY functionality on the FlashArray is very much based on our snapshot technology–it shares a lot of the same code. When it comes to our snapshots and data reduction reporting, we do not count snapshots toward data reduction numbers. We feel that would be a misleading way to report data reduction because it would artificially inflate the ratio. For example:

(For simplicity’s sake) let’s say we have only one volume on an array and the data reduction (from compression, dedupe, pattern removal, etc.) is nothing, so 1:1 (yes, this is essentially impossible). But then we take a snapshot. If we included snapshots in virtual space, our virtual space would double while our physical space stayed the same, so our data reduction ratio would go from 1:1 to 2:1. Take another snapshot and it would go to 3:1, and on and on…
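A quick sketch of that inflation, under the same assumptions (the function name and figures are mine for illustration):

```python
def ratio_if_snapshots_counted(volume_gb: float, snapshots: int,
                               physical_gb: float) -> float:
    """What the ratio WOULD look like if every metadata-only snapshot were
    counted as additional virtual space--exactly what Purity avoids doing."""
    virtual_gb = volume_gb * (1 + snapshots)
    return virtual_gb / physical_gb

# One 100 GB volume at a true 1:1 reduction, then snapshots are taken:
for snaps in range(3):
    print(f"{snaps} snapshots -> "
          f"{ratio_if_snapshots_counted(100, snaps, 100):.0f}:1")
# 0 snapshots -> 1:1, 1 snapshot -> 2:1, 2 snapshots -> 3:1
```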

This is not real data reduction and, in our view, should not be included in your reduction numbers until the data is actually used. So snapshot data is only counted once it has been associated with an actual volume–meaning a host can use it. That makes it “real data” that can validly be included in data reduction numbers.

Let me reiterate: we do not count normal array-based snapshots or things like zeroes (from WRITE SAME or otherwise) in data reduction ratios.

So, XCOPY. Since XCOPY was based on our snapshots, XCOPY copies were not included in the data reduction we report per volume or per array. So when a VM was cloned or deployed from a template, we didn’t take credit for that reduction even though it was real data used by a real VM: because the copy wasn’t seen as host writes (it was treated like a snapshot), it wasn’t added to the virtual space. This made our virtual space number much lower than it should have been, leading to some confusion when comparing what you saw in VMware with what you saw on the array.

So let’s use an example. We have a 50 GB VM deployed whose data wasn’t reduced at all, so 1:1 (50 GB of virtual space and 50 GB of physical space). Clone that VM with XCOPY and the data reduction for the array remains at 1:1, because the virtual space stays at 50 GB (as does the physical) even though VMware sees 100 GB as written. Add another clone and everything is still 1:1, with 50 GB of virtual and physical space. You get the point.

This has changed in 4.1.1. Take the same 50 GB VM example, now running on 4.1.1. After cloning it once, the virtual space is 100 GB against the same 50 GB of physical space, so 2:1. Add another clone and it is 150 GB of virtual and 50 GB of physical, so 3:1.
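Here is a small sketch contrasting the two reporting behaviors under the assumptions of this example (one fully unreducible 50 GB VM, every clone a pure XCOPY copy; the function name is mine):

```python
def reported_ratio(vm_gb: float, clones: int, counts_xcopy: bool) -> float:
    """Reported data reduction for one VM plus its XCOPY clones.

    Physical space stays at vm_gb either way, since XCOPY on the FlashArray
    only copies metadata pointers. Pre-4.1.1 the clones added nothing to
    virtual space; from 4.1.1 on, each clone counts as host-written data.
    """
    virtual_gb = vm_gb * (1 + clones) if counts_xcopy else vm_gb
    return virtual_gb / vm_gb

vm_gb, clones = 50, 2
print(f"Pre-4.1.1:  {reported_ratio(vm_gb, clones, False):.0f}:1")  # 1:1
print(f"From 4.1.1: {reported_ratio(vm_gb, clones, True):.0f}:1")   # 3:1
```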

Your next question might be: is this retroactive? Meaning, if I upgrade the array, will I get credit for previous XCOPY operations, or does this only apply to new XCOPY clones? It is indeed retroactive: XCOPY reduction from both before and after the upgrade is reported as described.

So, an extreme example to illustrate the point: I had an array with 420 VMs deployed from a template with XCOPY on Purity 4.0.x. The data reduction was 4.4:1–mostly VMFS metadata reduction, plus some logs and other things that don’t get copied with XCOPY.

[Image: xcopyextremebefore]

Upgrade to 4.1.1 and these 421 copies of the same VM (the template plus its 420 clones) are reported properly at 421:1.

[Image: xcopyextremeafter]

We don’t show numbers above 100:1 in the GUI, which is why it just shows >100 to 1. And yes, this is a very unrealistic example: the VMs were never powered on, so their data never changed, and they were essentially the only thing on the array. So what should YOU expect?

Here is another VMware-hosting array of mine with a mix of XCOPY-deployed VMs that have been running for a while and changing data, plus VMs that were deployed from scratch, etc.–similar to what you would see in the real world:

[Images: xcopyrealbefore, xcopyrealafter]

The ratio went up by about 1. My other array went from 2.8:1 to 5.5:1. This is what I would expect in a more common environment.

A final note:

This does NOT reduce the used physical capacity or improve the actual data reduction. While we are continuously improving our reduction behaviors, this particular change doesn’t improve reduction by itself.

We are simply correcting our data reduction reporting to what it should be: reduction is now reported the same whether the VMs were deployed with XCOPY or without it, providing more accurate and consistent information across data-cloning mechanisms. So if you see your data reduction ratio jump a little (or a lot) after an upgrade–this is why.
