Most of us try to optimize how we personally spend money–and a common way is to examine our overall operating expenditures and figure out how to reduce them. How much do I spend on groceries? Can I buy cheaper groceries? Can I buy less? What streaming services do I pay for? Should I cancel all of them? Some of them? Drop down a tier? Etc.
This is where we find money in our budget. We certainly try to increase our income, but at the same time a surefire and impactful way to improve “cash at hand” is to reduce, in some way, what we spend. The goal, of course, is not to return that money to our employer, but to invest it in some more impactful way to improve our life or our future. Maybe buying something cool, investing in retirement, taking a trip, whatever. Even if you do increase your income, making these changes just allows you to have even more money to spend in more impactful ways.
Cloud costs are no different. Budgets don’t always increase–an easier path to do more is to make better use of the money you do have to spend. This is essentially a universal truth.
Three of the most common expenses in the cloud are VMs, networking, and storage. This holds true across nearly all cloud vendors. Where you spend the most money is usually the right place to start when it comes to cost optimization. Best bang for your buck.
With VMs, it is often about right-sizing, auto-scaling, or instance type (spot, reserved, etc.).
With networking, it is often about design: being careful of egress and making the right use of endpoints and related efficiencies.
What about storage? This is trickier. We talk about object storage quite a bit, and of course there is room to optimize with tiers and the like, but an often-missed and often huge part of that cloud storage bill is block. Block that gets paid for because, well, what can you do? It is like your power bill: you use what you use. It is tough to make appreciable changes via your behavior.
You can’t just “store less data”. That data is important–often that data is your company. You can try to reduce provisioned IOPS or throughput. Drop storage tiers. Use fewer features. Use less durable or less available storage. Etc. But these options don’t really fall under the category of optimizing. These aren’t “efficiencies”. These are sacrifices.
These choices punish your applications. These create tradeoffs. Fewer snapshots -> fewer points in time. Less durable -> more risk. Less performance -> worse experience. A bit of a misnomer with cloud storage is that you pay for what you use.
You pay for what you think you are going to use. You provision for the IOPS peaks. You provision for expected capacity needs. Sure you can make changes to both, but how frequently? How quickly can you make that change? Can you go down? Go down independently? Are capacity and performance tied? You need 10,000 IOPS for one hour a day, but what about the other 23 hours? If you delete data in the app does the capacity usage go down?
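To make the peak-provisioning point concrete, here is a small sketch. The prices and the hourly workload profile are purely hypothetical (not actual cloud rates); it just shows how provisioning for a one-hour peak all day compares to what the workload actually needs hour by hour:

```python
# Illustrative sketch with hypothetical numbers: the cost of provisioning
# block storage for peak IOPS 24/7 vs. what the workload needs each hour.

PRICE_PER_IOPS_HOUR = 0.000007  # hypothetical $/provisioned-IOPS-hour

# Hypothetical daily profile: one hot hour at 10,000 IOPS, 23 quiet hours at 500.
hourly_iops_needed = [10_000] + [500] * 23

# Provisioned IOPS can't follow the curve: you pay for the peak, all day.
provisioned_cost = 10_000 * 24 * PRICE_PER_IOPS_HOUR

# What you'd pay if performance could be borrowed from a pool and returned.
ideal_cost = sum(iops * PRICE_PER_IOPS_HOUR for iops in hourly_iops_needed)

print(f"provisioned for peak: ${provisioned_cost:.2f}/day")
print(f"actual hourly need:   ${ideal_cost:.2f}/day")
print(f"stranded:             {100 * (1 - ideal_cost / provisioned_cost):.0f}%")
```

The exact numbers don’t matter–the shape does: the spikier the workload, the more of the provisioned peak sits stranded.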
There are a ton of places where problems occur, and you can’t really just right-size your way out of them. What if the app changes?
This is a generic example of a problem we solved at Pure Storage years ago.
Let me tell you a story.
It was the year of VDI. No, wait. This is the year of VDI. Well, uh, definitely this year is. Hmm, maybe this year.
An ongoing problem. Why? Well, VDI was tricky. You either sized for performance or for cost. Impossible to do both. VDI desktops went on datastores whose performance was tied to arrays, which were tied to some number of spindles or cache or tiers. Or all of them. VDI started hot in the morning (everyone logs in), then got cold, then got hot again during the log-offs, refreshes, virus scans, etc.
To make VDI work, the experience needed to be as fast as or faster than a local laptop. This meant low latency and a whole lot of IOPS and throughput. So pin the SSD tier, partition some cache, create a tiering policy. Create tons of datastores–spread it out! But that part mattered for like an hour. Then mostly not at all. Day-to-day use of productivity apps was mostly in-memory or network traffic. So all of that performance and hardware sat wasted. You could then switch to other tiers, but could it be done fast enough? If it couldn’t, performance would be bad the next day. On and on. VDI failed because it was either too expensive or too slow. Or both. Or, frankly, too complex to manage.
Pure came along. Our performance wasn’t tied to some tier, or cache, or spindles (certainly not that!). It was tied to the array. Any given volume could offer the full performance of a Pure Storage FlashArray. But once it didn’t need it anymore, any other volume could instantly use that pool of performance or share in it. This made volume creation and provisioning simple from a performance perspective. Through software, we found a way to throw less hardware at the problem, not strand resources, and be performant (flash). And of course how we made that cost effective was through things like thin provisioning and our impressively powerful data reduction. VDI got something like 10:1 data reduction or more–making it cheap and fast!
Since these VDI arrays went cold during the day, it allowed customers to put apps on them that heated up in the day–further effectively dropping the cost of the VDI solution via consolidation. “Thin provisioning” of performance.
These same concepts are powerful cost cutting tools. What we use(d) to make flash cheap can make cloud storage cheap. Without sacrificing features or durability or availability.
Right-sizing storage is helpful and can make some minor impact, but thin provisioning (cut 30% off the cost) and data reduction via compression, dedupe, and pattern removal (drop the cost to a third or less) really start to make a true impact. Our replication preserves data reduction–reducing what is sent on the wire compared to individual apps each sending their data independently. Globally reduced, differential, and never resent, even across different volumes. No provisioning of IOPS or throughput per app–they can use what they want when they need it and then return it to the pool. Features like UNMAP remove space that is no longer in use and keep usage efficient.
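As a back-of-the-envelope illustration of how these savings compound–all numbers hypothetical, including the raw $/TB price; real savings depend entirely on your data–here is the effective price per provisioned TB once thin provisioning and data reduction are applied:

```python
# Hypothetical back-of-the-envelope model: effective block storage cost after
# thin provisioning and data reduction. Illustrative numbers only, not real
# cloud or Pure Storage pricing.

RAW_PRICE_PER_TB_MONTH = 100.0  # hypothetical raw block price, $/TB-month

def effective_price_per_tb(thin_savings: float, reduction_ratio: float) -> float:
    """Effective $/TB-month per provisioned TB, given:
    thin_savings    - fraction of provisioned space never actually written (e.g. 0.30)
    reduction_ratio - dedupe/compression ratio on written data (e.g. 3.0 for 3:1)
    """
    written_fraction = 1.0 - thin_savings                   # only written data needs backing
    physical_fraction = written_fraction / reduction_ratio  # then reduced on media
    return RAW_PRICE_PER_TB_MONTH * physical_fraction

baseline = RAW_PRICE_PER_TB_MONTH              # pay for every provisioned TB
optimized = effective_price_per_tb(0.30, 3.0)  # 30% thin savings, 3:1 reduction

print(f"baseline:  ${baseline:.2f}/TB-month")
print(f"optimized: ${optimized:.2f}/TB-month")
print(f"savings:   {100 * (1 - optimized / baseline):.0f}%")
```

Note how the two effects multiply: a 3:1 reduction ratio alone takes the bill to a third, and thin provisioning compounds on top of that.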
We introduced these features into FlashArray to make flash cost effective. We offer these same features in Pure Cloud Block Store to make block storage in the cloud cost effective. This is non-trivial cost optimization. Importantly too: this is not a tradeoff. You often will gain features by moving to our storage: replication, instant snapshots, durability, availability, automatic “safemode” protection, hybrid replication, etc. Cost optimization without tradeoff. This is what we hope for when looking at our family budget–this is what we offer in the public cloud for block storage.
This results in success for everyone involved:
- You reduce the money you spend on storage significantly
- You gain features and enterprise storage abilities
- The storage you use just gets better (we continue to improve CBS, so each upgrade improves your storage for all of your applications)
- You free up money to spend elsewhere
- You can remove licensed features in your applications that cost money and CPU, and rely on our platform instead (we don’t charge for features), which of course also saves costs.
Hmm, you say your cloud sales team won’t like this? Actually–they will! Spending the bulk of your bill on storage is not sticky for them–neither is compute. They would prefer you invest that money in services or higher-level offerings–not infrastructure. And frankly, wouldn’t you as well? In a world where budgets are not increasing, this is a way to cut a big portion of your bill down and spend it far more wisely.
Furthermore, our product is interfaced with and consumed in the same fashion across multiple clouds–allowing for consistency in hybrid or multi-cloud environments, as well as the resultant data mobility. While the product underneath differs–Azure, AWS, on-premises–to the application or infrastructure consumer it is the same.
Want to learn more? Check it out: https://www.purestorage.com/solutions/cloud/hybrid.html