Pure Storage and Equinix Metal-as-a-Service

A few weeks back Pure Storage and Equinix introduced a joint offering creatively titled Pure Storage on Equinix Metal.

You can view more details on the offering here.

https://metal.equinix.com/solutions/pure-storage/

In short, you can deploy compute on demand with automated OS build processes, and billing works much like the public cloud:

  1. Choose a datacenter.
  2. Pick a server size.
  3. Select a compute type.
  4. Specify how many servers you want and their names.

Then there are some optional parameters, like custom SSH keys to be loaded automatically or specific network assignments. All of this, of course, can be automated through their API:

https://metal.equinix.com/developers/api/
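
As a rough illustration, here is a minimal sketch of provisioning a server through that API with Python. The project ID, plan, metro, and OS slugs below are placeholders rather than defaults; the valid values for your account come from the API’s plans, metros, and operating-systems endpoints.

    # Minimal sketch: provision an Equinix Metal server via the REST API.
    # The project ID, plan, metro, and OS slugs are placeholders.
    import os
    import requests

    token = os.environ["METAL_AUTH_TOKEN"]  # API token from the Equinix Metal console
    project_id = "your-project-uuid"        # hypothetical project ID

    resp = requests.post(
        f"https://api.equinix.com/metal/v1/projects/{project_id}/devices",
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
        json={
            "hostname": "esx-01",
            "plan": "c3.small.x86",                 # server size/type
            "metro": "da",                          # datacenter location
            "operating_system": "vmware_esxi_7_0",  # automated OS build
        },
    )
    resp.raise_for_status()
    print("Provisioned device:", resp.json()["id"])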

There are also a significant number of automation kits (Terraform being a common one) available in their GitHub repos:

https://github.com/packet-labs/ (Equinix Metal used to be called Packet)

Use Cases

I think there is a lot of opportunity here. Not everything can and will go into the public cloud, but that doesn’t make the on-demand model any less attractive. Being able to stand up bare metal only when you need it, and most importantly, only pay for it when you need it, makes a lot of sense. Especially for environments whose workload needs might change, predictably or unpredictably:

  • Disaster Recovery. Keep a small pilot-light footprint, running a management host with some services (like a vCenter VM and Site Recovery Manager), then deploy and expand compute upon failover. Reduce the footprint if/when you fail back.
  • Remote Sites. Use their global footprint to bring services closer to end users.
  • VDI. VDI demand goes up and down not only over time, but over the course of a day and by region; for locality, bring the VDI closer to the end user by deploying multi-site.
  • Upgrades/Lifecycle. Instead of buying new compute and sunsetting it, you don’t even need to upgrade. Just deploy new servers and jettison the old ones. This is particularly useful for virtualized environments where there is little to no state in the “OS”, like ESXi.
  • No dealing with HW. This is basically the point: you get full access to and control of the hardware, but the power, cabling, and switching infrastructure are dealt with by Equinix. You consume the end result.

The use cases grow from there. I think one of the especially interesting deployments here is vSphere. On-demand, movable, rebalance-capable compute is something vSphere does well, and matching it with on-demand physical resources is quite compelling. The benefit here, over something like VMware Cloud on AWS for instance, is that it isn’t locked down. You can use all of the integrations and tools you are used to. Things that work in your own datacenter will work here too.

One thing that makes this all work much better is external storage. If you need to copy/restore/rebalance data every time you add or remove compute, the process is greatly slowed, really reducing the usefulness of the more elastic use cases. And for situations like disaster recovery, having the storage inside the compute requires the compute to be sitting there waiting, and the “pilot light” needs to be large enough to offer the capacity needed to store the full infrastructure. And when it expands, how will the other servers share that data?

This is where the FlashArray comes in. FlashArray is offered with Pure as-a-Service licensing (you pay for what you write) through Equinix and literally sits in the Equinix Metal datacenters. You can replicate from your own datacenter or another Equinix datacenter, or, if desired, replicate from a FlashArray via CloudSnap to object storage like AWS S3 or Azure Blob (Equinix Metal also offers cloud access via ExpressRoute or DirectConnect) and restore to the Pure as-a-Service FlashArray.
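
As a hedged sketch of what the replication piece can look like in code: with Pure’s purestorage Python REST client (the 1.x API), you can snapshot a protection group and push it to its replication target on demand. The array address, token, and protection group name below are hypothetical, and the protection group is assumed to already have the Equinix-hosted FlashArray configured as a target; verify the method and parameter names against the client version you use.

    # Sketch: trigger replication of a protection group snapshot on demand.
    # Assumes the "purestorage" 1.x REST client and a protection group
    # ("pg-dr", hypothetical) already targeting the Equinix-hosted FlashArray.
    import purestorage

    array = purestorage.FlashArray(
        "flasharray.example.com",    # placeholder on-premises array address
        api_token="YOUR-API-TOKEN",  # placeholder API token
    )

    # Take a snapshot of the protection group and replicate it immediately
    # to its configured target array.
    array.create_pgroup_snapshot("pg-dr", replicate_now=True)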

VMware on Pure

This allows you to take advantage of all of the Pure Storage VMware features, like our vSphere Plugin, vVols, SRM, vRealize, VAAI, Tanzu, etc., all with the on-demand compute of Equinix Metal.

From a network perspective, Equinix offers a few modes:

  • Layer 3: externally routable addresses, with no VLANs.
  • Layer 2: internally routable addresses; when there is more than one network in use, VLAN tagging is used.
  • Hybrid: support for both layer 2 and layer 3 at once.

Equinix Metal servers generally get deployed with 2 physical NICs (the larger compute instances come with 4). Each of the above options also has two flavors (except layer 3, which is bonded only):

  1. Bonded. LACP is used at the switch, both NICs are configured identically, and you must use LAGs in the OS.
  2. Unbonded. The NICs are not engaged in LACP, and no LAGs are required.

Currently, if the NICs are unbonded you cannot assign the same VLAN to both NICs, so for network redundancy you want to go the bonded route, and therefore you must use LAGs. For vSphere, this means vSphere Distributed Switches with a LAG configured across both NICs.
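
For illustration, here is a hedged pyVmomi sketch of adding an LACP LAG to an existing VDS. The vCenter address, credentials, switch name, and LAG name are all placeholders, and the VDS is assumed to already exist with enhanced (multi-LAG) LACP support enabled.

    # Sketch: add a two-uplink, active-mode LACP LAG to an existing VDS via pyVmomi.
    # vCenter address, credentials, switch name, and LAG name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)

    # Find the distributed switch by name ("dvs-metal" is hypothetical).
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvs-metal")
    view.Destroy()

    # Define an active-mode LAG with two uplinks to match the bonded NIC pair.
    lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig(
        name="lag1", mode="active", uplinkNum=2)
    spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
        lacpGroupConfig=lag, operation="add")

    # Returns a vCenter task; the LAG shows up on the VDS once it completes.
    dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[spec])
    Disconnect(si)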

So the process is basically: set up non-redundant standard switches, then migrate over to a LAG-based VDS. Now certainly there are, um, opinions about LACP in vSphere, but I will say it works quite well these days and is fairly easy to configure. There are benefits to port binding instead, though, and vSphere in general has great support for using multiple NICs at once for redundancy without requiring LACP/LAGs. In particular, things like VCF requirements don’t permit LAGs. But if you look at the Equinix Metal public roadmap, support for sharing a VLAN across unbonded NICs is coming quite soon:

https://metal.canny.io/networking-features/p/private-cloud-layer-2-networking-for-vcf

So more flexibility in network configuration is coming!

For now, check out my documentation on setting up Equinix vSphere environments:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/User_Guides_for_VMware_Solutions/Pure_Storage_on_Equinix_Metal

Check out some walkthrough video guides here in this playlist:
