In the previous post in this series I explored how to run a VVol-based test failover of a virtual machine. Now I will walk through running an actual failover.
There are two types of failovers: a planned migration (everything is up and running) and a disaster recovery failover (part or all of the original site is down).
For this post, I will start with running a planned migration.
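At a high level, a planned migration with the PowerCLI SPBM replication cmdlets looks something like the sketch below. This is a minimal outline, not the full procedure from the post; the VM name, replication group filtering, and resource pool name are placeholders for your environment.

```powershell
# Find the source replication group protecting the VM (names are placeholders)
$vm = Get-VM -Name "SQL-VM-01"
$sourceGroup = Get-SpbmReplicationGroup -VM $vm

# For a planned migration the source site is still up, so cleanly
# prepare the group for failover first (this syncs and quiesces it)
Start-SpbmReplicationPrepareFailover -ReplicationGroup $sourceGroup

# On the recovery site, fail over the target replication group.
# This returns the VMX paths of the recovered VMs, ready to register.
$targetGroup = Get-SpbmReplicationGroup | Where-Object {$_.State -eq "Target"}
$vmxPaths = Start-SpbmReplicationFailover -ReplicationGroup $targetGroup

# Register and power on each recovered VM
foreach ($path in $vmxPaths) {
    New-VM -VMFilePath $path -ResourcePool (Get-ResourcePool -Name "Recovery") | Start-VM
}
```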
Continue reading PowerCLI and VVols Part VIII: Running a Failover–Planned Migration
Another post in my series on VVols and PowerCLI; for previous posts, see these:
This post will be about managing one-off snapshots with VVols on the FlashArray with PowerCLI.
One of the still semi-valid reasons I have seen DBAs give for “I don’t want to virtualize because…” is that they have simple snapshot/recovery scripts for their physical servers that allow them to quickly restore DBs from snapshots. Doing this on VMFS requires A LOT of coordination with the VMware layer.
So they tell the VMware team–“okay I will virtualize but I want RDMs”. Well the VMware team says “well we’d rather die”
…and around in circles we go…
VVols provide this benefit (easy array-based snapshots) while keeping the benefits of VMware features (vMotion, Storage vMotion, cloning, etc.) without the downsides of RDMs.
So let’s walk through that process.
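One nice consequence of VVols is that a standard VMware managed snapshot is offloaded to the array, so even plain PowerCLI snapshot cmdlets end up creating array snapshots. A minimal sketch (the VM and snapshot names are placeholders):

```powershell
# Take a one-off snapshot of a VVol VM; on VVols this becomes an
# array snapshot of the data VVols, not a VMFS delta file
$vm = Get-VM -Name "Oracle-VM-01"
New-Snapshot -VM $vm -Name "pre-patch"

# Revert later if needed
$snap = Get-Snapshot -VM $vm -Name "pre-patch"
Set-VM -VM $vm -Snapshot $snap -Confirm:$false

# Clean up when done
Remove-Snapshot -Snapshot $snap -Confirm:$false
```

The post itself goes further than this, working with individual VVol snapshots directly on the FlashArray; this sketch only shows the VMware-side entry point.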
Continue reading PowerCLI and VVols Part V: Array Snapshots and VVols
About four years ago, we (Pure Storage) released support for asynchronous replication with Site Recovery Manager via our storage replication adapter. In late 2017, we released ActiveCluster, our support for active-active synchronous replication.
Until SRM 6.1, SRM supported only active-passive replication, so a test failover or a failover would take a copy of the source VMFS (or RDM) on the target array and present it, rescan the ESXi environment, resignature the datastore(s), and then register and power on the VMs in accordance with the SRM recovery plan.
The downside to this, of course, is that the failover is disruptive, even if the failover was not prompted by an actual disaster. But this is the nature of active-passive replication.
In SRM 6.1, SRM introduced support for active-active replication. And because this type of replication is fundamentally different, SRM also changed how it behaves to take advantage of what active-active replication offers. Continue reading Site Recovery Manager and ActiveCluster Part I: Pre-SRM Configuration
vSphere 6.7 core storage “what’s new” series:
A while back I wrote a blog post about LUN ID addressing and ESXi, which you can find here:
ESXi and the Missing LUNs: 256 or Higher
In short, VMware supported only one mechanism of LUN ID addressing, called “peripheral”. The SCSI Architecture Model (SAM) generally encourages a different mechanism called “flat”, especially for larger LUN IDs (256 and above). If a storage array used flat addressing, ESXi would not see LUNs from that target. This is often why ESXi could not see LUN IDs greater than 255: arrays would use flat addressing for LUN IDs at or above that number.
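The difference comes down to how the first two bytes of the SCSI LUN field are encoded. An illustrative sketch of the two SAM addressing methods (the function names are mine, and this shows only the single-level case):

```python
def encode_peripheral(lun: int) -> bytes:
    """Peripheral addressing (SAM method 00): byte 0 carries the bus
    (0 here), byte 1 carries the LUN -- capping LUN IDs at 255."""
    if not 0 <= lun <= 0xFF:
        raise ValueError("peripheral addressing supports LUN IDs 0-255")
    return bytes([0x00, lun])

def encode_flat(lun: int) -> bytes:
    """Flat addressing (SAM method 01): the top two bits of byte 0 are
    01, and the remaining 14 bits carry the LUN, allowing up to 16383."""
    if not 0 <= lun <= 0x3FFF:
        raise ValueError("flat addressing supports LUN IDs 0-16383")
    return bytes([0x40 | (lun >> 8), lun & 0xFF])

print(encode_peripheral(255).hex())  # 00ff
print(encode_flat(256).hex())        # 4100
```

This is why LUN ID 256 simply cannot be expressed in single-level peripheral addressing, and why an array that switches to flat encoding at that point was invisible to older ESXi versions.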
ESXi 6.7 adds support for flat addressing. Continue reading What’s New in Core Storage in vSphere 6.7 Part VI: Flat LUN ID Addressing Support
There are a variety of ways to assign and set an SPBM policy on a VM. I recently put out a workflow package for vRO covering everything VVols and Pure:
vRealize Orchestrator VVol Workflow Package
I also specifically blogged about assigning a policy to a VM with vRO:
Assigning a VVol VM Storage Policy with vRO
How do you do this with PowerCLI?
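In short, the SPBM cmdlets handle it. A minimal sketch (the policy and VM names are placeholders for your environment):

```powershell
# Find the VVol storage policy to apply (name is a placeholder)
$policy = Get-SpbmStoragePolicy -Name "FlashArray-Snapshot-Policy"
$vm = Get-VM -Name "App-VM-01"

# Apply the policy to the VM home (the config VVol)...
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy

# ...and to each virtual disk (the data VVols)
Get-SpbmEntityConfiguration (Get-HardDisk -VM $vm) |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
```

The post walks through this in more detail, including checking compliance afterwards.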
Continue reading PowerCLI and VVols Part I: Assigning a SPBM Policy
An increasingly common use case for Active-Active replication in vSphere environments is vSphere Metro Storage Cluster (vMSC) which I wrote about in this paper recently:
This paper overviews how a stretched vSphere cluster interacts with ActiveCluster, the active-active replication we offer on the FlashArray. Continue reading Tech Preview: vCenter Site Recovery Manager with ActiveCluster
I recently did a VMUG webcast on VVols, and there were a ton of questions; unfortunately, I ran out of time and could not answer a lot of them. I felt bad about that, so I decided to follow up. I was going to email the people who asked, but figured it was simpler, and more useful to others, to put them all here.
See the VMUG VVol webinar here:
You can get my slides here.
Would VVols replace the requirements for RDM’s?
Answer: Maybe. It depends on why you are using RDMs. If it is simply to allow sharing or overwriting between physical and virtual, VVols will replace RDMs. If it is to make it easier to restore from array snapshots, VVols will replace them. If it is for Microsoft Failover Clustering, VVols are not yet supported for that, so you still need RDMs, though VMware is supposed to be adding support for this in the next release. See this post for more info. Continue reading VVol VMUG Webinar Q&A Follow Up
Ok, finally! I had this finished a while ago, but I wrote it using our version 2.0 plugin, so I couldn’t post it until the plugin was certified by VMware. That plugin version is now certified and posted on the VMware Solution Exchange (see my post here).
Moving forward, we will likely post new workflows in various packages (I am working on an ActiveCluster one now) instead of including them directly in our plugin. This will make it easier to update and add to them without having to generate an entire new plugin version.
So first, download and install the v2 FlashArray plugin for vRO, and then install my VVol workflow package from the VMware Solutions Exchange:
Continue reading vRealize Orchestrator VVol Workflow Package
We have published the FlashArray plugin 2.0 for vRealize Orchestrator on the VMware Solutions Exchange! Download it here:
We put a lot of work into this one and I am quite excited for customers and partners to start using it.
There are three primary enhancements:
- New workflows
- New actions
- New scriptable objects
Continue reading FlashArray Plugin 2.0 for vRealize Orchestrator
Storage capacity reporting seems like a pretty straightforward topic: how much storage am I using? But when you introduce multiple levels of thin provisioning AND data reduction, all usage is not equal (does it compress well? does it dedupe well? is it zeroes?).
This multi-part series will break it down in the following sections:
- VMFS and thin virtual disks
- VMFS and thick virtual disks
- Thoughts on VMFS Capacity Reporting
- VVols and capacity reporting
- VVols and UNMAP
Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.
NOTE: Examples in this series are given from a FlashArray perspective, so mileage may vary depending on the type of array you have. The VMFS layer and above, though, are the same for all. This is the benefit of VMFS: it abstracts the physical layer. This is also the downside, as I will describe in these posts. Continue reading VMware Capacity Reporting Part IV: VVol Capacity Reporting
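To see why “how much am I using?” has no single answer, here is some illustrative arithmetic. Every number below is made up, including the 3:1 reduction ratio; real ratios vary wildly by workload:

```python
# Illustrative only: made-up numbers showing why "usage" differs by layer
provisioned_gb = 1024     # thin virtual disk size as the guest sees it
guest_written_gb = 400    # blocks the guest has actually written
zeroed_gb = 50            # portion of those written blocks that are zeroes

# The array discards zeroes, then dedupes/compresses what remains.
# A 3:1 reduction ratio is an arbitrary example value.
reduction_ratio = 3.0
array_physical_gb = (guest_written_gb - zeroed_gb) / reduction_ratio

print(f"Guest/VMFS view: {guest_written_gb} GB used of {provisioned_gb} GB")
print(f"Array view:      {array_physical_gb:.1f} GB physically consumed")
```

The guest thinks it is using 400 GB, VMFS may report something different again depending on thin vs. thick virtual disks, and the array is physically storing only a fraction of that, which is exactly the reporting gap this series digs into.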