An increasingly common use case for Active-Active replication in vSphere environments is vSphere Metro Storage Cluster (vMSC), which I wrote about recently in this paper:
That paper overviews how a stretched vSphere cluster interacts with the active-active replication we offer on the FlashArray, called ActiveCluster. Continue reading “Tech Preview: vCenter Site Recovery Manager with ActiveCluster”
So in a blog series that I started a few weeks back (still working on finishing it), I wrote about managing snapshots and resignaturing of VMFS volumes. One of the posts was dedicated to why I would choose resignaturing over force mounting almost all of the time.
An obvious follow-up question is: well, when would I want to force mount? There is one situation where I think it is a decent option: a failover where the recovery site is the same as the production site in terms of compute/vCenter, and only the storage fails over to another array. This is a scenario I see more and more often as network pipes get bigger.
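For reference, both paths live in the `esxcli storage vmfs snapshot` namespace on the ESXi host. A rough sketch of the two options (the volume label `Datastore1` is just a placeholder):

```shell
# List VMFS volumes that ESXi has detected as unresolved snapshots/replicas
esxcli storage vmfs snapshot list

# Option 1: resignature the volume (it gets a new UUID and a
# "snap-" prefixed datastore name); my usual recommendation
esxcli storage vmfs snapshot resignature -l Datastore1

# Option 2: force mount it with its original UUID intact
# (-n mounts it non-persistently, i.e. it will not survive a reboot)
esxcli storage vmfs snapshot mount -l Datastore1 -n
```

Force mounting with the original signature only works when the original volume is not simultaneously presented to the same hosts, which is exactly why the same-site, storage-only failover case is where it fits.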
Continue reading “Semi-transparent failover with VMFS and Active/Passive Replication”
Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into the exact design changes, so this is what I observed and my educated guess as to how it was done. Look for more details as time goes on.
***This blog post turned out much longer than I expected; it probably should have been a two-parter, so I apologize for the length.***
As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handle XCOPY. XCOPY, as you may be aware, is the SCSI command ESXi uses to clone, Storage vMotion, or deploy-from-template VMs on a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered around three things:
- Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e. how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different).
- Allow ESXi to query the Maximum Segment Length reported by the Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells the initiator the maximum transfer size the array supports, which would spare the end user the hassle of manual transfer size changes.
- Allow for thin virtual disks to leverage a larger transfer size than 1 MB.
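To put those transfer sizes in perspective, here is a quick back-of-the-envelope sketch (the 40 GiB disk size is purely an illustrative assumption) of how many XCOPY commands are needed to describe a full clone at different per-command transfer sizes:

```shell
# How many XCOPY commands does it take to describe a copy of a
# given size at a given per-command transfer size? (ceiling division)
xcopy_commands() {
  vmdk_mib=$1
  xfer_mib=$2
  echo $(( (vmdk_mib + xfer_mib - 1) / xfer_mib ))
}

# A hypothetical 40 GiB (40960 MiB) virtual disk:
xcopy_commands 40960 1     # 1 MiB (thin disk default) -> 40960 commands
xcopy_commands 40960 16    # 16 MiB current maximum    -> 2560 commands
xcopy_commands 40960 256   # ODX-style 256 MiB         -> 160 commands
```

Fewer, larger commands mean fewer round trips between ESXi and the array for the same copy operation, which is where the appeal of a bigger transfer size comes from.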
The first two are currently supported by VMware in a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”
I have been working with VMware’s vCenter Site Recovery Manager since the tail end of the 1.x release, and I have to say this is the most excited I have been about a Storage Replication Adapter release in as long as I can remember. Since I started with Pure in late April 2014, I have been working with our development team and product management to design and shape this initial release of the Pure Storage SRA. It has been a blast: a really great team that does some really amazing work! It is now officially approved and posted on VMware’s compatibility guide and SRA download site:
Continue reading “Pure Storage FlashArray SRA for Site Recovery Manager”
As I wrote about in a recent post, the 5.1 version of the SRDF Storage Replication Adapter was updated in a service release. Similar fixes and a few other changes have also been added to SRDF SRA version 5.5. If you are running VMware Site Recovery Manager 5.1 or 5.5, this is the SRA you should be using. SRM 5.0 users can only use the 5.1 SRA; while the 5.1 SRA supports both SRM 5.0 and 5.1, I recommend SRM 5.1 users use the 5.5 SRA.
So what’s new in the latest SR for SRDF SRA (184.108.40.206)?
Continue reading “Service release SRDF SRA 220.127.116.11 for VMware Site Recovery Manager”
Quick overview: the SRDF Storage Replication Adapter for VMware SRM TechBook is an in-depth implementation guide focused on how to install, configure, and manage SRDF with VMware’s vCenter Site Recovery Manager. It includes an overview of SRDF and the tools involved, and covers how to perform test recoveries, migrations, and disaster recovery failovers.
My last hurrah with this TechBook! Since I have moved on to a new (and exciting!) role within the Open Innovation Lab in the EMC Office of the CTO, it is time for me to pass the torch of the SRM TechBook.
Anyway, you can find the updated TechBook here:
Continue reading “Updated SRDF Storage Replication Adapter for VMware SRM 5.5 TechBook”
Late last week I posted a summary blog on the latest SRDF Storage Replication Adapter for VMware Site Recovery Manager here:
I detailed the new features of the 5.5 release and briefly mentioned the latest release of the Virtual Storage Integrator Symmetrix SRA Utilities, which helps users configure the SRDF SRA. On 10/25, we posted the latest release of the SRA Utilities, version 5.6.
Version 5.6 of the SRA Utilities has been enhanced in tandem with the SRDF SRA to support the new features the SRA has to offer. Most of these enhancements relate to the masking control functionality newly supported by the SRA.
Continue reading “Virtual Storage Integrator: Symmetrix SRA Utilities 5.6”
Today EMC posted the updated SRDF Storage Replication Adapter (SRA) 5.5 for Symmetrix VMAX arrays to their website:
It will be on VMware’s site shortly:
This adapter includes support for VMware vCenter Site Recovery Manager 5.5 (as well as “legacy” support for SRM 5.1).
Continue reading “Updated SRDF Storage Replication Adapter released for SRM 5.5”
Migrating a virtual machine that uses 100% virtual disks is a simple task thanks to VMware Storage vMotion, but migrating a VM that uses Raw Device Mappings from one array to another is somewhat trickier. There are options to convert an RDM into a virtual disk, but that might not be feasible for applications that still require RDMs. Other options are host-based or in-guest mechanisms that copy the data from one device to another, but those can be complex and may require special drivers or even downtime to fully finish the transition. To solve this issue for physical hosts, EMC introduced a Symmetrix feature called Federated Live Migration.
Federated Live Migration (FLM) allows the migration of a device from one Symmetrix array to another without any downtime to the host, and it does not affect the SCSI inquiry information of the original source device. Therefore, even though the device now resides on a completely different Symmetrix array, the host is none the wiser. FLM leverages Open Replicator functionality to migrate the data, so it has a SAN requirement: the source array must be zoned to the target array. An FLM setup looks like the image below:
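One way to see why the host is none the wiser: the NAA identifier ESXi derives from the device’s SCSI inquiry data is exactly what FLM preserves, so it can be checked from the host before and after the migration and should be identical. A quick sketch (the `naa.` value below is just a placeholder):

```shell
# Show the details ESXi holds for one device; with FLM the NAA
# identifier remains the same after the device moves to the new array
esxcli storage core device list -d naa.60000970000192601234533030334545
```

Because that identifier is unchanged, the RDM pointer files in the VMs keep resolving to the “same” device throughout the migration.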
Continue reading “Migrating a Raw Device Mapping with Federated Live Migration”