As you may or may not be aware, I am the sole author of the SRDF Storage Replication Adapter for VMware Site Recovery Manager TechBook. For those of you who haven’t used it or aren’t aware of EMC TechBooks, it is an implementation guide for SRDF with SRM: best practices, how-tos, hints, and more.
EMC offers a variety of tools to manage and enhance your virtual or physical environments–some free, some licensed. When you think of EMC tools for VMware, the one that usually comes to mind is the free Virtual Storage Integrator, more commonly referred to as VSI.
VSI is a great tool that continues to improve with each version; it allows you to provision storage, manage pathing, configure SRM, and more. The one thing it does not have is a way to automate these tasks through an API or CLI. This is where another product comes in–one that many do not associate with VMware. The EMC Storage Integrator (ESI) is often seen as the Microsoft version of VSI–but that isn’t really true at all. While it might have started out that way, and does indeed support Hyper-V and has a ton of Microsoft-specific features, it is really the heterogeneous storage integrator. Importantly, it has a very handy and powerful feature: PowerShell cmdlets.
Today I’ll be filling in for Cody on his blog; I couldn’t find time to run one of my own if I tried, and he has been gracious enough to let me post when topics come up. My name is Drew Tonnesen (@drewtonnesen) and I’m a systems engineer in the ESD organization at EMC (basically the engineering side of the house). I work on the integration of virtualization technologies (mostly VMware) with the Symmetrix platform, but also focus on VPLEX and RecoverPoint. Cody and I have worked together for many years now, and though we get moved around and re-organized a bit, we both continue to work on VMware/Symmetrix integration like our TechBook (yes, shameless plug: http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf).
So since I mentioned I work on Symmetrix, VPLEX, and RecoverPoint I thought what better post than one on all those – and Oracle Extended RAC to boot. I’ve recently updated my white paper on this topic and it can be found here:
A bit of a long one here. At some point this might turn into a white paper (update: it is now). But for now…
Check out my post on the Pure Storage integration with Log Insight here!
UPDATE: We have released a content pack that automatically configures dashboards and fields for the VMAX, it will save you a lot of work and the pack is free! Read about it here:
And updated here:
Earlier this summer VMware announced a new product called vCenter Log Insight which just went GA today. You can download it and try it out from here:
One of the documents that my colleague Drew Tonnesen (@drewtonnesen) and I maintain is a white paper that explains the how, what, why, when, etc. of using VMware’s VAAI block primitives (WRITE SAME, XCOPY, ATS and UNMAP) with Symmetrix VMAX storage systems. We update this document around twice a year or as needed to take into account new Enginuity releases or VMware releases. We just posted the latest update this weekend on EMC’s website:
As many are probably aware, RecoverPoint 4.0 recently introduced support for point-in-time test recovery and recovery with VMware vCenter Site Recovery Manager. In conjunction with the RP SRA and the Virtual Storage Integrator (VSI), users can select a PiT in the past instead of being forced to use the latest copy.
Since this came out (and many times prior) I have been asked whether we can do this with the SRDF SRA and TimeFinder along with VMware SRM. The answer is yes! The process, of course, is somewhat different. This question almost always concerns test recovery, as most users prefer the most up-to-date image when they actually fail over. So this post will focus on test recovery. Continue reading “Point-in-time test recovery with SRDF and VMware SRM”
Migrating a virtual machine that uses 100% virtual disks is a simple task thanks to VMware Storage vMotion, but migrating a VM that uses Raw Device Mappings (RDMs) from one array to another is somewhat trickier. There are options to convert an RDM into a virtual disk, but that might not be feasible for applications that still require RDMs. Other options are host-based or in-guest mechanisms that copy the data from one device to another; those can be complex and may require special drivers, or even downtime, to fully finish the transition. To solve this issue for physical hosts, EMC introduced a Symmetrix feature called Federated Live Migration.
Federated Live Migration (FLM) allows the migration of a device from one Symmetrix array to another without any downtime on the host, and it does not affect the SCSI inquiry information of the original source device. Therefore, even though the device now resides on a completely different Symmetrix array, the host is none the wiser. FLM leverages Open Replicator functionality to migrate the data, so it has a SAN requirement: the source array must be zoned to the target array. An FLM setup looks like the image below:
One of the products, or perhaps rather solutions, that I work with a lot is the integration of Symmetrix Remote Data Facility (SRDF) with VMware’s vCenter Site Recovery Manager. For some shameless self-promotion (I suppose I can probably drop that phrase when writing on this blog, because by definition a blog is inherently self-promotion, but I digress), the implementation guide I write can be found here:
First post! As I am fooling around with the templates and colors and such and getting used to blogging, I figured I would kick things off with something simple, and one of my favorite unheralded new features of Solutions Enabler (SYMCLI) 7.6, which was released at EMC World 2013: “quick meta creation”.
****UPDATE: Apparently this was enabled long before SE 7.6, SE 7.3 at least actually, so you probably already have this feature, thanks to Jason Moreland for pointing this out****
As anyone familiar with the VMAX is most likely aware, Symmetrix logical devices have a size limit of 240 GB. In most virtual environments, clustered file systems such as VMFS usually need to be much bigger than that. The solution on the VMAX array is to create what we call a metavolume (which I will refer to as a meta henceforth, because I am a lazy typist). This is a simple logical association of multiple VMAX devices that is presented to the host as one larger device, which allows the host-visible size to be VERY large (up to 255 members x 240 GB each–you do the math). The member devices can be “connected” together either via concatenation or via striping.
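If you’d rather not do the math yourself, the arithmetic above can be sketched in a few lines of Python (the function name and structure are my own, purely for illustration; they are not part of SYMCLI or any EMC API):

```python
# Illustration of the VMAX metavolume size arithmetic described above:
# a meta of up to 255 members, each member at most 240 GB.

MAX_MEMBER_SIZE_GB = 240   # size limit of a single Symmetrix logical device
MAX_MEMBERS = 255          # maximum members in a metavolume

def max_meta_size_gb(members: int = MAX_MEMBERS,
                     member_size_gb: int = MAX_MEMBER_SIZE_GB) -> int:
    """Return the largest host-visible size (in GB) a meta can present."""
    if not 1 <= members <= MAX_MEMBERS:
        raise ValueError("a meta can have between 1 and 255 members")
    return members * member_size_gb

print(max_meta_size_gb())                    # 61200 GB at the maximum
print(round(max_meta_size_gb() / 1024, 1))   # roughly 59.8 TB
```

So a fully populated meta tops out at 255 x 240 GB = 61,200 GB, or just shy of 60 TB presented to the host.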
Well of course this is old news, why is this the least bit interesting?