Recently a partner/customer of mine was migrating a lot of SAP data from one VMAX to another and ran into an issue they weren't sure how to solve, or at least weren't sure of the best way to solve. They had a ton of data on the VMAX and more than a few TimeFinder/VP Snap point-in-time copies of each SAP volume that they used for testing, recovery, or backup.
For those of you unfamiliar with VP Snap, it is a relatively new method of local replication on the VMAX (introduced with Enginuity 5876) that leverages the space-efficiency benefits of TimeFinder/Snap while also offering the configuration flexibility of TimeFinder/Clone.
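As a quick illustration, VP Snap sessions are driven through the TimeFinder/Clone interface using the VSE (virtual space efficient) option. A minimal sketch follows; the SID and device group name are hypothetical, and the exact flags should be checked against the Solutions Enabler TimeFinder documentation:

```shell
# Create and then activate a space-efficient VP Snap point-in-time copy
# of the devices in a (made-up) device group "sap_dg" on array 1234.
# The -vse option is what makes this a VP Snap rather than a full clone.
symclone -sid 1234 -g sap_dg create -vse
symclone -sid 1234 -g sap_dg activate
```

Once activated, the target devices present a usable point-in-time image while consuming pool space only for changed tracks.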
Continue reading Migrating TimeFinder/VP Snap PiT copies to another VMAX
On September 9th EMC released a new SR for Enginuity, 5876.251. For the most part this is a variety of fixes and minor enhancements, but there is one new hardware feature of note: VMAX has introduced support for 16 Gb FC I/O Modules. So if you have some hosts that are using 16 Gb HBAs and want to make full use of them, you are gonna want this Enginuity upgrade. Of course you will still have to purchase new front-end ports for the VMAX, as the current FC ports are not the correct hardware to support 16 Gb.
Continue reading New VMAX Enginuity Release 5876.251 and Solutions Enabler 7.6.1
This is a topic that I get asked about a lot, and a recent internal email thread prompted me to write a post about it. On a Symmetrix array, if you want a volume larger than 240 GB you need to create a metavolume. When creating a metavolume you have two configuration choices: concatenated or striped. Striped metavolumes have many benefits over concatenated ones (all of them performance-related) but one disadvantage: due to their nature, striped metavolumes are harder to expand. Until a few years ago thin striped metas couldn't even be expanded online, so the decision was easy: do you think you will need to expand or not? With Enginuity 5875 and the corresponding Solutions Enabler release, online expansion of striped metavolumes became possible.
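To sketch what a protected online expansion looks like: new members are added to the striped meta while a BCV meta preserves the data as it is restriped. The SID, device IDs, and BCV head below are all hypothetical; consult the Solutions Enabler array controls documentation for the exact syntax on your release:

```shell
# Add two hypothetical member devices (0F0 and 0F1) to striped meta 0A0.
# protect_data=TRUE with a BCV meta (head 0B0) keeps the existing data
# intact while it is restriped across the enlarged member set.
symconfigure -sid 1234 -cmd \
  "add dev 0F0:0F1 to meta 0A0, protect_data=TRUE, bcv_meta_head=0B0;" commit
```

The BCV meta must match the original meta's configuration, which is part of why striped expansion is fiddlier than concatenated expansion.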
Continue reading Expanding a Symmetrix Striped Metavolume
EMC offers a variety of tools to manage or enhance your virtual or physical environments, some free, some licensed. When thinking of EMC tools for VMware, most people conjure up the free Virtual Storage Integrator, more commonly referred to as VSI.
VSI is a great tool that continues to improve with each version; it allows you to provision storage, manage pathing, configure SRM, etc. The one thing it does not have is a way to automate these tasks through an API or CLI. This is where another product comes in, one that many do not associate with VMware. The EMC Storage Integrator (ESI) is often seen as the Microsoft version of VSI, but that isn't really true at all. While it might have started out that way, and it does indeed support Hyper-V and has a ton of Microsoft-specific features, it is really the heterogeneous storage integrator. Importantly, it has a very handy and powerful feature: PowerShell cmdlets.
Continue reading Scripting ESI PowerShell cmdlets and VMware
As many are probably aware, RecoverPoint 4.0 recently released support for point-in-time test recovery and recovery with VMware vCenter Site Recovery Manager. In conjunction with the RP SRA and the Virtual Storage Integrator (VSI), users can select a PiT in the past instead of being forced to use the latest copy.
Since this came out (and many times prior) I have been asked whether we can do this with the SRDF SRA and TimeFinder along with VMware SRM. The answer is yes! The process, though, is of course somewhat different. This question is almost always about test recovery, since most users conversely prefer up-to-date images when they actually fail over. So this post will focus on test recovery. Continue reading Point-in-time test recovery with SRDF and VMware SRM
Migrating a virtual machine that uses 100% virtual disks is a simple task thanks to VMware Storage vMotion, but migrating a VM that uses Raw Device Mappings from one array to another is somewhat trickier. There are options to convert an RDM into a virtual disk, but that might not be feasible for applications that still require RDMs. Other options are host-based or in-guest mechanisms to copy the data from one device to another; those can be complex, requiring special drivers or possibly even downtime to fully finish the transition. To solve this issue for physical hosts, EMC introduced a Symmetrix feature called Federated Live Migration.
Federated Live Migration (FLM) allows the migration of a device from one Symmetrix array to another without any downtime to the host, and it does not affect the SCSI inquiry information of the original source device. Therefore, even though the device now resides on a completely different Symmetrix array, the host is none the wiser. FLM leverages Open Replicator functionality to migrate the data, so it has some SAN requirements: the source array must be zoned to the target array. An FLM setup looks like the image below:
Continue reading Migrating a Raw Device Mapping with Federated Live Migration
One of the products, or rather solutions, that I work a lot with is the integration of Symmetrix Remote Data Facility (SRDF) with VMware vCenter Site Recovery Manager. For some shameless self-promotion (a phrase I can probably drop when writing on this blog, because by definition a blog is inherently self-promotion, but I digress), the implementation guide I write can be found here:
Continue reading Symmetrix SRDF and VMware vCenter SRM Implementation Checklist
First post! As I am fooling around with the templates and colors and such and getting used to blogging, I figured I would kick things off with something simple: one of my favorite unheralded new features of Solutions Enabler (SYMCLI) 7.6, which was released at EMC World 2013, "quick meta creation".
****UPDATE: Apparently this was enabled long before SE 7.6 (SE 7.3 at least, actually), so you probably already have this feature. Thanks to Jason Moreland for pointing this out!****
As anyone familiar with the VMAX is most likely aware, Symmetrix logical devices have a size limit of 240 GB. In most virtual environments, clustered file systems such as VMFS usually need to be much bigger than that. The solution on the VMAX array is to create what we call a metavolume (which I will refer to as a meta henceforth, because I am a lazy typist). This is a simple logical association of multiple VMAX devices, manipulated to look like one larger device, which allows the size of a device as seen by the host to be VERY large (255 total members possible x 240 GB each; you do the math). These devices can be "connected" together either via concatenation or via striping.
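Since the post leaves the math as an exercise, here it is as a quick shell sketch:

```shell
# Maximum metavolume size: 255 members at 240 GB per member.
max_gb=$((255 * 240))
echo "${max_gb} GB"            # prints "61200 GB"
echo "~$((max_gb / 1024)) TB"  # roughly 59 TB (integer division)
```

So a fully built-out meta tops out at 61,200 GB, just shy of 60 TB.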
Well of course this is old news, why is this the least bit interesting?
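To sketch the shortcut in question: instead of creating the member devices and then forming the meta in separate symconfigure steps, a single create dev command can specify the meta layout directly. The SID, sizes, and pool name below are hypothetical, and the exact attribute names should be verified against the Solutions Enabler documentation for your release:

```shell
# One hypothetical command: create a thin device, form it into a
# striped meta of 240 GB members, and bind it to a thin pool.
symconfigure -sid 1234 -cmd \
  "create dev count=1, size=480 GB, emulation=FBA, config=TDEV,
   meta_member_size=240 GB, meta_config=striped,
   binding to pool=FC_Pool;" commit
```

One command instead of several, which is exactly the kind of small quality-of-life change that goes unheralded in release notes.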
Continue reading First post and a quick SYMCLI shortcut