It’s been a bit, but I’m (Drew Tonnesen, email@example.com) back for another guest spot on Cody’s blog. In August 2013 I wrote about the VMAX Content Pack for VMware’s newly released Log Insight product (v1.0). At the time of release VMware was pushing hard for content packs and rode me pretty ragged for two weeks to produce one. As a result, much of the detail around items like widgets was minimal. That was perfectly acceptable, as VMware was also in the nascent stages of the product and content packs. Now, with the release of Log Insight 1.5, VMware has made some important changes, particularly to the content pack component, which has allowed me to expand our VMAX Content Pack. I’m only going to cover the new features of the content pack and how they relate to Log Insight 1.5; if you want the full monty on Log Insight 1.5, VMware’s lead developer goes into detail here. Another great resource for everything Log Insight is Steve Flanders’ blog: http://sflanders.net/ . Steve does a great job explaining the new features and how to use them, as well as answering any questions.
In my last blog post I wrote about how to provision a new volume from ScaleIO to your VMware environment, so the next logical step is what to do when that volume is completely consumed. Well, you have two options: provision a new volume or expand an existing one. Since the former was covered in my last post, let’s look at the second option.
VMware vSphere has offered the ability to dynamically expand VMFS volumes since, well, vSphere was introduced (version 4.0). VMFS Volume Grow allows ESXi to recognize when a physical device has expanded in capacity and enables an administrator to non-disruptively expand the VMFS volume to take advantage of the extra space without resorting to using messy extents.
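The end-to-end flow (grow the backing ScaleIO volume, then grow the VMFS) can be sketched from the command line. This is a hedged sketch only: the volume name, naa device ID, partition number, sector values, and size are all placeholders, and exact syntax varies by ScaleIO and ESXi release. In practice most admins will simply use the datastore "Increase" wizard in the vSphere Client for the VMFS portion.

```shell
# 1. Expand the backing ScaleIO volume (run against the MDM; the name is a
#    placeholder, and scli sizes are specified in GB in multiples of 8).
scli --modify_volume_capacity --volume_name vmware_vol01 --size_gb 1024

# 2. On the ESXi host, rescan so the larger device size is detected.
esxcli storage core adapter rescan --all

# 3. Resize the VMFS partition to the new end of the device, then grow the
#    filesystem into it (naa ID and sector numbers are placeholders).
partedUtil resize "/vmfs/devices/disks/naa.XXXXXXXX" 1 2048 2147483614
vmkfstools --growfs "/vmfs/devices/disks/naa.XXXXXXXX:1" \
           "/vmfs/devices/disks/naa.XXXXXXXX:1"
```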
I recently posted about adding capacity to a ScaleIO storage pool, so the next logical step is provisioning a new volume. In this post, I am going to cover the straightforward act of creating a new volume from a storage pool, mapping it to a ScaleIO Data Client (SDC), and then presenting it to the VMware cluster.
The first step is to ensure we have enough space to configure a new volume of the desired size. Either the GUI or the CLI will suffice:
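For the CLI route, here is a hedged sketch of the whole sequence: query the pool, create the volume, and map it to an SDC. The protection domain, pool, and volume names, the size, and the SDC IP are all placeholders, and flag spellings can vary slightly between scli releases.

```shell
# Check free capacity in the pool (placeholder domain/pool names).
scli --query_storage_pool --protection_domain_name pd1 --storage_pool_name pool1

# Create a new volume from the pool (scli sizes are in GB, multiples of 8).
scli --add_volume --protection_domain_name pd1 --storage_pool_name pool1 \
     --size_gb 512 --volume_name vmware_vol01

# Map the volume to an SDC so the ESXi host running that SDC can see it.
scli --map_volume_to_sdc --volume_name vmware_vol01 --sdc_ip 192.168.1.11
```

After the mapping, a storage rescan on the ESXi host should surface the new device, ready for VMFS formatting.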
When initially installing/configuring ScaleIO in a VMware environment, the creation of a storage pool and the addition of capacity to it are included in the setup process. Obviously, you don’t want to run the setup process again every time you want to add a storage pool, add capacity, or simply create a new volume; that would be silly. And of course you do not have to, nor should you. So how do you add more capacity without adding additional nodes? Let’s find out!
My current environment has four ESXi hosts and one SDS/SDC VM per host (my SDCs and SDSs are the same VM in my environment). Each SDS currently has one virtual disk using the full capacity of a VMFS on top of a physical disk. The plan is to double the capacity of each SDS by adding a new physical disk to each ESXi host and presenting the full capacity (minus the space on the disk reserved for VMFS metadata) via a virtual disk to each SDS. The image below shows the current environment for one ESXi host and also how it will look after the capacity is added.
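Once the new virtual disk is presented to each SDS VM, the capacity still has to be handed to ScaleIO. A hedged sketch of that step with scli follows; the SDS IP, domain/pool names, and the in-guest device path are placeholders for whatever your environment actually uses.

```shell
# Register the new device with the SDS so its capacity joins the pool
# (placeholder IP, names, and device path; repeat for each SDS).
scli --add_sds_device --sds_ip 192.168.1.21 \
     --protection_domain_name pd1 --storage_pool_name pool1 \
     --device_path /dev/sdc

# Confirm the pool capacity has grown as expected.
scli --query_storage_pool --protection_domain_name pd1 --storage_pool_name pool1
```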
Today EMC posted the updated SRDF Storage Replication Adapter (SRA) 5.5 for Symmetrix VMAX arrays to their website:
It will be on VMware’s site shortly:
This adapter includes support for VMware vCenter Site Recovery Manager 5.5 (as well as “legacy” support for SRM 5.1).
I’m (Drew Tonnesen, @drewtonnesen) back for another guest post, this time continuing Cody’s theme of VSI 5.6. Besides the much anticipated (and long awaited) striped meta capability in Unified Storage Management (USM) 5.6 for VMAX, there is now VPLEX provisioning! For those of us who use VPLEX with VMware, this simplifies the creation of datastores on VPLEX.
A common recommendation from storage vendors is to change the default IOPS setting for VMware’s Native Multi-Pathing (NMP) Path Selection Policy (PSP) Round Robin. The IOPS setting controls how many I/Os are sent down a single logical path before switching to the next path. By default this number is 1,000 I/Os. The VMAX recommendation is to set this to 1. The purpose of this blog post is not to debate the setting, but to help those who want to use it. Regardless, I have seen many customers benefit from this recommendation. Once they see a benefit, they want to know: can I make this setting the default?
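The two common approaches can be sketched with esxcli: a per-device change for existing devices, and a SATP claim rule so future Symmetrix devices pick the setting up automatically. The naa ID below is a placeholder, and you should verify the SATP name and option syntax against your ESXi release before rolling it out.

```shell
# Set IOPS=1 on one existing device (placeholder naa ID):
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.60000970000XXXXXXXXX --type iops --iops 1

# Make Round Robin with iops=1 the default for Symmetrix/VMAX devices via a
# SATP claim rule; this applies to devices claimed after the rule is added
# (typically after a reboot or reclaim for existing devices):
esxcli storage nmp satp rule add -s VMW_SATP_SYMM -V EMC -M SYMMETRIX \
    -P VMW_PSP_RR -O "iops=1"
```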
Well, as I (Drew Tonnesen, firstname.lastname@example.org) mentioned I would soon be publishing a VMAX Content Pack for Log Insight and I have. It can be found here in the VMware Cloud Marketplace:
All the information that you need to use the VMAX content pack is included in the download so I’ll keep this blog entry short. There is also a demo you can view to get an overview of the capabilities of the content pack.
Migrating a virtual machine that uses 100% virtual disks is a simple task thanks to VMware Storage vMotion, but migrating a VM that uses Raw Device Mappings (RDMs) from one array to another is somewhat trickier. There are options to convert an RDM into a virtual disk, but that might not be feasible for applications that still require RDMs. Other options are host-based or in-guest mechanisms that copy the data from one device to another; those can be complex and may require special drivers or even downtime to fully finish the transition. To solve this issue for physical hosts, EMC introduced a Symmetrix feature called Federated Live Migration.
Federated Live Migration (FLM) allows the migration of a device from one Symmetrix array to another without any downtime to the host, and it does not affect the SCSI inquiry information of the original source device. Therefore, even though the device now resides on a completely different Symmetrix array, the host is none the wiser. FLM leverages Open Replicator functionality to migrate the data, so it has a SAN requirement: the source array must be zoned to the target array. An FLM setup looks like the image below:
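To give a feel for the mechanics, here is a heavily hedged sketch of the Open Replicator hot-pull-with-donor-update flow that FLM builds on, using SYMCLI. The device-pairs file format, array IDs, and device names are illustrative placeholders only, and the exact FLM qualifiers differ by Solutions Enabler version, so consult the symrcopy documentation for your release.

```shell
# Device-pairs file: the control (target-side) device pulls from the remote
# (source) device. Format and identifiers below are placeholders.
cat > flm_pairs.txt <<'EOF'
symdev=000195700123:0ABC wwn=60000970000187900456533030314432
EOF

# Create a hot pull session with donor update, so host writes made during
# the copy are also mirrored back to the source device.
symrcopy create -file flm_pairs.txt -copy -pull -hot -donor_update

# Start the copy and monitor progress until the pull completes.
symrcopy activate -file flm_pairs.txt
symrcopy query -file flm_pairs.txt
```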