Host Connectivity Reporting Changes and IO Balance: Part 1

In the latest GA release of Purity, version 4.1.5, there have been some nice improvements in how we handle host connectivity and balance reporting. There is a new CLI command to monitor I/O balance from a host standpoint, as well as changes to how host connectivity is reported and displayed in the FlashArray web GUI. Let’s take a look at these enhancements. In Part 1, I will talk about the CLI enhancement.


Continue reading “Host Connectivity Reporting Changes and IO Balance: Part 1”

The Pure Storage FlashArray vROPs Adapter v1

The Pure Storage Management Pack for VMware vRealize Operations Manager version 1 is now out! Download it here. This is the latest in our aggressive 2015 roadmap of VMware management integration, whether those integration points are new or updated.


So first, what is a management pack? A management pack is a plugin of sorts that can be installed into vRealize Operations Manager (vROPs) and that provides context and relationships to existing objects inside vROPs. How these objects are related depends on what the pack represents. In the case of Pure Storage, the pack relates VMware objects, such as VMs and datastores, to volumes on a particular FlashArray, in addition to FlashArray host groups and hosts. Continue reading “The Pure Storage FlashArray vROPs Adapter v1”

FlashArray //m and VMware Integration–What do you need to know?

Last week Pure Storage introduced the latest iteration in the FlashArray product line: the FlashArray //m. While Pure Storage has traditionally focused on software innovation from a technical standpoint, we decided that the only way to stay ahead of (and lead) the curve was to innovate in the hardware realm as well. Therefore, for the last few years, development on producing a hardware platform that could keep up with compute and storage speed and capacity leaps has been at full tilt. This produced the brand new FlashArray //m.


Continue reading “FlashArray //m and VMware Integration–What do you need to know?”

Querying SRM for Protected VMs with PowerCLI

I was recently asked how to query SRM for protected VMs and I decided it would make a good quick blog post. There is a great post here on using PowerCLI with SRM, but it doesn’t quite show how to return per-virtual-machine information by default. It needs a bit more.

All it returns is an SRM-based virtual machine ID, which doesn’t relate to what a user is probably looking for (a virtual machine name). So it needs a few more simple steps. The following script, which can be found on my GitHub page here, does the following things (a minimal sketch of the core loop appears after the list):

  1. Connects to a vCenter
  2. Connects to SRM
  3. Creates a log folder with a time stamp in the name
  4. Iterates through each Protection Group
  5. Logs every virtual machine in that protection group
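
For a quick flavor of the core loop (this is not the full script from GitHub), here is a minimal PowerCLI sketch. The vCenter address and output format are placeholders, and the calls assume the SRM API as exposed through Connect-SrmServer, so adjust for your PowerCLI version:

    # Connect to vCenter and then to the SRM server (placeholder server name)
    $vc = Connect-VIServer -Server "vcenter.example.com"
    $srm = Connect-SrmServer

    # Grab the SRM API object and list every protection group
    $srmApi = $srm.ExtensionData
    $protectionGroups = $srmApi.Protection.ListProtectionGroups()

    foreach ($pg in $protectionGroups) {
        # GetInfo() returns the protection group's friendly name
        $pgName = $pg.GetInfo().Name
        foreach ($protectedVm in $pg.ListProtectedVms()) {
            # Refresh the vSphere view data so the VM name is populated, then log it
            $protectedVm.Vm.UpdateViewData()
            Write-Output ("{0} : {1}" -f $pgName, $protectedVm.Vm.Name)
        }
    }

The full script adds the vCenter/SRM connection parameters, the time-stamped log folder, and per-protection-group log files on top of this loop.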

Continue reading “Querying SRM for Protected VMs with PowerCLI”

Site Recovery Manager 6 and Storage DRS Tagging: Part I–The Basics

VMware vCenter Site Recovery Manager 6.0 was mostly a compatibility release–getting it to work right with vCenter 6.0 essentially. That being said, there were a few new features (and some nice tweaks in the GUI) included in the release. One of the new features that sparked my interest was SRM and Storage DRS compatibility enhancements.

Ben Meadowcroft, a VMware PM who works on, amongst other things, SRM, blogged about this new feature here. The VMware KB can be found here.


Ben covers most of the history of this in his post so I will skip over that. Let’s take a closer look, though, at this functionality. To give an overview, there are three tags that SRM introduces to a datastore:

  • SRM-com.vmware.vcDr:::status (indicates that the datastore is replicated)
  • SRM-com.vmware.vcDr:::consistencyGroup (indicates what CG the datastore belongs to, if any)
  • SRM-com.vmware.vcDr:::protectionGroup (indicates what PG the datastore belongs to, if any)

The replication status tag is assigned as soon as SRM (and its respective Storage Replication Adapter) discovers the device to be replicated through a Device Discovery operation. Upon this discovery, a consistency group tag is also assigned. If the volume is not advertised by the SRA as being in a consistency group, a unique one will be created for that volume, basically indicating it is in its own consistency group.


A protection group tag is not assigned until the volume is actually added to a protection group. Once the datastore is assigned to a protection group, it will receive the tag (remember, a volume can only be in one PG, and SRM only supports a volume being in one CG, so there will always be only one tag to assign).

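If you want to see which of these tags a given datastore has actually picked up, a small PowerCLI sketch like the following can list them (assuming you are connected to the vCenter where SRM registers the tags):

    # List the SRM-created tag assignments (status/CG/PG) across all datastores
    Get-TagAssignment -Entity (Get-Datastore) |
        Where-Object { $_.Tag.Category.Name -like "SRM-com.vmware.vcDr*" } |
        Select-Object @{N="Datastore";E={$_.Entity.Name}},
                      @{N="Category";E={$_.Tag.Category.Name}},
                      @{N="Tag";E={$_.Tag.Name}}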

So what do these tags do? Well, Storage DRS will note these tags and will not make any automatic moves if a Storage vMotion would violate any of them. This means it will not move a VM from one datastore to another if:

1) Source datastore is replicated and target is not


2) Source datastore is NOT replicated and target is


3) Source datastore is in a different consistency group than the target


4) Source datastore is replicated AND in a protection group, but the target, while replicated, is NOT in a protection group


Basically, Storage DRS will not move a VM from one datastore to another if it deems that the move would change the protection group configuration or the consistency of a virtual machine.

So automatic Storage DRS will never make these moves. It may suggest them if it cannot find a better option, but it will never make a move that violates these rules. If for some reason you want this to occur, you can always override the warning and execute the operation.


Let’s take a look now at the relevant configurable behavior in SRM.

There are four options:

  • storage.enableSdrsStandardTagCategoryCreation: creates the three tag categories in vCenter for you.
  • storage.enableSdrsTagging: actually applies the tags to the datastores when they are discovered, etc.
  • storage.enableSdrsTaggingRepair: allows SRM to fix datastore tags when something has changed (PG/CG membership changes, for instance).
  • storage.sdrsTaggingPollInterval: how often SRM checks tags to make sure they are accurate.


All of these options are enabled by default; well, kinda, since the last one is not really an on/off switch, it is just set to 50 seconds.

So, as the list above says, the enableSdrsStandardTagCategoryCreation option is pretty straightforward: it creates the three tag categories. You can, of course, create them yourself if you choose to; I am not sure why you would, though, with the exception of the reason stated in the option description:

“In Federated SSO setups, this flag should be disabled and the tags and tag categories should be manually created.”
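
For what it is worth, if you do land in that federated SSO case and need to create the categories by hand, a PowerCLI sketch along these lines could do it. I am assuming here that the category names simply mirror the three names listed earlier and that single-cardinality, datastore-only categories are appropriate, so verify against SRM’s documentation before relying on it:

    # Manually create the three SRM tag categories (federated SSO scenario)
    $categoryNames = @(
        "SRM-com.vmware.vcDr:::status",
        "SRM-com.vmware.vcDr:::consistencyGroup",
        "SRM-com.vmware.vcDr:::protectionGroup"
    )
    foreach ($name in $categoryNames) {
        # Assumption: single-cardinality categories that apply only to datastores
        New-TagCategory -Name $name -Cardinality Single -EntityType Datastore | Out-Null
    }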

When enableSdrsTagging is enabled, SRM will place the correct tags at the appropriate times: when a new device is discovered or when its protection group membership changes.

The option enableSdrsTaggingRepair requires a little more thought. With it disabled, new tags will still be placed on datastores (replicated/CG tags during device discovery, PG tags upon adding a datastore to a new or different PG), but SRM will not fix or remove existing ones: if you remove the datastore from a PG or delete the PG, the tag will remain, and if you delete the SRM-provided tag and replace it with your own, SRM will not fix it. Though if you add the datastore to a new PG, SRM will remove the old PG tag if it exists and then assign the correct one; it just won’t ever do that unless you make that PG change.

A note about the repair functionality: if you decide to delete an SRM-provided tag and make your own, it will not last long if this feature is enabled. SRM will right things quite quickly (50 seconds or less). So if you want more control over this tagging for SRM-related devices, disabling this is an option. Of course, disabling it can easily lead to stale information in the tags, so do so at your own risk.

In general, I think this is a great enhancement. I would like to see more granular control from the SRM side of things (enabling/disabling CG auto-tagging when a CG doesn’t exist for that device, for instance). This should also have a place in non-SRM environments; it’s just a bit more work because you have to do the tagging yourself.

In Part II, I will take a look at how this works with the FlashArray SRA and what’s involved in that.

Add Storage Wizard Slowness and Unresolved VMFS Volumes

This week I received a question from a customer about some slowness they were seeing in the vSphere “Add Storage” wizard. This is a problem that has occurred quite a few times over the years for a variety of different reasons. VMware has fixed most of them, and luckily this latest one was known and has a relatively simple solution: an option called VMFS.UnresolvedVolumeLiveCheck.

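Assuming the option is exposed as a host-level advanced setting (which is how I am treating it here), a hedged PowerCLI sketch for checking and changing it might look like the following; the host name is a placeholder and you should test any change outside of production first:

    # Report the current VMFS.UnresolvedVolumeLiveCheck value for every host
    Get-VMHost | ForEach-Object {
        Get-AdvancedSetting -Entity $_ -Name "VMFS.UnresolvedVolumeLiveCheck" |
            Select-Object @{N="Host";E={$_.Entity.Name}}, Name, Value
    }

    # Disable the live check on one host (placeholder host name); depending on how the
    # option is typed in your build, the value may need to be $false or the string "false"
    Get-AdvancedSetting -Entity (Get-VMHost "esxi01.example.com") -Name "VMFS.UnresolvedVolumeLiveCheck" |
        Set-AdvancedSetting -Value "false" -Confirm:$false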

Continue reading “Add Storage Wizard Slowness and Unresolved VMFS Volumes”

Setting FlashArray Multipathing Best Practices with ESXi Host Profiles

If my past posts are any indicator, there are a million ways to set/change/manage ESXi settings: direct configuration (CLI or GUI), PowerCLI, etc. One option I often overlook is host profiles. This has come up a few times in the past month, so I thought I would revisit it and do a quick walkthrough on configuring Pure Storage FlashArray multipathing best practices with host profiles.
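
For context, this is roughly what the underlying multipathing rule looks like when applied directly with PowerCLI and Get-EsxCli instead of through a host profile. Treat it as a sketch: the host name is a placeholder, and confirm the exact rule options (the iops=1 path-switching value in particular) against Pure Storage’s current best practices before using it:

    # Add an NMP SATP claim rule so new FlashArray devices default to Round Robin
    $esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2
    $ruleArgs = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
    $ruleArgs.satp = "VMW_SATP_ALUA"
    $ruleArgs.vendor = "PURE"
    $ruleArgs.model = "FlashArray"
    $ruleArgs.psp = "VMW_PSP_RR"
    $ruleArgs.pspoption = "iops=1"
    $ruleArgs.description = "Pure Storage FlashArray Round Robin rule"
    $esxcli.storage.nmp.satp.rule.add.Invoke($ruleArgs)

A host profile captures the same end result, with the advantage that it can be attached to a cluster and checked for compliance.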

Continue reading “Setting FlashArray Multipathing Best Practices with ESXi Host Profiles”

XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks

Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into exact design changes, so this is what I observed and my educated guess on how it was done. Look for more details as time wears on.

***This blog post really turned out longer than I expected; it probably should have been a two-parter, so I apologize for the length.***

Like usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware to improve how they handled XCOPY. XCOPY, as you may be aware, is the SCSI command used by ESXi to clone/Storage vMotion/deploy from template VMs on a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered around three things:

  1. Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e. how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different).
  2. Allow ESXi to query the Maximum Segment Length from an Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells ESXi what to use as a maximum transfer size and would allow the end user to avoid the hassle of manual transfer size changes.
  3. Allow for thin virtual disks to leverage a larger transfer size than 1 MB.

The first two are currently supported by VMware in only a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
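
For reference, the XCOPY transfer size mentioned in the first item is governed by the host-level advanced setting DataMover.MaxHWTransferSize (specified in KB). Here is a hedged PowerCLI sketch for checking it and raising it to the 16 MB maximum; the host name is a placeholder, and note that prior to the vSphere 6.0 behavior discussed in this post, thin virtual disks did not benefit from this setting anyway:

    # View the current hardware-accelerated (XCOPY) transfer size on each host, in KB
    Get-VMHost | ForEach-Object {
        Get-AdvancedSetting -Entity $_ -Name "DataMover.MaxHWTransferSize" |
            Select-Object @{N="Host";E={$_.Entity.Name}}, Name, Value
    }

    # Raise it to the 16 MB maximum (16384 KB) on a single host; follow your array
    # vendor's guidance before changing this
    Get-AdvancedSetting -Entity (Get-VMHost "esxi01.example.com") -Name "DataMover.MaxHWTransferSize" |
        Set-AdvancedSetting -Value 16384 -Confirm:$false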

Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”

Updated Pure Storage Content Pack for vRealize Log Insight 2.5

VMware vRealize Log Insight is a product I have been quite fond of since it first came out. I like it for a variety of reasons; one is its simplicity of use. Of VMware’s entire management suite, it is the easiest to install, configure, and learn to use. You can really become an accomplished user in a day. Anyway, I finally got around to updating the Pure Storage FlashArray Content Pack to add support for version 2.5 and also leverage some new functionality from Purity syslog messages.


Continue reading “Updated Pure Storage Content Pack for vRealize Log Insight 2.5”