The Pure Storage FlashArray vROPs Adapter v1

The Pure Storage Management Pack for VMware vRealize Operations Manager version 1 is now out! Download it here. This is the latest in our aggressive 2015 roadmap of VMware management integration, whether those integration points are new or updated.

So first, what is a management pack? A management pack is a plugin of sorts that can be installed into vRealize Operations Manager (vROPs) to provide context and relationships to existing objects inside vROPs. How these objects are related depends on what the pack represents. In the case of Pure Storage, the pack relates VMware objects, such as VMs and datastores, to volumes on a particular FlashArray, as well as to FlashArray host groups and hosts. Continue reading “The Pure Storage FlashArray vROPs Adapter v1”

Reclaiming Windows Update Space in Win 7

Quick post here. I have been working with some customers lately on reclaiming guest space inside their Windows 7 desktops in a VDI environment, and for the most part we worked through the standard procedures: removing files in temp folders, ensuring the Recycle Bin was empty, and then running SDelete to reclaim the space. There was still a buildup of space, though. Now, mind you, this is a persistent desktop, so how much this matters changes if the desktops are linked clones or use SE Sparse disks. Continue reading “Reclaiming Windows Update Space in Win 7”
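
For reference, here is a minimal sketch of the SDelete step mentioned above. It assumes the Sysinternals SDelete executable has already been copied into the guest; the path is a placeholder.

```powershell
# Zero free space inside the Windows 7 guest so the underlying thin-provisioned
# storage can reclaim it afterwards. The path is a placeholder; point it at
# wherever SDelete lives in your desktop image.
& "C:\Tools\sdelete.exe" -accepteula -z C:
```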

FlashArray //m and VMware Integration–What do you need to know?

Last week Pure Storage introduced the latest iteration in the FlashArray product line: the FlashArray //m. While Pure Storage has traditionally focused on software innovation from a technical standpoint, we decided that the only way to stay ahead of (and lead) the curve was to innovate in the hardware realm as well. Therefore, for the last few years, development of a hardware platform that could keep up with leaps in compute and storage speed and capacity has been at full tilt. The result is the brand new FlashArray //m.

Continue reading “FlashArray //m and VMware Integration–What do you need to know?”

Querying SRM for Protected VMs with PowerCLI

I was recently asked how to query SRM for protected VMs and I decided it would make a good quick blog post. There is a great post here on using PowerCLI with SRM, but it doesn’t show how to return per-virtual-machine information by default. It needs a bit more.

All it returns is an SRM-based virtual machine ID, which doesn’t map to what a user is probably looking for (a virtual machine name). So it needs a few more simple steps. The following script, which can be found on my GitHub page here, does the following things (a minimal sketch follows the list):

  1. Connects to a vCenter
  2. Connects to SRM
  3. Creates a log folder with a time stamp in the name
  4. Iterates through each Protection Group
  5. Logs every virtual machine in that protection group
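
Here is a minimal sketch of that flow, assuming PowerCLI with the SRM cmdlets (Connect-SrmServer) is installed. The server name and log path are placeholders; the full script on GitHub does more.

```powershell
# Connect to vCenter and then to the SRM server registered with it.
Connect-VIServer -Server "vcenter.example.local"
Connect-SrmServer

# Create a log folder with a time stamp in the name.
$logDir = "C:\SRMLogs\ProtectedVMs_$(Get-Date -Format 'yyyy-MM-dd_HH-mm-ss')"
New-Item -ItemType Directory -Path $logDir | Out-Null

# Walk every protection group and log each protected VM by name.
$srmApi = $DefaultSrmServers[0].ExtensionData
foreach ($pg in $srmApi.Protection.ListProtectionGroups()) {
    $pgName = $pg.GetInfo().Name
    foreach ($protectedVm in $pg.ListProtectedVms()) {
        # The SRM API hands back a VM reference; refresh it to resolve the name.
        $protectedVm.Vm.UpdateViewData()
        $protectedVm.Vm.Name | Out-File -Append -FilePath "$logDir\$pgName.txt"
    }
}
```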

Continue reading “Querying SRM for Protected VMs with PowerCLI”

Site Recovery Manager 6 and Storage DRS Tagging: Part I–The Basics

VMware vCenter Site Recovery Manager 6.0 was mostly a compatibility release–getting it to work right with vCenter 6.0, essentially. That being said, there were a few new features (and some nice tweaks to the GUI) included in the release. One of the new features that sparked my interest was the set of SRM and Storage DRS compatibility enhancements.

Ben Meadowcroft, a VMware PM who works on, amongst other things, SRM, blogged about this new feature here. You can find the VMware KB here.

Ben covers most of the history of this in his post, so I will skip over that. Let’s take a closer look at this functionality. To give an overview, there are three tags that SRM introduces to a datastore (a quick way to inspect them with PowerCLI follows the list):

  • SRM-com.vmware.vcDr:::status (indicates that the datastore is replicated)
  • SRM-com.vmware.vcDr:::consistencyGroup (indicates what CG the datastore belongs to, if any)
  • SRM-com.vmware.vcDr:::protectionGroup (indicates what PG the datastore belongs to, if any)
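
As a quick illustration (not part of SRM itself), here is a minimal PowerCLI sketch that lists whatever SRM tags a given datastore has received. The datastore name is a placeholder.

```powershell
# List the SRM-created tag assignments on a datastore so you can see its
# replication status, consistency group, and protection group tags.
Get-TagAssignment -Entity (Get-Datastore "ReplicatedDS-01") |
    Where-Object { $_.Tag.Category.Name -like "SRM-com.vmware.vcDr*" } |
    Select-Object @{N="Category";E={$_.Tag.Category.Name}},
                  @{N="Tag";E={$_.Tag.Name}}
```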

The replication status tag is assigned as soon as SRM (and its respective Storage Replication Adapter) discovers the datastore to be replicated through a Device Discovery operation. Upon this discovery, a consistency group tag is also assigned. If the volume is not advertised by the SRA as being in a consistency group, a unique one will be created for that volume–basically indicating it is in its own consistency group.

A protection group tag is not assigned until the volume is actually added to a protection group. Once the datastore is assigned to a protection group, it will receive the tag (remember, a volume can only be in one PG, and SRM only supports a volume being in one CG, so there will always be only one of each to assign).

So what do these tags do? Well, Storage DRS notes these tags and will not make any automatic moves if a Storage vMotion would violate any of them. This means it will not move a VM from one datastore to another if:

1) Source datastore is replicated and target is not

2) Source datastore is NOT replicated and target is

3) Source datastore is in a different consistency group than the target

4) Source datastore is replicated AND in a protection group, but the target datastore is replicated and NOT in a protection group

Basically, Storage DRS will not move a VM from one datastore to another if it deems that the move would change the protection group configuration or the consistency of a virtual machine.

So automatic Storage DRS will never make these moves. It may suggest them if it cannot find a better option, but it will never automatically execute a move that violates these rules. If for some reason you want such a move to occur, you can always override the warning and execute the operation yourself.
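
To make that concrete, here is a minimal sketch (datastore names are placeholders) that compares the SRM tag assignments on a prospective source and target datastore, mirroring the checks Storage DRS makes before recommending a move.

```powershell
# The three SRM tag categories described above.
$categories = "SRM-com.vmware.vcDr:::status",
              "SRM-com.vmware.vcDr:::consistencyGroup",
              "SRM-com.vmware.vcDr:::protectionGroup"

$source = Get-Datastore "ReplicatedDS-01"
$target = Get-Datastore "ReplicatedDS-02"

foreach ($category in $categories) {
    # Compare the tag (if any) assigned in each SRM category on both datastores.
    $sourceTag = (Get-TagAssignment -Entity $source -Category $category).Tag.Name
    $targetTag = (Get-TagAssignment -Entity $target -Category $category).Tag.Name
    if ($sourceTag -ne $targetTag) {
        Write-Host "Mismatch in $category -> source: '$sourceTag', target: '$targetTag'"
    }
}
```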

Let’s take a look now at the relevant configurable behavior in SRM.

There are four options:

  • storage.enableSdrsStandardTagCategoryCreation: Creates the three tag categories in vCenter for you.
  • storage.enableSdrsTagging: Actually applies the tags to datastores when they are discovered, etc.
  • storage.enableSdrsTaggingRepair: Allows SRM to fix datastore tags when something has changed (PG/CG membership changes, for instance).
  • storage.sdrsTaggingPollInterval: How often SRM checks the tags to make sure they are accurate.

All of these options are enabled by default. Well, kinda: the last one is not an on/off switch and is simply set to 50 seconds by default.

So, as the list above says, the enableSdrsStandardTagCategoryCreation option is pretty straightforward: it creates the three tag categories. You can, of course, create them yourself if you choose to. I am not sure why you would, though, with the exception of the reason stated in the option description:

“In Federated SSO setups, this flag should be disabled and the tags and tag categories should be manually created.”

When enableSdrsTagging is enabled, SRM will place the correct tags at the appropriate times, such as when a new device is discovered or its protection group membership changes.

The enableSdrsTaggingRepair option takes a little more thought. With it disabled, new tags will still be placed on datastores (replicated/CG tags during device discovery, PG tags when a datastore is added to a new or different PG), but SRM will not fix or remove them: if you remove a datastore from a PG or delete the PG, the tag will remain, and if you delete the SRM-provided tag and replace it with your own, SRM will not fix it. That said, if you add the datastore to a new PG, SRM will remove an old PG tag if one exists and then assign the correct one, but it will never do that unless you make that PG change.

A note about the repair functionality: if you decide to delete an SRM-provided tag and make your own, it will not last long if this feature is enabled. SRM will right things quite quickly (50 seconds or less). So if you want more control over tagging for SRM-related devices, disabling this is an option. Of course, disabling it can easily lead to stale information in the tags, so do so at your own risk.

In general, I think this is a great enhancement. I would like to see more granular control from the SRM side of things (enabling/disabling CG auto-tagging when a CG doesn’t exist for a given device, for instance). This should also have a role in non-SRM environments; it’s just a bit more work because you have to do the tagging yourself.

In Part II, I will take a look at how this works with the FlashArray SRA and what’s involved in that.

Join Pure Storage on June 1st for the future of Flash!

Non-technical post here. Some information though for an upcoming event:

Beginning June 1st, Pure Storage will be hosting events in 50+ cities all over the world to make some exciting announcements concerning the future of flash in the datacenter as well as a sweet new product launch. We’ll be talking about a vision that is transforming the storage industry and more. I highly recommend registering for one in a city near you!

Continue reading “Join Pure Storage on June 1st for the future of Flash!”

Add Storage Wizard Slowness and Unresolved VMFS Volumes

This week I received a question from a customer about some slowness they were seeing in the vSphere “Add Storage” wizard. This is a problem that has occurred quite a few times over the years for a variety of different reasons. VMware has fixed most of them, and this latest one luckily was known and has a relatively simple solution: an option called VMFS.UnresolvedVolumeLiveCheck.
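
As a purely hypothetical illustration, the sketch below assumes the option is exposed as a host-level advanced setting that Get-AdvancedSetting can read; verify the actual name and scope of VMFS.UnresolvedVolumeLiveCheck in your environment (the host name is a placeholder) before relying on it.

```powershell
# Hypothetical: look for the unresolved-volume live check option on a host.
# If the option is scoped elsewhere, this simply returns nothing; it is
# read-only and does not change any configuration.
$esx = Get-VMHost "esx01.example.local"
Get-AdvancedSetting -Entity $esx -Name "*UnresolvedVolumeLiveCheck*" |
    Select-Object Name, Value
```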

Continue reading “Add Storage Wizard Slowness and Unresolved VMFS Volumes”

Setting FlashArray Multipathing Best Practices with ESXi Host Profiles

If my past posts are any indicator, there are a million ways to set/change/manage ESXi settings: direct configuration (CLI or GUI), PowerCLI, etc. One option I often overlook is host profiles. This has come up a few times in the past month, so I thought I would revisit it and do a quick walkthrough on configuring Pure Storage FlashArray multipathing best practices with host profiles.
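
For context, the host profile ultimately captures the same multipathing configuration you could apply directly: a SATP claim rule that sets Round Robin (with an IO operations limit of 1) for FlashArray devices. Below is a minimal sketch of applying it with esxcli through PowerCLI; the host name is a placeholder, it assumes a PowerCLI version with the -V2 esxcli interface, and you should confirm the recommended values against current Pure Storage documentation.

```powershell
# Add a SATP claim rule so newly presented FlashArray devices get Round Robin
# with an IO operations limit of 1. Existing devices need to be reclaimed (or
# the host rebooted) to pick up the rule. VMW_SATP_ALUA matches ALUA-capable
# FlashArrays; older Purity releases used VMW_SATP_DEFAULT_AA.
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.local") -V2

$ruleArgs = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
$ruleArgs.satp        = "VMW_SATP_ALUA"
$ruleArgs.vendor      = "PURE"
$ruleArgs.model       = "FlashArray"
$ruleArgs.psp         = "VMW_PSP_RR"
$ruleArgs.pspoption   = "iops=1"
$ruleArgs.description = "FlashArray SATP rule"
$esxcli.storage.nmp.satp.rule.add.Invoke($ruleArgs)
```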

Continue reading “Setting FlashArray Multipathing Best Practices with ESXi Host Profiles”

XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks

Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into exact design changes, so this is what I observed and my educated guess on how it was done. Look for more details as time wears on.

***This blog post really turned out longer than I expected, probably should have been a two parter, so I apologize for the length.***

As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handled XCOPY. XCOPY, as you may be aware, is the SCSI command used by ESXi to clone, Storage vMotion, and deploy-from-template VMs on a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered around three things:

  1. Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e., how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different).
  2. Allow ESXi to query the Maximum Segment Length via an Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells ESXi what to use as a maximum transfer size, which allows the end user to avoid the hassle of manual transfer size changes.
  3. Allow for thin virtual disks to leverage a larger transfer size than 1 MB.

The first two are currently supported by VMware in a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
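
As a point of reference for the manual transfer size changes mentioned in the first item, here is a minimal sketch (host name is a placeholder) of checking and raising the XCOPY transfer size through the DataMover.MaxHWTransferSize host advanced setting. Only change it if your array vendor recommends doing so.

```powershell
# DataMover.MaxHWTransferSize is specified in KB; 16384 KB is the 16 MB
# maximum mentioned above (the default is 4096 KB).
$esx = Get-VMHost "esx01.example.local"
Get-AdvancedSetting -Entity $esx -Name "DataMover.MaxHWTransferSize"

# Raise it to the maximum (again, only per vendor guidance).
Get-AdvancedSetting -Entity $esx -Name "DataMover.MaxHWTransferSize" |
    Set-AdvancedSetting -Value 16384 -Confirm:$false
```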

Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”