Setting FlashArray Multipathing Best Practices with ESXi Host Profiles

If my past posts are any indicator, there are a million ways to set/change/manage ESXi settings: direct configuration (CLI or GUI), PowerCLI, etc. One option I often overlook is host profiles. This has come up a few times in the past month, so I thought I would revisit it and do a quick walkthrough on configuring Pure Storage FlashArray multipathing best practices with host profiles.
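For context, the setting a host profile ends up capturing here is the usual FlashArray recommendation of Round Robin with a low I/O operation limit, applied via an NMP SATP claim rule. Below is a minimal PowerCLI sketch of setting that rule directly on each host (assuming the Get-EsxCli -V2 interface; the vCenter name is a placeholder, and you should confirm the claim-rule values against current Pure Storage guidance):

    # Connect to vCenter (prompts for credentials); server name is a placeholder
    Connect-VIServer -Server "vcenter.example.com"

    # Add a SATP claim rule on each host so newly presented FlashArray devices
    # default to Round Robin with an I/O operation limit of 1
    foreach ($esx in Get-VMHost) {
        $esxcli = Get-EsxCli -VMHost $esx -V2
        $ruleArgs = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
        $ruleArgs.satp        = "VMW_SATP_ALUA"
        $ruleArgs.vendor      = "PURE"
        $ruleArgs.model       = "FlashArray"
        $ruleArgs.psp         = "VMW_PSP_RR"
        $ruleArgs.pspoption   = "iops=1"
        $ruleArgs.description = "FlashArray Round Robin rule"
        $esxcli.storage.nmp.satp.rule.add.Invoke($ruleArgs)
    }

A host profile captures the same end state, but with the benefit of compliance checking and automatic application to newly attached hosts.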

Continue reading “Setting FlashArray Multipathing Best Practices with ESXi Host Profiles”

XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks

Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into the exact design changes, so this is what I observed plus my educated guess on how it was done. Look for more details as time goes on.

***This blog post turned out much longer than I expected (it probably should have been a two-parter), so I apologize for the length.***

As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handled XCOPY. XCOPY, as you may be aware, is the SCSI command ESXi uses to offload cloning, Storage vMotion, and deploy-from-template operations to a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered around three things:

  1. Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e. how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different).
  2. Allow ESXi to query the Maximum Segment Length reported by the array in the Extended Copy (XCOPY) Receive Copy Results data and use that value as the maximum transfer size. This would spare the end user the hassle of manual transfer size changes.
  3. Allow for thin virtual disks to leverage a larger transfer size than 1 MB.

The first two are currently supported by VMware only in a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
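For reference, the transfer size discussed in the first and third items is governed by the per-host advanced setting DataMover.MaxHWTransferSize, expressed in KB (default 4096, i.e. 4 MB). A minimal PowerCLI sketch of raising it to the 16 MB maximum looks like this; note that, per the post, thin virtual disks historically did not benefit from this value, which is exactly what the vSphere 6.0 change addresses:

    # Raise the XCOPY transfer size to its 16 MB maximum on every connected host
    # (value is in KB; verify the recommended value for your array first)
    Get-VMHost | ForEach-Object {
        Get-AdvancedSetting -Entity $_ -Name "DataMover.MaxHWTransferSize" |
            Set-AdvancedSetting -Value 16384 -Confirm:$false
    }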

Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”

Updated Pure Storage Content Pack for vRealize Log Insight 2.5

VMware vRealize Log Insight is a product I have been quite fond of since it first came out, for a variety of reasons, one being its simplicity of use. Of VMware’s entire management suite, it is the easiest to install, configure, and understand. You can really become an accomplished user in a day. Anyway, I finally got around to updating the Pure Storage FlashArray Content Pack to expand support for version 2.5 and also leverage some new functionality from Purity syslog messages.


Continue reading “Updated Pure Storage Content Pack for vRealize Log Insight 2.5”

Updated VMware and Pure Storage Best Practices Guide

Update: Please see this page for the latest updates on best practices and relevant links.

Quick post here. I have updated the Pure Storage FlashArray Best Practices Guide for VMware vSphere. It is not a total overhaul, but there are some changes to note.


Updates include:

  • New information for vSphere 6.0. This mostly focuses on what supports vSphere 6.0 and reinforcing that current best practices remain the same. Expect a lot more vSphere 6 content in forthcoming updates; as new storage features (such as VVols) are tested and considered in the latest version of the VMware platform, they will be included in this guide.
  • Queue depth changes are no longer mentioned in this document. Messing with this is considered a tweak that most people will not need. Don’t fix what isn’t broken is the mantra here.
  • Expanded and clarified instruction on iSCSI setup.
  • General tightening and simplification of the document.
  • New content pack for Log Insight (which will be out soon). The changes are detailed in the document.


Required ESXi permissions for UNMAP through PowerCLI

I received a question recently on another UNMAP post about the minimum permissions required to run UNMAP with PowerCLI, and I finally got around to looking into it. It turns out to be very straightforward. If you run it with a read-only account, it will fail; since the operation creates a file and makes changes, some configuration authority is required. Running as read-only will look like this:

[Screenshot: the UNMAP run failing under a read-only account]
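For context, here is roughly what the PowerCLI-driven UNMAP call in question looks like; this is a minimal sketch assuming the Get-EsxCli -V2 interface, with placeholder host and datastore names. The Invoke() call is the step that fails when the account lacks sufficient permissions.

    # Get an esxcli handle for the host that sees the datastore
    $esx    = Get-VMHost -Name "esxi01.example.com"
    $esxcli = Get-EsxCli -VMHost $esx -V2

    # Reclaim dead space on a VMFS datastore, 200 blocks per iteration
    $esxcli.storage.vmfs.unmap.Invoke(@{
        volumelabel = "MyDatastore"
        reclaimunit = 200
    })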

Continue reading “Required ESXi permissions for UNMAP through PowerCLI”

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

I jumped on a call the other day to talk about iSCSI setup for a new FlashArray, and the main reason for the discussion was coexistence with a pre-existing array from another vendor. They were following my blog post on iSCSI setup and things didn’t quite match up.

The recommended way to set up multipathing for software iSCSI is to configure more than one VMkernel port, each with exactly one active host adapter (physical NIC). You then add those VMkernel ports to the iSCSI software adapter, which will use those specific NICs for I/O transmission and load-balance across those ports.
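As a rough illustration, here is one way that configuration can be scripted with PowerCLI. This is only a sketch: the host name, port group names, IP addresses, and vmnic names are all placeholders for your environment.

    $esx     = Get-VMHost -Name "esxi01.example.com"
    $vswitch = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0"

    # Create two VMkernel ports for iSCSI, each on its own port group
    $vmk1 = New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vswitch -PortGroup "iSCSI-A" -IP 10.0.1.11 -SubnetMask 255.255.255.0
    $vmk2 = New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vswitch -PortGroup "iSCSI-B" -IP 10.0.1.12 -SubnetMask 255.255.255.0

    # Pin each port group to exactly one active uplink; mark the other unused
    Get-VirtualPortGroup -VMHost $esx -Name "iSCSI-A" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive "vmnic1" -MakeNicUnused "vmnic2"
    Get-VirtualPortGroup -VMHost $esx -Name "iSCSI-B" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive "vmnic2" -MakeNicUnused "vmnic1"

    # Bind both VMkernel ports to the software iSCSI adapter
    $esxcli = Get-EsxCli -VMHost $esx -V2
    $hba    = Get-VMHostHba -VMHost $esx -Type IScsi | Where-Object {$_.Model -eq "iSCSI Software Adapter"}
    $esxcli.iscsi.networkportal.add.Invoke(@{adapter = $hba.Device; nic = $vmk1.Name})
    $esxcli.iscsi.networkportal.add.Invoke(@{adapter = $hba.Device; nic = $vmk2.Name})

The key point is the one-active-NIC-per-port-group teaming override; without it, the port binding either fails or does not give you true path-level multipathing.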

Continue reading “Another look at ESXi iSCSI Multipathing (or a Lack Thereof)”

PowerCLI Script to Create VMware Clusters on the FlashArray

The first step prior to provisioning storage on the FlashArray is to actually create the host records on the array itself. These records, in their most basic form, are just names of hosts with associated WWNs or IQNs. It is a pretty simple process in general, but as VMware clusters get larger and larger (vSphere 6.0 now allows 64 hosts per cluster), scripting this configuration becomes a bit more appealing. Granted, it is a one-off operation, but it still saves you from a tedious task. So I wrote a script. It also sets the best practices on the ESXi hosts in the cluster and, in the case of iSCSI, adds the FlashArray iSCSI ports as targets on the hosts’ software iSCSI adapters.
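To give a sense of the core of such a script, here is a minimal sketch of gathering each cluster host’s WWNs with PowerCLI and creating a matching host record on the array. It assumes the Pure Storage PowerShell SDK and its New-PfaArray / New-PfaHost cmdlets (check the exact parameter names against your SDK version); the cluster name, array address, and naming convention are placeholders, and the full script linked in the post does considerably more.

    Import-Module PureStoragePowerShellSDK

    # Connect to the FlashArray (address and credentials are placeholders)
    $flasharray = New-PfaArray -EndPoint "flasharray.example.com" -Credentials (Get-Credential) -IgnoreCertificateError

    foreach ($esx in Get-Cluster -Name "Cluster01" | Get-VMHost) {
        # Collect the host's Fibre Channel WWPNs as hexadecimal strings
        $wwns = Get-VMHostHba -VMHost $esx -Type FibreChannel |
            ForEach-Object { "{0:X}" -f $_.PortWorldWideName }

        # Create the host record on the array, using the short host name
        New-PfaHost -Array $flasharray -Name $esx.Name.Split(".")[0] -WwnList $wwns
    }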

intro

Continue reading “PowerCLI Script to Create VMware Clusters on the FlashArray”

Setting up iSCSI with VMware ESXi and the FlashArray

I’ve been with Pure Storage for about ten months (time flies!) and a noticeable trend over the past six or so months is the number of customers deciding to use iSCSI as their storage protocol of choice. This is increasingly common in greenfield environments where they don’t want to invest in a Fibre Channel infrastructure. I’ve helped quite a few set this up in VMware environments, so I thought I would put a post together on configuring ESXi software iSCSI with the Pure Storage FlashArray (I have yet to see a hardware iSCSI setup).
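To give a sense of what the software side involves, the first two steps (enabling the software iSCSI adapter and pointing it at a FlashArray iSCSI port) boil down to a few PowerCLI calls. This is a sketch only, with a placeholder host name and target IP; the full walkthrough covers port binding and the other best practices.

    $esx = Get-VMHost -Name "esxi01.example.com"

    # Enable the software iSCSI initiator if it is not already on
    Get-VMHostStorage -VMHost $esx | Set-VMHostStorage -SoftwareIScsiEnabled $true

    # Add one of the FlashArray's iSCSI port IPs as a dynamic (send) target
    $hba = Get-VMHostHba -VMHost $esx -Type IScsi | Where-Object {$_.Model -eq "iSCSI Software Adapter"}
    New-IScsiHbaTarget -IScsiHba $hba -Address "10.0.1.100" -Type Send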

Before I begin, I highly recommend reading the following two documents from VMware:

http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

They are not long and provide very good insight into the how/what/why of iSCSI on VMware. Some of the images are a bit old, but the underlying concepts have not changed.

Continue reading “Setting up iSCSI with VMware ESXi and the FlashArray”

FlashArray XCOPY Data Reduction Reporting Enhancement

Recently the Purity Operating Environment 4.1.1 release came out with quite a few enhancements. Many of these were for replication, there are some new GUI features, and the new vSphere Web Client Plugin is included. What I want to talk about here is a space reporting enhancement that was made concerning VAAI XCOPY (Full Copy). First, some history…

First off, a quick refresher on XCOPY. XCOPY is a VAAI feature that offloads virtual disk copying/migration within a single array, for operations like Storage vMotion, cloning, or Deploy from Template. Telling an array to copy something from location A to location B is much faster than having ESXi issue tons of reads and writes over the SAN, and it reduces both CPU overhead on the ESXi host and traffic on the SAN. Faster cloning/migration and less overhead, yay! This lets ESXi focus on what it does best (managing and running virtual machines) while letting the array do what it does best (managing and moving around data).

Continue reading “FlashArray XCOPY Data Reduction Reporting Enhancement”
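If you want to verify that XCOPY offload is actually enabled on your hosts before digging into the space reporting, a quick PowerCLI check of the relevant VAAI setting looks like this (a sketch; it assumes an existing vCenter connection, and a value of 1 means the primitive is on):

    Get-VMHost | ForEach-Object {
        Get-AdvancedSetting -Entity $_ -Name "DataMover.HardwareAcceleratedMove" |
            Select-Object @{N="Host";E={$_.Entity.Name}}, Name, Value
    }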