Updated VMware and Pure Storage Best Practices Guide

Update: Please see this page for the latest updates on best practices and relevant links.

Quick post here. I have updated the Pure Storage FlashArray Best Practices Guide for VMware vSphere. Not a total overhaul but there are some changes to note.


Updates include:

  • New information for vSphere 6.0. This mostly focuses on what supports vSphere 6.0 and reinforces that the current best practices remain the same. Expect a lot more vSphere 6 content in forthcoming updates, though. As new storage features, such as VVols, are tested and considered in the latest version of the VMware platform, they will be included in this guide.
  • Queue depth changes are no longer mentioned in this document. Changing the queue depth is considered a tweak that most people will not need. Don’t fix what isn’t broken is the mantra here.
  • More detailed and clarified instructions on iSCSI setup.
  • General tightening and simplification of the document.
  • New content pack for Log Insight (which will be out soon). The changes are detailed in the document.


Direct Guest OS UNMAP in vSphere 6.0

This is certainly not my first post about UNMAP and I am pretty sure it will not be my last, but I think this is one of the more interesting updates of late. vSphere 6.0 introduces support for direct UNMAP operations issued from inside a virtual machine by the guest OS. Importantly, this is now supported with a virtual disk instead of the traditional requirement of a raw device mapping.
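As a point of reference, the host-side prerequisite in vSphere 6.0 is an advanced VMFS setting; here is a minimal sketch of checking and enabling it with esxcli, assuming /VMFS3/EnableBlockDelete is the gating setting for your build (in-guest UNMAP also requires a thin virtual disk, so treat this as a sketch rather than the full procedure):

    # Check the current value of the EnableBlockDelete advanced setting on the ESXi host
    esxcli system settings advanced list -o /VMFS3/EnableBlockDelete

    # Enable it (0 = disabled, 1 = enabled)
    esxcli system settings advanced set -i 1 -o /VMFS3/EnableBlockDelete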

Continue reading “Direct Guest OS UNMAP in vSphere 6.0”

ESXi IO Operations Limit Parameter and IO Balance

Quick post here. I am working on updating some documentation and I wanted to add a bit more color to a section on changing the IO Operations limit for ESXi NMP Round Robin devices. The Pure Storage recommendation is to change this value from the default of 1,000 to 1, so that ESXi switches logical paths after every single I/O instead of after every 1,000. There are some performance benefits to this setting, as well as some evidence of improved failover time in the case of a path failure. I am not going to get into the veracity of those benefits right now. What I wanted to share here is that there is no doubt changing this to 1 makes a big difference to I/O balance on the array itself (a quick sketch of the per-device command follows below). Continue reading “ESXi IO Operations Limit Parameter and IO Balance”
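For reference, a minimal esxcli sketch for the per-device change, where the naa identifier is a placeholder for one of your FlashArray devices:

    # Switch the Round Robin policy for this device to rotate paths after every single I/O
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1

    # Verify the change took effect
    esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx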

Pure Storage and VMware VAAI

Today I posted a new document to our repository on purestorage.com: Pure Storage and VMware Storage APIs for Array Integration—VAAI. This is a new white paper that describes in detail the VAAI block primitives that VMware offers and that we support. It also sets performance expectations, comparing before and after, and showing how the operations perform at scale. Some best practices are listed as well; the why and how of those recommendations are also described within.

I have to say, especially when it comes to XCOPY, I have never seen a storage array do so well with it. It is really quite impressive how fast XCOPY sessions complete and how scaling up (in terms of the number of VMs or the size of the VMDKs) does not weaken the process at all. The main purpose of this post is to alert you to the new document, but I will go over some high-level performance information as well. Read the document for the details and more.
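If you want to check which VAAI primitives a given device reports before digging into the paper, esxcli shows this directly; a quick sketch, with the naa identifier as a placeholder:

    # Show VAAI primitive support (ATS, Clone/XCOPY, Zero/WRITE SAME, Delete/UNMAP) for a device
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

    # Hardware acceleration must also be enabled at the host level (a value of 1 means enabled)
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking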



Continue reading “Pure Storage and VMware VAAI”

Provisioning a new ScaleIO volume in a VMware environment

I recently posted about adding capacity to a ScaleIO storage pool, so the next logical step is provisioning a new volume. In this post, I am going to cover the straightforward act of creating a new volume from a storage pool, mapping it to a ScaleIO Data Client (SDC), and then presenting it to the VMware cluster.


The first step is to ensure we have enough space to configure a new volume of the size we desire. Either the GUI or the CLI will suffice:
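For the CLI route, a minimal sketch using the ScaleIO CLI (scli); the protection domain and storage pool names are placeholders, and the flag names are from memory, so verify them against your ScaleIO version:

    # Log in to the MDM (prompts for the password)
    scli --login --username admin

    # Check available capacity in the target storage pool
    scli --query_storage_pool --protection_domain_name pd1 --storage_pool_name pool1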

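With capacity confirmed, creating the volume and mapping it to an SDC looks roughly like the following; again a sketch with placeholder names, and how the volume is ultimately presented to the ESXi hosts depends on how the SDC is deployed in your environment:

    # Create a new 512 GB volume from the storage pool
    scli --add_volume --protection_domain_name pd1 --storage_pool_name pool1 --size_gb 512 --volume_name vmware_vol01

    # Map the volume to the SDC that serves the ESXi hosts
    scli --map_volume_to_sdc --volume_name vmware_vol01 --sdc_ip 192.168.1.10

    # Once the hosts can see the device, rescan so it can be formatted as a VMFS datastore
    esxcli storage core adapter rescan --all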

Continue reading “Provisioning a new ScaleIO volume in a VMware environment”

Virtual Storage Integrator 5.6: VPLEX Provisioning

I’m (Drew Tonnesen, @drewtonnesen) back for another guest post, this time continuing Cody’s theme of VSI 5.6. Besides the much anticipated (and long awaited) striped meta capability in Unified Storage Management (USM) 5.6 for VMAX, there is now VPLEX provisioning! For those of us who use VPLEX with VMware, this simplifies the creation of datastores on VPLEX.

Continue reading “Virtual Storage Integrator 5.6: VPLEX Provisioning”

Changing the default IOPS value for VMware Round Robin and Symmetrix Devices

A common recommendation from storage vendors is to change the default IOPS setting for VMware’s Native Multi-Pathing (NMP) Path Selection Policy (PSP) Round Robin. The IOPS setting controls how many I/Os are sent down a single logical path before switching to the next path; by default that number is 1,000 I/Os. The VMAX recommendation is to set it to 1. The purpose of this blog post is not to debate the setting, but to help those who want to use it. Regardless, I have seen many customers benefit from this recommendation, and once they see a benefit they want to know: can I make this setting a default?
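For those who do want it as a default, the usual approach is an SATP claim rule rather than a per-device change; here is a minimal sketch for Symmetrix devices (the SATP, vendor, and model strings shown are the typical values for VMAX, but confirm them for your environment):

    # Add a claim rule so newly discovered Symmetrix devices default to Round Robin with an IOPS limit of 1
    esxcli storage nmp satp rule add -s VMW_SATP_SYMM -V EMC -M SYMMETRIX -P VMW_PSP_RR -O iops=1 -e "Symmetrix RR iops=1"

    # List the SATP rules to confirm it was added
    esxcli storage nmp satp rule list | grep SYMM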

Continue reading “Changing the default IOPS value for VMware Round Robin and Symmetrix Devices”