VMFS Snapshots and the FlashArray Part I: Mounting an unresolved VMFS

This is part 1 of a 7-part series. Questions around managing VMFS snapshots have been cropping up a lot lately, and I realized I didn’t have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing it. So I put together a long series on how to do all of it. Let’s start with what an unresolved VMFS is and how to mount it.

The series being:

  1. Mounting an unresolved VMFS
  2. Why not force mount?
  3. Why might a VMFS resignature operation fail?
  4. How to correlate a VMFS and a FlashArray volume
  5. How to snapshot a VMFS on the FlashArray
  6. How to mount a VMFS FlashArray snapshot
  7. Restoring a single VM from a FlashArray snapshot
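
To set the stage for part 1: when ESXi sees a copied volume whose on-disk VMFS signature does not match the device it lives on, it treats it as an unresolved VMFS. As a minimal sketch of what the CLI side of finding and resignaturing one looks like (the volume label below is just a placeholder for the original datastore name), consider:

esxcli storage vmfs snapshot list

esxcli storage vmfs snapshot resignature -l "MyDatastore"

The first command lists any unresolved copies the host can see; the second writes a new signature to the copy so it can be mounted as a new datastore. The series goes into when (and when not) to do this.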

Continue reading “VMFS Snapshots and the FlashArray Part I: Mounting an unresolved VMFS”

FlashArray and VMware documentation update for vSphere 6

I have completed updates for two of my main VMware vSphere documents for the Pure Storage FlashArray. These include the standard best practices document and the white paper explaining VAAI in detail and how it works on the FlashArray.

Best Practices Document Link

VAAI White Paper Link

The best practices document has mainly been updated with information that this blog has shown in the past couple of months. Notably:

  • vSphere 6 updates: supported Web Client Plugin versions, changes in virtual disk recommendations, in-guest UNMAP support, etc.
  • VMFS UNMAP changes when it comes to best practice recommendations
  • vRealize Operations Management Pack
  • EFI-enabled VMs and Disk.DiskMaxIOSize

The VAAI document received a similar set of updates:

  • vSphere 6 changes, mainly focused on the thin virtual disk XCOPY enhancements
  • UNMAP changes, block counts, performance and in-guest support (EnableBlockDelete)

Both documents are also updated for the FlashArray//m, but this is mainly a cosmetic change, as nothing really changes for the VMware environment and no recommendations are different. Of course, the documents have also been cleaned up and re-arranged into a semi-new format to be more reader-friendly.

Important! If you have old versions of these documents, delete them! They get updated frequently (a few times a year at least) and the changes can be important. When you need to refer to the guides, please check back with the Pure Storage community for the latest versions.

Enjoy! As always, feedback on these documents is very welcome.


UNMAP Block Count Behavior Change in ESXi 5.5 P3+

I was recently doing some troubleshooting for a customer who was using my UNMAP PowerCLI script and discovered a change in UNMAP behavior in ESXi 5.5+. The issue was that the script was taking quite a while to complete. After some logic optimizations and increased timeouts the script sped up a bit and fewer timeout errors occurred, but many of the UNMAP operations were still taking far longer than expected. Eventually we threw our hands up and called it good enough. More recently, I was testing a third-party UNMAP tool and ran into similar behavior, so I dug into it a bit more and found some semi-unexpected changes in how UNMAP works, specifically when leveraging non-default block iteration counts.

Continue reading “UNMAP Block Count Behavior Change in ESXi 5.5 P3+”
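
For context, the block iteration count in question is the optional reclaim-unit parameter on the esxcli UNMAP command. The datastore name below is a placeholder, and 200 happens to be the default count:

esxcli storage vmfs unmap -l "MyDatastore" -n 200

The behavior change I dug into concerns what ESXi actually does when that count is set to something other than the default.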

Changing an ESXi SATP Rule

One of the few hard requirements we make when configuring best practices on ESXi for the FlashArray is to create a SATP rule. A SATP rule simply describes a certain configuration (mainly around multipathing) for a specific set of devices (usually the devices from a given array). For the FlashArray, this rule consists of making sure devices use Round Robin with an I/O operations limit of 1.

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V PURE -M FlashArray -P "VMW_PSP_RR" -O iops=1 -e "FlashArray SATP Rule"
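
There is no "edit" operation in the SATP rule namespace, so changing an existing rule generally means removing it and adding it back with the new options. A rough sketch (the remove has to match the options the rule was originally created with):

esxcli storage nmp satp rule list | grep -i pure

esxcli storage nmp satp rule remove -s VMW_SATP_ALUA -V PURE -M FlashArray -P "VMW_PSP_RR" -O iops=1 -e "FlashArray SATP Rule"

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V PURE -M FlashArray -P "VMW_PSP_RR" -O iops=1 -e "FlashArray SATP Rule"

Keep in mind that a rule only applies to devices at claim time, so devices that are already presented keep their current settings until they are reclaimed or the host is rebooted.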

Continue reading “Changing an ESXi SATP Rule”

Add Storage Wizard Slowness and Unresolved VMFS Volumes

This week I received a question from a customer about some slowness they were seeing in the vSphere “Add Storage” wizard. This is a problem that has occurred quite a few times over the years for a variety of different reasons. VMware has fixed most of them, and luckily this latest one was known and has a relatively simple solution: an option called VMFS.UnresolvedVolumeLiveCheck.
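
Assuming the option is exposed like other VMFS.* host advanced settings on your build (worth verifying before relying on it), it can be inspected and toggled from the CLI with something along these lines:

esxcli system settings advanced list -o /VMFS/UnresolvedVolumeLiveCheck

esxcli system settings advanced set -o /VMFS/UnresolvedVolumeLiveCheck -i 0

The full post covers what the option actually controls and whether disabling the live check is appropriate for your environment.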


Continue reading “Add Storage Wizard Slowness and Unresolved VMFS Volumes”

Updated VMware and Pure Storage Best Practices Guide

Update: Please see this page for the latest best practices updates and relevant links.

Quick post here. I have updated the Pure Storage FlashArray Best Practices Guide for VMware vSphere. It is not a total overhaul, but there are some changes to note.


Updates include:

  • New information for vSphere 6.0. This mostly focuses on what supports vSphere 6.0 and reinforces that current best practices remain the same. Expect a lot more vSphere 6 content in forthcoming updates; as new storage features (such as VVols) are tested on the latest version of the VMware platform, they will be included in this guide.
  • Queue depth changes are no longer mentioned in this document. Messing with this is considered a tweak that most people will not need. Don’t break what isn’t broken is the mantra here.
  • More, and clearer, instruction on iSCSI setup.
  • General tightening and simplification of the document.
  • New content pack for Log Insight (which will be out soon). The changes are detailed in the document.


Direct Guest OS UNMAP in vSphere 6.0

This is certainly not my first post about UNMAP and I am pretty sure it will not be my last, but I think this is one of the more interesting updates of late. vSphere 6.0 has a new feature that supports direct UNMAP operations issued from a guest OS inside a virtual machine. Importantly, this is now supported with a virtual disk instead of the traditional requirement of a raw device mapping.
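
The host-side prerequisite for this is the VMFS3.EnableBlockDelete advanced setting (the same EnableBlockDelete mentioned in the VAAI paper update above). As a quick sketch, checking and enabling it per host looks like:

esxcli system settings advanced list -o /VMFS3/EnableBlockDelete

esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1

With that set, and the other vSphere 6.0 requirements met, UNMAPs issued inside the guest against a thin virtual disk are passed down to the datastore and ultimately to the array.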

Continue reading “Direct Guest OS UNMAP in vSphere 6.0”

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

I jumped on a call the other day to talk about iSCSI setup for a new FlashArray, and the main reason for the discussion was co-existence with a pre-existing array from another vendor. They were following my blog post on iSCSI setup and things didn’t quite match up.

The recommended way to set up multipathing for software iSCSI is to configure more than one VMkernel port, each with exactly one active host adapter (physical NIC). You then add those VMkernel ports to the iSCSI software adapter, and the adapter will use those specific NICs for I/O transmission and load-balance across those ports.
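
In CLI terms, the binding step looks something like the following; the adapter and VMkernel port names below are placeholders for whatever your environment uses:

esxcli iscsi networkportal add -A vmhba33 -n vmk1

esxcli iscsi networkportal add -A vmhba33 -n vmk2

Each bound VMkernel port becomes its own logical path to the target, which is what gives the software iSCSI adapter something to actually multipath across.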

Continue reading “Another look at ESXi iSCSI Multipathing (or a Lack Thereof)”

Setting up iSCSI with VMware ESXi and the FlashArray

I’ve been with Pure Storage for about ten months (time flies!) and a noticeable trend I’ve seen in the past six or so months is the number of customers deciding to use iSCSI as their storage protocol of choice. This is increasingly common in greenfield environments where they don’t want to invest in a Fibre Channel infrastructure. I’ve helped quite a few set this up in VMware environments, so I thought I would put a post together on configuring ESXi software iSCSI with the Pure Storage FlashArray (I have yet to see a hardware iSCSI setup).
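
As a spoiler for the impatient, the very first steps, enabling the software adapter and adding a dynamic discovery target, come down to something like the two commands below, where the adapter name and target address are placeholders for your own environment:

esxcli iscsi software set --enabled=true

esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.100:3260

The rest of the post covers the networking and multipathing pieces that actually make this perform well.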

Before I begin, I highly recommend reading the following two documents from VMware:

http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

They are not long and provide very good insight into the how/what/why of iSCSI on VMware. Some of the images are a bit old, but the underlying concepts have not changed. Continue reading “Setting up iSCSI with VMware ESXi and the FlashArray”

ESXi IO Operations Limit Parameter and IO Balance

Quick post here. I am working on updating some documentation and I wanted to add a bit more color to a section on changing the IO Operations limit for ESXi NMP Round Robin devices. The Pure Storage recommendation is to change this value to one from the default of 1,000. Therefore, ESXi will switch logical paths after each I/O instead of 1,000. There are some performance benefits to this and some evidence for improved failover time (in the case of a path failure) with this setting. I am not going to get into the veracity of these benefits right now. What I wanted to share here is that there is no doubt changing this to 1 makes a big difference to I/O balance on the array itself. Continue reading “ESXi IO Operations Limit Parameter and IO Balance”