This doesn’t come up very often these days, but every once in a while it does, and every time it does I look to see if we have documentation on it and there never seems to be any. After writing this post I did find a forum post where my friend Drew answers it too. Anyways, let’s quickly explain the situation.
Most block vendors these days tell customers to change the path switching policy for their storage in ESXi from the Round Robin default of 1,000 I/Os down to 1. This makes ESXi switch logical paths for a given device after every I/O instead of every 1,000. The reason I say this doesn’t come up much anymore is that in modern versions of ESXi (6.0 express patch+, 6.5 U1+ and 6.7+) we (Pure) have rules built into ESXi that make sure this is set by default without any user configuration. Many other vendors do as well.
Anyways, when using VMware tooling to check whether a device is configured properly, the value can read out differently depending on how it was set.
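As a quick sketch of what I mean, you can pull the readout for a given device with esxcli (the device identifier below is just a placeholder):

esxcli storage nmp device list -d <device-id>
esxcli storage nmp psp roundrobin deviceconfig get -d <device-id>

Depending on whether the limit was applied directly to the device or inherited from a SATP rule, that device config string can look different, which is exactly what the post digs into.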
Continue reading Demystifying IO Operation Readouts in ESXi
Yesterday, I wrote a post introducing the new latency-based round robin multipathing policy in ESXi 6.7 Update 1. You can check that out here:
Latency Round Robin PSP in ESXi 6.7 Update 1
In normal scenarios, you may not see much of a performance difference between the standard IOPS-based switching policy and the latency-based one, so don’t necessarily expect that switching policies will change anything. But then again, multipathing primarily exists not for healthy states, but to protect you during times of poor health.
Continue reading Latency-based PSP in ESXi 6.7 Update 1: A test drive
This is my first (but certainly not last) post on the new path selection policy option in vSphere 6.7 Update 1. In reality, this option was introduced in the initial release of 6.7, but it was not officially supported until Update 1.
So what is it? Well first off, see the official words from my colleague Jason Massae at VMware here:
Why was this PSP option introduced? Well, the most common path selection policy is NMP Round Robin, VMware’s built-in path selection policy for arrays that offer multiple paths. Round Robin is a great way to leverage the full performance of your array by actively using all of the paths simultaneously. Well…almost simultaneously.
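To give a sense of what the new option looks like in practice, here is a minimal sketch of switching a single device over to the latency-based sub-policy with esxcli on 6.7 Update 1 (the device identifier is a placeholder):

esxcli storage nmp psp roundrobin deviceconfig set --type=latency --device=<device-id>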
Continue reading Latency Round Robin PSP in ESXi 6.7 Update 1
One of the few hard requirements we make to configure best practices on ESXi for the FlashArray is to create a SATP rule. A SATP rule simply describes a certain configuration (mainly around multipathing) for a specific set of devices (usually devices from an array). For the FlashArray, this rule consists of making sure devices are using Round Robin and an I/O operations limit of 1.
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1" -e "FlashArray SATP Rule"
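For context, there is no esxcli option to edit a rule in place, so changing one generally means removing the old rule and adding a new one. A sketch of removing the rule above (matching the same parameters) would look something like this:

esxcli storage nmp satp rule remove -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

Keep in mind that a rule generally only takes effect as devices are claimed, so already-present devices keep their current settings until they are reclaimed or the host reboots.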
Continue reading Changing an ESXi SATP Rule
In the latest GA release of Purity, version 4.1.5, there have been some nice improvements in how we handle host connectivity and balance reporting. There is a new CLI command to monitor I/O balance from a host standpoint, as well as changes to how we report/display host connectivity in the FlashArray web GUI. Let’s take a look at these enhancements. In Part 1, I will talk about the CLI enhancement.
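For a sense of what the CLI side looks like, the balance readout is exposed through the purehost command; as I recall it from that release (verify the exact flags with the CLI help on your array), it is along these lines:

purehost monitor --balance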
Continue reading Host Connectivity Reporting Changes and IO Balance: Part 1
If my past posts are any indicator, there are a million ways to set/change/manage ESXi settings: direct configuration (CLI or GUI), PowerCLI, etc. One option I often overlook is host profiles. This has come up a few times in the past month, so I thought I would visit it and do a quick walkthrough on configuring Pure Storage FlashArray multipathing best practices with host profiles.
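For a rough idea of the flow, here is a minimal PowerCLI sketch, assuming a reference host that already has the best practices applied (hostnames and the profile name are placeholders):

# Capture a host profile from a reference host that already has FlashArray best practices applied
$refHost = Get-VMHost -Name "esxi-reference.example.com"
$hp = New-VMHostProfile -Name "FlashArray-BestPractices" -ReferenceHost $refHost

# Apply the profile to another host (ideally while it is in maintenance mode)
$target = Get-VMHost -Name "esxi-02.example.com"
Invoke-VMHostProfile -Entity $target -Profile $hp -Confirm:$false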
Continue reading Setting FlashArray Multipathing Best Practices with ESXi Host Profiles
Quick post here. I am working on updating some documentation and I wanted to add a bit more color to a section on changing the IO Operations limit for ESXi NMP Round Robin devices. The Pure Storage recommendation is to change this value from the default of 1,000 down to 1, so that ESXi switches logical paths after every I/O instead of every 1,000. There are some performance benefits to this, and some evidence of improved failover time (in the case of a path failure) with this setting. I am not going to get into the veracity of those benefits right now. What I wanted to share here is that there is no doubt changing this to 1 makes a big difference to I/O balance on the array itself.
Continue reading ESXi IO Operations Limit Parameter and IO Balance
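For reference, the per-device version of that change looks like this with esxcli (the device identifier is a placeholder):

esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=<device-id>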
This is a post I plan on just updating on a rolling basis. I have been working on updating the vSphere and Pure Storage Best Practices document, and there are a few settings that can be tweaked to increase performance. A common question I have, and occasionally receive, is: can this be easily simplified or automated? Of course! And PowerCLI is the best option in most cases. I will continue to add to this post or update it as I find newer or better ways of doing things.
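To give a flavor of the kind of thing this covers, here is a minimal PowerCLI sketch that sets Round Robin with an IO Operations limit of 1 on FlashArray devices across all hosts (FlashArray volumes typically show up with the naa.624a9370 prefix; verify against your own environment first):

# Set Round Robin and an IO Operations limit of 1 on all FlashArray devices on every host
foreach ($esx in Get-VMHost) {
    Get-ScsiLun -VmHost $esx -LunType disk |
        Where-Object { $_.CanonicalName -like "naa.624a9370*" } |
        Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1
}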
****UPDATED SCRIPTS AND NEW FUNCTIONALITY check out this blog post for insight****
Update: get my scripts on my GitHub page here:
Continue reading VMware PowerCLI and Pure Storage
This is a topic I have posted about in the past, but this time I am going to speak about it in the context of the Pure Storage FlashArray. Anyone familiar with the VMware Native Multipathing Plugin probably knows about the Round Robin “IOPS” value, which I will also refer to interchangeably as the IO Operation Limit. This value dictates how often NMP switches paths to a device: after a configured number of I/Os, NMP will move to a different path. The default value is 1,000, but it can be set as low as 1. For the highest performance, Pure recommends changing this setting to 1 for all devices. The tricky thing is that it has to be done for every device on every host, and doing it in a simple way isn’t immediately obvious. But here is the procedure.
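As a rough sketch of one way to do it per host straight from the ESXi shell (the grep pattern matches Pure’s typical naa prefix; double-check your device list before running anything like this):

for dev in $(esxcfg-scsidevs -c | awk '{print $1}' | grep naa.624a9370); do
   esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$dev
done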
Continue reading Changing the default VMware Round Robin IO Operation Limit value for Pure Storage FlashArray devices