I’ve done a few VMware Log Insight posts in the past year but I have yet to do one for Pure Storage. Log Insight is a product that I really love, and VMware has been updating it like crazy since its initial release. Just recently they announced the 2.0 version of Log Insight (more info here). Besides being functionally useful, it is VERY easy to use: from kicking off the deployment (it is an OVA) to first use takes about ten minutes at most.
Scripting is a wonderful thing; it saves me tons of time, and PowerShell is no exception. VMware offers a very robust set of PowerShell cmdlets (called PowerCLI) that lets you do essentially anything you can think of in vSphere. Of course, this is all specific to VMware or Windows. What about including Pure Storage commands in PowerShell (PowerCLI) scripts? It is actually pretty simple using the readily available SSH plugin for PowerShell.
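To give a sense of the general pattern, here is a minimal PowerShell sketch, not the exact module or commands from the post: it assumes the Posh-SSH module, a placeholder array address and volume name, and a sample Purity CLI command whose syntax you should verify against your Purity release.

```powershell
# Minimal sketch: run a Purity CLI command on the FlashArray over SSH from PowerShell.
# Assumes the Posh-SSH module; the array address, credentials, and volume name are
# placeholders, not values from the original post.
Import-Module Posh-SSH

$arrayAddress = "flasharray.example.com"
$creds = Get-Credential            # the array's CLI user

# Open an SSH session to the array
$session = New-SSHSession -ComputerName $arrayAddress -Credential $creds -AcceptKey

# Create a 500 GB volume via the Purity CLI (verify syntax for your Purity release)
$result = Invoke-SSHCommand -SessionId $session.SessionId -Command "purevol create --size 500G demo-volume"
$result.Output

# Clean up the session
Remove-SSHSession -SessionId $session.SessionId | Out-Null
```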
Ah, access controls…always popular: who doesn’t want everyone to be an admin?! Well…um…admins don’t! In this post I am going to run through integrating Active Directory with the Pure Storage FlashArray, and then talk about how it works with the vSphere Web Client Plugin, because I would be ashamed if I didn’t at least mention VMware once in a post.
This is a post I plan on updating on a rolling basis. I have been working on updating the vSphere and Pure Storage Best Practices document, and there are a few settings that can be tweaked to increase performance. A common question I have, and occasionally receive, is: can this be easily simplified or automated? Of course! PowerCLI is the best option in most cases, and I will continue to add to this post or update it as I find newer or better ways of doing things.
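As a taste of what this kind of automation looks like, here is a minimal PowerCLI sketch, not one of the full scripts linked above: it reports any FlashArray device in a cluster whose path policy is not the recommended Round Robin. The vCenter and cluster names are placeholders, and naa.624a9370* is the FlashArray NAA identifier prefix.

```powershell
# Minimal sketch: report FlashArray devices whose path policy is not the
# recommended Round Robin, across every host in a cluster. The vCenter and
# cluster names below are placeholders.
Connect-VIServer -Server "vcenter.example.com"

$report = foreach ($esx in Get-Cluster -Name "Production" | Get-VMHost) {
    $esx | Get-ScsiLun -LunType disk |
        Where-Object { $_.CanonicalName -like "naa.624a9370*" -and
                       $_.MultipathPolicy -ne "RoundRobin" } |
        Select-Object @{N = "Host"; E = { $esx.Name }}, CanonicalName, MultipathPolicy
}

$report | Format-Table -AutoSize
```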
****UPDATED SCRIPTS AND NEW FUNCTIONALITY: check out this blog post for insight****
Update: get my scripts on my GitHub page here:
I posted a week or so ago about the ESXCLI UNMAP process with vSphere 5.5 on the Pure Storage FlashArray here, and came to the conclusion that larger block counts are highly beneficial to the UNMAP process. So the recommendation was simply to use a block count sufficiently higher than the default of 200 MB to speed up the UNMAP operation. I received a few questions about a more specific recommendation (and had some myself), so I decided to dive into this a little deeper to see if I could provide guidance that was a little more concrete. In the end, a large block count is perfectly fine; if you want the details, read on!
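For reference, here is a minimal PowerCLI sketch of kicking off an UNMAP with a larger reclaim unit (block count) than the default via Get-EsxCli. The host and datastore names and the 60,000-block value are placeholders, not a recommendation from the post, and the -V2 argument interface requires a reasonably recent PowerCLI release.

```powershell
# Minimal sketch: run a VMFS UNMAP with a larger reclaim unit (block count)
# than the 200-block default, using esxcli through PowerCLI. Host and
# datastore names are placeholders.
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2

$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = "PureDatastore01"
$unmapArgs.reclaimunit = 60000    # blocks reclaimed per iteration; default is 200

$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)
```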
One of the many VMware integration pieces that has impressed me since I joined Pure Storage is the vSphere Web Client Plugin. It was not only one of the first storage vendor plugins released for the Web Client, but it is also one of the simplest I have used.
One of the main things I have been doing in my first few weeks at Pure Storage (which has been nothing but awesome so far, by the way) is going through all of our VMware best practices and integration points: testing them, seeing how they work, and looking at whether they can be improved. The latest thing I looked into was Dead Space Reclamation (which from here on out I will just refer to as UNMAP) with the Pure Storage FlashArray, specifically on ESXi 5.5. This is a pretty straightforward process, but I did find something interesting that is worth noting.
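Before running a reclaim it is worth confirming that ESXi actually reports Delete (UNMAP) support for the FlashArray devices. Here is a minimal PowerCLI sketch of that check; the host name is a placeholder, naa.624a9370* is the FlashArray NAA prefix, and the property names are those surfaced by the esxcli VAAI status namespace, so verify them in your environment.

```powershell
# Minimal sketch: list the VAAI Delete (UNMAP) status reported by ESXi for
# FlashArray devices. Host name is a placeholder.
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2

$esxcli.storage.core.device.vaai.status.get.Invoke() |
    Where-Object { $_.Device -like "naa.624a9370*" } |
    Select-Object Device, DeleteStatus
```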
Ah, my first official post during my tenure at Pure, and it couldn’t have happened at a better time! Just in time for the Purity 4.0 release, which we announced today. While there are plenty of under-the-cover enhancements, I am going to focus on the two biggest parts of the release: new hardware and replication. There are other features, such as hardware security token locking, but I am not going to go into those in this post. So first let’s talk about the advancement in hardware!
This is a topic I have posted about in the past, but this time I am going to cover it for the Pure Storage FlashArray. Anyone familiar with the VMware Native Multipathing Plugin (NMP) probably knows about the Round Robin “IOPS” value, which I will also refer to interchangeably as the IO Operation Limit. This value dictates how often NMP switches paths to a device: after the configured number of I/Os, NMP moves to a different path. The default value is 1,000, but it can be set as low as 1. For the highest performance, Pure recommends changing this setting to 1 for all devices. The tricky thing is that it has to be done for every device on every host, and a simple way to do that isn’t immediately obvious. Here is the procedure.
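Here is a minimal PowerCLI sketch of that procedure; the post itself may well do this through esxcli instead. The vCenter name is a placeholder, naa.624a9370* is the FlashArray NAA prefix, and CommandsToSwitchPath is PowerCLI's parameter for the Round Robin IOPS value.

```powershell
# Minimal sketch: set the Round Robin IO Operation Limit to 1 for every
# FlashArray device on every host attached to this vCenter. The vCenter
# name is a placeholder.
Connect-VIServer -Server "vcenter.example.com"

foreach ($esx in Get-VMHost) {
    $pureLuns = $esx | Get-ScsiLun -LunType disk |
        Where-Object { $_.CanonicalName -like "naa.624a9370*" }
    foreach ($lun in $pureLuns) {
        # Round Robin must be the active policy for the IO limit to apply;
        # CommandsToSwitchPath is the number of I/Os sent before switching paths.
        Set-ScsiLun -ScsiLun $lun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1
    }
}
```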