So a few updates. I just updated my vSphere Best Practices guide and it can be found here:
Download Best Practices Guide PDF
I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. Furthermore, there are a few documents I have written and published that I want to mention.
- FlashArray Plugin for vRealize Orchestrator User Guide
- Implementing FlashArray in a vRealize Private Cloud
Continue reading Documentation Update, Best Practices and vRealize
This is certainly not my first post about UNMAP and I am pretty sure it will not be my last, but I think this is one of the more interesting updates of late. vSphere 6.0 introduces support for direct UNMAP operations issued from the Guest OS inside a virtual machine. Importantly, this is now supported with a virtual disk instead of the traditional requirement of a raw device mapping.
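As a quick sketch of what this enables (assuming a guest whose virtual disk configuration advertises UNMAP support; the mount point and drive letter below are placeholders, not from the post), the guest itself can issue the reclaim:

```shell
# Linux guest: trim free space on a mounted filesystem
# ("/mnt/data" is an example mount point)
fstrim -v /mnt/data

# Windows guest (PowerShell): retrim a volume
# ("E" is an example drive letter)
# Optimize-Volume -DriveLetter E -ReTrim -Verbose
```

Both commands are standard OS tools; whether the trim actually reaches the array depends on the vSphere 6.0 support described above.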
Continue reading Direct Guest OS UNMAP in vSphere 6.0
The vSphere Web Client Plugin for the Pure Storage FlashArray has been updated and released, and it is the largest update to the plugin since, well, it was first released. There are a lot of feature enhancements, the majority focused on integrating local and remote replication management into the plugin. Our long-term goal is feature parity between FlashArray management in the plugin and our own GUI, and it is getting close. Let's take a look at the new features.
Continue reading Pure Storage vSphere Web Client Plugin 2.0 Released
Ah, access controls…always popular. Who doesn't want everyone to be admins?! Well…um…admins don't! In this post I am going to run through integrating Active Directory with the Pure Storage FlashArray. Then I will talk about how it works with the vSphere Web Client Plugin, because I would be ashamed if I didn't mention VMware at least once in a post.
Continue reading Integrating Active Directory with the Pure Storage FlashArray
I posted a week or so ago about the ESXCLI UNMAP process in vSphere 5.5 on the Pure Storage FlashArray here, and concluded that larger block counts are highly beneficial to the UNMAP process. So the recommendation was simply to use a block count sufficiently higher than the default of 200 MB to speed up the UNMAP operation. I received a few questions about a more specific recommendation (and had some myself), so I decided to dive into this a little deeper to see if I could provide some more concrete guidance. In the end, a large block count is perfectly fine; if you want to know more details, read on!
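For reference, the operation in question is the esxcli UNMAP command, where the `-n` flag sets the reclaim unit in blocks (the datastore name and block count below are placeholders for illustration):

```shell
# Reclaim dead space on a VMFS datastore with a larger-than-default
# block count ("MyDatastore" and 60000 are example values)
esxcli storage vmfs unmap -l MyDatastore -n 60000
```

Omitting `-n` uses the default reclaim unit, which is what the larger-block-count recommendation above is meant to improve on.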
Continue reading Deeper dive on vSphere UNMAP block count with Pure Storage
One of the many VMware integration pieces that has impressed me since I joined Pure Storage is the vSphere Web Client Plugin. It was not only one of the first storage vendor plugins released for the Web Client, but it is also one of the simplest I have used.
Continue reading The Pure Storage Plugin for the vSphere Web Client
One of the main things I have been doing in my first few weeks at Pure Storage (which has been nothing but awesome so far, by the way) is going through all of our VMware best practices and integration points: testing them, seeing how they work, and looking for ways they can be improved. The latest thing I looked into was Dead Space Reclamation (which from here on out I will just refer to as UNMAP) with the Pure Storage FlashArray, specifically on ESXi 5.5. This is a pretty straightforward process, but I did find something interesting that is worth noting.
Continue reading VMware Dead Space Reclamation (UNMAP) and Pure Storage
As you might have read on my blog a few days ago, EMC released an updated version of the Virtual Storage Integrator tool for the vSphere Web Client that supports direct provisioning and some management of VNX and VMAX storage. The previous version supported ViPR-only provisioning. If you didn't see that post, you can check it out here. Inevitably, when a product involves cross-application and, importantly, cross-server integration, many customers ask what the firewall requirements are to get it to work. Let's take a look.
Continue reading Firewall requirements for EMC VSI 6.1 for vSphere Web Client
Today the long-awaited update to Virtual Storage Integrator for the vSphere Web Client has been released! Six months or so ago, EMC released the first iteration of the VSI Web Client (version 6.0), which supported provisioning of storage but only for environments enabled with ViPR. The latest release (version 6.1) now adds support for direct provisioning of storage from a VMAX or VNX array.
Continue reading Virtual Storage Integrator for vSphere Web Client