Documentation Update, Best Practices and vRealize

So a few updates. I just updated my vSphere Best Practices guide and it can be found here:

Download Best Practices Guide PDF

I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth calling out. Furthermore, there are a few other documents I have written and published that I want to highlight:

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

Continue reading “Documentation Update, Best Practices and vRealize”

What’s New in vSphere 6.5 Storage vSpeaking Podcast

Quick post. Recently I had the pleasure, alongside Cormac Hogan, of being invited onto the Virtually Speaking Podcast, hosted by Pete Flecha (Technical Marketing for VMware’s Storage and Availability products with a focus on VVols) and John Nicholson (Technical Marketing for VMware’s Storage and Availability products with a focus on vSAN).

I had a lot of fun. We spoke about the new features of vSphere 6.5 from a core storage standpoint, much of which I have been posting about in recent days: UNMAP, VMFS-6, and so on. The invitation came out of our work writing the “What’s New in Core Storage in vSphere 6.5” white paper.

https://blogs.vmware.com/virtualblocks/2016/11/28/announcing-vsphere-6-5-core-storage-white-paper/

Check out the podcast here:

https://soundcloud.com/virtuallyspeakingpodcast/episode-34-vsphere-65-core-storage

What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support

This is the fourth in my series of what’s new in ESXi 6.5 storage. Here are the previous posts:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Here is another post about vSphere 6.5 UNMAP! There are so many improvements, and this is a big one for many users. It certainly makes me happy. Previously, in vSphere 6.0.x, when in-guest space reclamation was introduced, enabling Changed Block Tracking (CBT) on a given virtual disk blocked the guest OS from issuing UNMAP to that disk, and therefore prevented it from leveraging the goodness UNMAP provides. Rumor has it that this undesirable behavior continued in vSphere 6.5…
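
For context, here is a minimal pyVmomi sketch of how you might check the two settings in play: whether CBT is enabled on a VM, and whether the host-level VMFS3.EnableBlockDelete option (which governs in-guest UNMAP processing on VMFS) is turned on. This assumes an already-connected session and that you have located the VM and host objects yourself; the helper name is mine.

```python
from pyVmomi import vim  # assumes pyVmomi is installed and a session exists

def check_in_guest_unmap_prereqs(vm, host):
    """Report CBT state for a VM and the EnableBlockDelete setting on its host.

    vm   -- a vim.VirtualMachine object
    host -- the vim.HostSystem the VM runs on
    """
    # Changed Block Tracking is a per-VM configuration flag
    cbt_enabled = bool(vm.config.changeTrackingEnabled)

    # Host advanced option that lets VMFS process UNMAP issued from inside a guest
    opts = host.configManager.advancedOption.QueryOptions('VMFS3.EnableBlockDelete')
    block_delete = bool(opts[0].value) if opts else False

    print(f'{vm.name}: CBT enabled = {cbt_enabled}')
    print(f'{host.name}: VMFS3.EnableBlockDelete = {block_delete}')
    return cbt_enabled, block_delete
```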


Continue reading “What’s new in ESXi 6.5 Storage Part IV: In-Guest UNMAP CBT Support”

What’s new in ESXi 6.5 Storage Part III: Thin hot extend

Let me start this post off by saying that the “What’s new in vSphere 6.5 Storage” white paper has been officially published and can be read here:

https://storagehub.vmware.com/#!/vsphere-core-storage/

I had the distinct pleasure of helping Cormac and Paudie with this paper. Thanks to both of them for including me and providing me with access to the engineers who wrote these features/enhancements!

So anyways, read that document for a high-level overview of all of the new features and enhancements. I have previously written two posts in this series:

What’s new in ESXi 6.5 Storage Part I: UNMAP

What’s new in ESXi 6.5 Storage Part II: Resignaturing

This is a short post; I mainly wanted to share the white paper. But it is important to note that VMware is still marching forward with improving VMFS and virtual disk flexibility, so I wanted to highlight a new enhancement: thin virtual disk hot extension.

Prior to vSphere 6.5, thin virtual disks could be hot extended, but there were limits. The main one: if the extend operation would bring the VMDK size above 2 TB (or the VMDK was already 2 TB), the operation was not permitted.

If the VM is powered on and I try to apply such a configuration change, I get an error.

So this is fixed in vSphere 6.5! And the nice thing is that it does not require VMFS-6 or the latest virtual machine hardware version; simply hosting the VM on a vSphere 6.5 host provides this functionality.


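To show the same operation outside of the Web Client, here is a rough pyVmomi sketch of a hot extend on a running VM. The helper name and the 3 TB target are mine; it assumes an already-connected session and that the VM’s first virtual disk is the thin disk you want to grow.

```python
from pyVmomi import vim  # assumes pyVmomi is installed and a session exists

def hot_extend_first_disk(vm, new_capacity_gb):
    """Grow the first virtual disk of a powered-on VM to new_capacity_gb."""
    # Locate the first virtual disk attached to the VM
    disk = next(dev for dev in vm.config.hardware.device
                if isinstance(dev, vim.vm.device.VirtualDisk))

    # Edit the existing device in place with the larger capacity
    disk.capacityInKB = new_capacity_gb * 1024 * 1024
    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
    change.device = disk

    spec = vim.vm.ConfigSpec(deviceChange=[change])
    # On a 6.5 host this succeeds even when the result exceeds 2 TB
    return vm.ReconfigVM_Task(spec=spec)

# Example: grow the first disk to 3 TB while the VM is running
# task = hot_extend_first_disk(vm, 3 * 1024)
```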

Sweet! But this really just reinforces my view that there are few remaining reasons not to use thin virtual disks with the latest releases of vSphere. They are much more flexible, and a lot of engineering is going into making them better; not much work is being done on thick-type virtual disks. Look for an upcoming blog post on some performance enhancements as well.

Pure Storage vSphere Web Client Plugin and Multiple vCenters

I was working on rebuilding my VMware lab environment today and, for simplicity’s sake, decided to leverage two external Platform Services Controllers (PSCs), one for each of my vCenter environments (I need two because I am setting up Site Recovery Manager), in a federated manner, akin to what was previously called linked mode. If you are not familiar with the PSC, which was added in vSphere 6.0, check out this KB. I went with the third deployment model illustrated in that KB. Continue reading “Pure Storage vSphere Web Client Plugin and Multiple vCenters”

FlashArray and VMware documentation update for vSphere 6

I have completed updates for two of my main VMware vSphere documents for the Pure Storage FlashArray. These include the standard best practices document and the white paper explaining VAAI in detail and how it works on the FlashArray.

Best Practices Document Link

VAAI White Paper Link

The best practices document has mainly been updated with information that I have covered on this blog over the past couple of months. Notably:

  • vSphere 6 updates, support for Web Client Plugin versions, changes in virtual disk recommendations, in-guest UNMAP support, etc.
  • VMFS UNMAP changes when it comes to best practice recommendations
  • vRealize Operations Management Pack
  • EFI-enabled VMs and Disk.DiskMaxIOSize (see the sketch just after this list)
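
Since Disk.DiskMaxIOSize is called out in that last bullet: it is a per-host advanced setting, and if you would rather check or change it programmatically than click through the Web Client, a minimal pyVmomi sketch could look like the following. The helper name and the example value are mine and not a recommendation; refer to the best practices document for the actual guidance.

```python
from pyVmomi import vim, VmomiSupport  # assumes pyVmomi is installed and a session exists

def set_disk_max_io_size(host, size_kb):
    """Show the current Disk.DiskMaxIOSize on an ESXi host, then update it."""
    opt_mgr = host.configManager.advancedOption

    # Read the current value of the advanced option
    current = opt_mgr.QueryOptions('Disk.DiskMaxIOSize')[0].value
    print(f'{host.name}: Disk.DiskMaxIOSize is currently {current} KB')

    # Disk.DiskMaxIOSize is a long-typed option, so cast the value explicitly
    long_type = VmomiSupport.vmodlTypes['long']
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key='Disk.DiskMaxIOSize', value=long_type(size_kb))
    ])

# Example only -- pick the value the best practices guide recommends for your case
# set_disk_max_io_size(host, 4096)
```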

The VAAI document has a similar set of updates:

  • vSphere 6 changes, mainly focused on the thin virtual disk XCOPY enhancements
  • UNMAP changes, block counts, performance and in-guest support (EnableBlockDelete)

Both documents are also updated for the FlashArray//m, but this is mainly a cosmetic change: nothing really changes for the VMware environment and no recommendations are different. Of course, the documents have also been cleaned up and rearranged into a semi-new format to be more reader friendly.

Important! If you have old versions of these documents, delete them! They get updated frequently (a few times a year at least) and the changes can be important. Whenever you need to refer to the guides, please check back with the Pure Storage community for the latest versions.

Enjoy! Feedback on these documents is ALWAYS welcome.

XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks

Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into the exact design changes, so what follows is what I observed plus my educated guess on how it was done. Look for more details as time goes on.

***This blog post turned out longer than I expected; it probably should have been a two-parter, so I apologize for the length.***

As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handled XCOPY. XCOPY, as you may be aware, is the SCSI command used by ESXi to clone, Storage vMotion, or deploy-from-template VMs on a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered on three things:

  1. Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e., how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different). See the quick arithmetic sketch below for a sense of scale.
  2. Allow ESXi to query the Maximum Segment Length via an Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells ESXi what to use as a maximum transfer size and would let end users avoid the hassle of manual transfer size changes.
  3. Allow for thin virtual disks to leverage a larger transfer size than 1 MB.

The first two are supported in only a very limited fashion by VMware right now (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
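
To put those transfer sizes into perspective, here is a small back-of-the-envelope Python sketch (my own illustration, not something from the proposal) of how many XCOPY commands it takes to describe a virtual disk of a given size at different maximum transfer sizes, assuming each command describes a fully contiguous region of the maximum size:

```python
def xcopy_command_count(vmdk_size_gb, transfer_size_mb):
    """Rough count of XCOPY commands needed to describe an entire VMDK,
    assuming every command can describe a contiguous run of transfer_size_mb."""
    vmdk_size_mb = vmdk_size_gb * 1024
    # Ceiling division: a partial final segment still costs one command
    return -(-vmdk_size_mb // transfer_size_mb)

# A hypothetical 500 GB VMDK at the sizes mentioned above:
# 1 MB (thin virtual disk limit), 16 MB (ESXi maximum), 256 MB (ODX)
for size_mb in (1, 16, 256):
    print(f'{size_mb:>3} MB transfer size -> {xcopy_command_count(500, size_mb):,} commands')
```

The point is simply that the larger the transfer size, the fewer commands the host has to issue to move the same amount of data.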

Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”

Setting up iSCSI with VMware ESXi and the FlashArray

I’ve been with Pure Storage for about ten months (time flies!) and a noticeable trend I’ve seen over the past six or so months is the number of customers deciding to use iSCSI as their storage protocol of choice. This is increasingly common in greenfield environments where they don’t want to invest in a Fibre Channel infrastructure. I’ve helped quite a few customers set this up in VMware environments, so I thought I would put together a post on configuring ESXi software iSCSI with the Pure Storage FlashArray (I have yet to see a hardware iSCSI setup).

Before I begin, I highly recommend reading the following two documents from VMware:

http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

They are not long and provide very good insight into the how/what/why of iSCSI on VMware. Some of the images are a bit old, but the underlying concepts have not changed. Continue reading “Setting up iSCSI with VMware ESXi and the FlashArray”
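
For a flavor of what the full post walks through, here is a rough pyVmomi sketch of the most basic software iSCSI plumbing: enable the software adapter, point it at an iSCSI portal, and rescan. The helper name and the target address are placeholders of mine; port binding, jumbo frames, and the other best practices are covered in the post itself.

```python
from pyVmomi import vim  # assumes pyVmomi is installed and a session exists

def setup_software_iscsi(host, target_ip, target_port=3260):
    """Enable software iSCSI on an ESXi host and add one dynamic (send) target."""
    storage = host.configManager.storageSystem

    # Turn on the software iSCSI adapter (a no-op if it is already enabled)
    storage.UpdateSoftwareInternetScsiEnabled(True)

    # Find the software iSCSI HBA to attach the target to
    # (if the adapter was just enabled, it can take a moment to appear)
    hba = next(a for a in host.config.storageDevice.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba) and a.isSoftwareBased)

    # Add the array's iSCSI portal as a dynamic (send) target
    target = vim.host.InternetScsiHba.SendTarget(address=target_ip, port=target_port)
    storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])

    # Rescan so the new devices and paths show up
    storage.RescanAllHba()

# Example with a placeholder portal address
# setup_software_iscsi(host, '192.168.1.100')
```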

Pure Storage vSphere Web Client Plugin 2.0 Released

The vSphere Web Client Plugin for the Pure Storage FlashArray has been updated and released, and this is the largest update to the plugin since, well, it was first released. There are a lot of feature enhancements, the majority focused on integrating local and remote replication management into the plugin. Our long-term goal is for the plugin to offer feature parity with our own GUI for FlashArray management, and it is getting close. Let’s take a look at the new features.

Continue reading “Pure Storage vSphere Web Client Plugin 2.0 Released”

FlashStack VMware vSphere Reference Architecture

Pure Storage announced last week our very first converged architecture offering, appropriately named FlashStack. The initial release of FlashStack is built on Cisco hardware (UCS, of course) and the FlashArray. We presently have two reference architectures: one for VMware Horizon View and one for general-purpose VMware vSphere environments (choose your own guest OSes). My colleague Ravi Venkat (@ravivenk) architected the View reference architecture, while I focused on the general vSphere one. In this blog post I am going to give an overview of what we did with the vSphere reference architecture. For more information on either, refer to the respective reference architecture white papers at the usual place:

FlashStack Horizon View White Paper Link

FlashStack vSphere White Paper Link

Continue reading “FlashStack VMware vSphere Reference Architecture”