Documentation Update, Best Practices and vRealize

So a few updates. I just updated my vSphere Best Practices guide and it can be found here:

Download Best Practices Guide PDF

I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. Furthermore, there are a few other documents I have written and published that I want to highlight.

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

Best Practices Guide

First, and of note, this guide has been updated for vSphere 6.5 (up to patch 1, in fact). This is probably the most important addition in terms of content: automatic UNMAP, VMFS-6, and so on.
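If you want to check how automatic UNMAP is configured on a VMFS-6 datastore, esxcli exposes it directly. A minimal sketch, assuming a datastore labeled "myDatastore" (the label is just a placeholder):

    # Show the automatic space reclamation (UNMAP) settings for a VMFS-6 datastore
    esxcli storage vmfs reclaim config get -l myDatastore

    # Adjust the reclamation priority ("none" disables automatic UNMAP, "low" is the default)
    esxcli storage vmfs reclaim config set -l myDatastore -p low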

Also, I have pretty much retired the MaxHWTransferSize recommendation to change it from 4 MB (the default) to 16 MB (the max). The original reasoning for this change was to increase the speed of XCOPY sessions, but frankly it has minimal, if not zero, impact on this when the FlashArray is involved, so it doesn’t really make sense as a default best practice. Pure Storage no longer requires (we never really did anyways) or even recommends it, but we certainly still support making this change.
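If you do still want to make (or revert) the change, it is a per-host advanced setting. A quick sketch using esxcli; note the value is specified in KB, so 16 MB is 16384 and the 4 MB default is 4096:

    # View the current XCOPY transfer size (DataMover.MaxHWTransferSize, in KB)
    esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

    # Raise it to the 16 MB maximum (use 4096 to go back to the default)
    esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 16384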

The next change is explanation: a bit of a brain dump. I spent a lot more time explaining recommendations and considerations for the common questions that I get. How do we work with SIOC? SDRS? Disk.SchedQuantum? Capacity management? Alerting? UNMAP? Etc.
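Most of those can also be checked quickly from the command line if you want to see where a host stands before reading the relevant section. A couple of hedged examples (again, the datastore label is just a placeholder):

    # Inspect the current Disk.SchedQuantum value on a host
    esxcli system settings advanced list -o /Disk/SchedQuantum

    # Manually reclaim dead space on a VMFS-5 datastore (VMFS-6 in 6.5 can do this automatically)
    esxcli storage vmfs unmap -l myDatastore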

I have also removed pretty much all of the “integration” stuff that was in there. This guide focuses on using ESXi and vCenter with the FlashArray. Obviously the integration helps dramatically, but this guide is more about understanding the concepts than about how to integrate. I certainly recommend using the integration though, as it makes everything even better. But I felt it was out of the scope of the document; there was simply too much and it overwhelmed the core message of the paper.

vRealize How-To’s

I have written two other documents as well; here are the links:

  1. FlashArray Plugin for vRealize Orchestrator User Guide
  2. Implementing FlashArray in a vRealize Private Cloud

The first one is self-explanatory: basically everything you need to know about how to use our vRO plugin. It covers the basics: How do I install it? How do I run workflows? How do I manage inventory? But it also goes much deeper: How do I create workflows? How do I use actions? How do I create actions? How do I use the custom scriptable objects and methods in JavaScript? As much or as little as you need. Another brain dump, basically.

The second covers more general integration of the FlashArray into the vRealize Suite. How do I configure a FlashArray with Log Insight (our content pack), Operations Manager (our management pack), Orchestrator (our plugin), and Automation (XaaS)? It covers the basics of installation, configuration, and use, and should get you started for sure.

Don’t forget about this webinar I did with Jon Harris at VMware on vRA and vRO:

http://windowsitpro.com/pure-storage/vRealize-Automation

Look for a more detailed vRA white paper in the coming months.

4 Replies to “Documentation Update, Best Practices and vRealize”

  1. Have you done any testing with SQL performance and any gains or worth the effort in regards to this article and the new controller type?

    https://www.vladan.fr/vmware-virtual-hardware-performance-optimization-tips/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+EsxVirtualization+%28ESX+Virtualization%29&utm_term=feedburner&_m=3n.0086.415.tu0ao076i4.8nb

    Virtual Storage Controller type

    Depending on your OS type and storage array, you can perhaps use a new virtual NVMe device (only for All Flash SAN/vSAN environment) which provides 30-50% lower CPU cost per I/O and 30-80% higher IOPS compared to virtual SATA devices.

    The NVMe controller option appears in the Add New Device wizard when adding new virtual hardware to a VM configured with Virtual Hardware 13 (vmx-13).

    If not, you can still use the VMware Paravirtual (PVSCSI) storage controller type, which offers lower CPU usage.
    • How to safely change from LSI Logic SAS to VMware Paravirtual

    PVSCSI adapters are supported for boot drives in only some operating systems. For additional information see VMware KB article 1010398.

    Quote from VMware:

    PVSCSI and LSI Logic Parallel/SAS are essentially the same when it comes to overall performance capability. PVSCSI, however, is more efficient in the number of host compute cycles that are required to process the same number of IOPS. This means that if you have a very storage IO intensive virtual machine, this is the controller to choose to ensure you save as many cpu cycles as possible that can then be used by the application or host. Most modern operating systems that can drive high IO support one of these two controllers.

    Here’s a detailed whitepaper:
    http://www.vmware.com/files/pdf/1M-iops-perf-vsphere5.pdf

    1. In short, unless you are doing 50,000 or more IOPS you won’t be limited by either, though PVSCSI does have less CPU overhead, which is important. The main thing is that if you are doing very heavy workloads (100,000+ IOPS) you will be limited by the queue depth limit of LSI. PVSCSI has double the overall per-adapter limit by default and can quadruple it. The per-virtual-disk limit is the same by default, but PVSCSI can go up to 8 times LSI. So PVSCSI can potentially do far more IOPS. At default settings you will not see much of a difference though; you need to increase the PVSCSI settings to really go nuts. With the NVMe adapter there is not much benefit (today; further ESXi enhancements will change this) but some testing still remains.
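    For reference, raising those PVSCSI limits is a guest-level change rather than a host setting. A rough sketch for a Linux guest follows; the 254 and 32 values are the commonly cited maximums from VMware's guidance on large-scale PVSCSI workloads, not FlashArray-specific numbers, and on Windows the equivalent is a registry change under the pvscsi driver's Parameters key.

        # List the parameters the PVSCSI guest kernel module accepts
        modinfo -p vmw_pvscsi

        # Append to the kernel boot line (e.g. via the GRUB config), then reboot
        vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32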
