Category Archives: Pure Storage

What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits

vSphere 6.7 core storage “what’s new” series:

In ESXi 6.0 and earlier, a total of 256 devices and 1,024 logical paths to those devices were supported. While this may seem like a tremendous number of devices to some, there were plenty of environments that hit those limits. In vSphere 6.5, ESXi was enhanced to double both of those numbers, moving to 512 devices and 2,048 logical paths.

In vSphere 6.7, those numbers have doubled again, to 1,024 devices and 4,096 logical paths. See the limits here:

https://configmax.vmware.com/
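The progression of maximums can be summarized in a short sketch (a hypothetical Python illustration, not from the post; only the numbers themselves come from the text above):

```python
# Per-release ESXi storage maximums quoted in the post, expressed as a
# lookup so a script could flag hosts at or near a limit. The function
# name check_limits is my own invention for illustration.

ESXI_STORAGE_MAXIMUMS = {
    "6.0": {"devices": 256, "paths": 1024},
    "6.5": {"devices": 512, "paths": 2048},
    "6.7": {"devices": 1024, "paths": 4096},
}

def check_limits(version: str, device_count: int, path_count: int) -> list:
    """Return warnings for any maximum the host is at or over."""
    maxima = ESXI_STORAGE_MAXIMUMS[version]
    warnings = []
    if device_count >= maxima["devices"]:
        warnings.append(f"device count {device_count} hits the {maxima['devices']} maximum")
    if path_count >= maxima["paths"]:
        warnings.append(f"path count {path_count} hits the {maxima['paths']} maximum")
    return warnings

print(check_limits("6.0", 256, 900))  # the old 256-device ceiling is reached
print(check_limits("6.7", 256, 900))  # the same host is well within 6.7 limits
```

The same host configuration that maxed out a 6.0 deployment is comfortably within the 6.7 maximums.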

Continue reading What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits

PowerCLI and VVols Part I: Assigning a SPBM Policy

There are a variety of ways to assign an SPBM policy to a VM. I recently put out a workflow package for vRO that covers everything VVols and Pure:

vRealize Orchestrator VVol Workflow Package

I also specifically blogged about assigning a policy to a VM with vRO:

Assigning a VVol VM Storage Policy with vRO

How do you do this with PowerCLI? Continue reading PowerCLI and VVols Part I: Assigning a SPBM Policy

What’s New in Core Storage in vSphere 6.7 Part I: In-Guest UNMAP and Snapshots

vSphere 6.7 core storage “what’s new” series:

Been excited to do this series for a while! vSphere 6.7 is finally released, and there is a lot in it.

So as I did for 6.5, I am going to write a few posts on some of the new core storage features in 6.7. This is not going to be an exhaustive list of course, just some of the features I find most interesting.

Let’s start with UNMAP!

Continue reading What’s New in Core Storage in vSphere 6.7 Part I: In-Guest UNMAP and Snapshots

Updated PowerCLI Best Practices Scripts Check/Set

One of our continuing goals at Pure Storage is to simplify your life (when it comes to storage and VMware of course, your personal life is your own thing). So as much as possible, we try to reduce the number of things you have to set in vSphere to “work” better with the FlashArray. We don’t have configuration options on the array, so if we require you to make some kind of settings change on a host, that gets opened as a bug internally at Pure: can we make the array respond or behave in a way that makes the change unnecessary?

As of the latest releases of vSphere (in 6.0 and in 6.5), our multipathing best practices are the defaults (Round Robin with an IO Operations Limit of 1). So that is one less thing to set.

There are still a few things, depending on your environment, that need to or should be set. Many of these are set correctly by default, so we merely check for those. A few, however, still need to be changed.

I have updated my blog post here:

FlashArray VMware Best Practices PowerCLI Scripts

See that for details.

Highlights are:

  • Improved interactivity of scripts
  • Specific cluster support
  • iSCSI setting fixes
  • VMFS-6 support for auto-unmap
  • Improved logging
  • Better PowerCLI module handling
  • Disk.DiskMaxIOSize support
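The check/set pattern these scripts follow can be sketched in a few lines (a hypothetical Python illustration; the real scripts are PowerCLI, and only the Round Robin and IO Operations Limit values come from this post):

```python
# Illustrative "check first, set only if needed" remediation pattern.
# The desired values below are the two stated in the post; the dict-based
# host model and the function name remediate are stand-ins, not real
# PowerCLI calls.

DESIRED = {"psp": "VMW_PSP_RR", "iops_limit": 1}

def remediate(host_settings: dict, fix: bool = False) -> list:
    """Report (and optionally correct) settings that deviate from DESIRED."""
    changes = []
    for key, wanted in DESIRED.items():
        current = host_settings.get(key)
        if current != wanted:
            changes.append(f"{key}: {current!r} -> {wanted!r}")
            if fix:
                host_settings[key] = wanted
    return changes

host = {"psp": "VMW_PSP_FIXED", "iops_limit": 1000}
print(remediate(host, fix=True))  # lists both deviations and corrects them
print(remediate(host))            # now nothing is left to change
```

Checking before setting is what lets the scripts run safely against hosts that are already mostly compliant.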

Enjoy!

Tech Preview: vCenter Site Recovery Manager with ActiveCluster

An increasingly common use case for active-active replication in vSphere environments is vSphere Metro Storage Cluster (vMSC), which I wrote about recently in this paper:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/002ActiveCluster_with_VMware/PDF_Guide%3A_Implementing_vSphere_Metro_Storage_Cluster_With_ActiveCluster

This paper gives an overview of how a stretched vSphere cluster interacts with ActiveCluster, the active-active replication we offer on the FlashArray. Continue reading Tech Preview: vCenter Site Recovery Manager with ActiveCluster

VVol VMUG Webinar Q&A Follow Up

I recently did a VMUG webcast on VVols, and there were a ton of questions; unfortunately, I ran out of time and could not answer many of them. I felt bad about that, so I decided to follow up. I was going to send emails to the people who asked, but figured it was simpler, and more useful to others, to just put them all here.

See the VMUG VVol webinar here:

https://www.gotostage.com/channel/13896d6cf6304fddab1a485982c915dc/recording/762f0dccfe1c4406a0e5b58fea449e80/watch

You can get my slides here.

Questions:

Would VVols replace the requirement for RDMs?

Answer: Maybe. It depends on why you are using RDMs. If it is simply to allow sharing or overwriting between physical and virtual machines, VVols will replace RDMs. If it is to make it easier to restore from array snapshots, VVols will replace them. If it is for Microsoft Failover Clustering, VVols are not supported with that yet, so you still need RDMs, though VMware is supposed to be adding support for this in the next release. See this post for more info. Continue reading VVol VMUG Webinar Q&A Follow Up

vRealize Orchestrator VVol Workflow Package

Ok, finally! I had this finished a while ago, but I wrote it using our version 2.0 plugin–so I could not post it until the plugin was certified by VMware. That plugin version is now certified and posted on the VMware Solution Exchange (see my post here).

Moving forward, we will likely be posting new workflows in various packages (I am working on an ActiveCluster one now), instead of including them directly in our plugin. This will make it easier to update and add to them without having to generate an entirely new plugin version.

So first, download and install the v2 FlashArray plugin for vRO, and then install my VVol workflow package from the VMware Solution Exchange:

https://marketplace.vmware.com/vsx/solutions/flasharray-vvol-workflow-package-for-vro-1-0?ref=search 

Continue reading vRealize Orchestrator VVol Workflow Package

VMware Capacity Reporting Part V: VVols and UNMAP

Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce the concept of multiple levels of thin provisioning AND data reduction into it, all usage is not equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.

NOTE: Examples in this series are given from a FlashArray perspective, so mileage may vary depending on the type of array you have. The VMFS layer and above, though, are the same for all. This is the benefit of VMFS–it abstracts the physical layer. This is also the downside, as I will describe in these posts.
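As a toy illustration of why "usage" becomes ambiguous under data reduction (all numbers and the simplistic reduction model here are made up for illustration; real array data reduction is far more involved):

```python
# Hypothetical sketch: the same "100 GB used" at the VMFS layer can mean
# very different physical consumption on a data-reducing array, because
# zeroes take no space and unique data is deduplicated and compressed.

def physical_usage(written_gb: float, zero_gb: float,
                   dedupe_ratio: float, compress_ratio: float) -> float:
    """Physical capacity consumed after dropping zeroes, deduping, compressing."""
    unique = (written_gb - zero_gb) / dedupe_ratio  # zeroes consume nothing
    return unique / compress_ratio

# A VM "using" 100 GB at the VMFS layer, 20 GB of which is zeroes,
# might consume only a fifth of that on the array:
print(physical_usage(written_gb=100, zero_gb=20,
                     dedupe_ratio=2.0, compress_ratio=2.0))
# 20.0
```

Two VMs reporting identical VMFS usage can therefore consume wildly different physical capacity, which is exactly why the per-layer reporting in this series matters.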

Continue reading VMware Capacity Reporting Part V: VVols and UNMAP

VMware Capacity Reporting Part IV: VVol Capacity Reporting

Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce the concept of multiple levels of thin provisioning AND data reduction into it, all usage is not equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better.

NOTE: Examples in this series are given from a FlashArray perspective, so mileage may vary depending on the type of array you have. The VMFS layer and above, though, are the same for all. This is the benefit of VMFS–it abstracts the physical layer. This is also the downside, as I will describe in these posts. Continue reading VMware Capacity Reporting Part IV: VVol Capacity Reporting

VMware Storage Capacity Reporting Part II: VMFS and Thick Virtual Disks

Storage capacity reporting seems like a pretty straightforward topic. How much storage am I using? But when you introduce the concept of multiple levels of thin provisioning AND data reduction into it, all usage is not equal (does it compress well? does it dedupe well? is it zeroes?).

This multi-part series will break it down in the following sections:

  1. VMFS and thin virtual disks
  2. VMFS and thick virtual disks
  3. Thoughts on VMFS Capacity Reporting
  4. VVols and capacity reporting
  5. VVols and UNMAP

Let’s talk about the ins and outs of these in detail, then of course finish it up with why VVols makes this so much better. Continue reading VMware Storage Capacity Reporting Part II: VMFS and Thick Virtual Disks