ESXi iSCSI, Multiple Subnets, and Port Binding

With the introduction of our Active-Active Synchronous Replication feature (called ActiveCluster), I have been getting more and more questions about multiple-subnet iSCSI access. Some customers have their two arrays in different datacenters, and therefore on different subnets (no stretched layer 2).

With ActiveCluster, a volume exists on both arrays, so essentially the iSCSI targets on the second array just look like additional paths to that volume. As far as the host knows, there are not two arrays; it just has more paths.

Consequently, this discussion is the same whether you happen to have a single array using more than one subnet for its iSCSI targets or you are using active-active replication across two arrays.

There are some additional considerations, though, which I will talk about later.

First off, should you use more than one subnet? Well, keeping things simple is good, and for a single FlashArray I would probably stick with a single subnet. Chris Wahl wrote a great post a while back that explains the ins and outs of this well:

http://wahlnetwork.com/2015/03/09/when-to-use-multiple-subnet-iscsi-network-design/ 
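
If you do end up with iSCSI targets on more than one subnet (ActiveCluster being the obvious case), here is a minimal PowerCLI sketch of the moving parts: checking what is bound to the software iSCSI adapter and adding a send-targets entry for the second subnet. The vCenter name, host name, and target IP are placeholder assumptions, and this is only an illustration, not a recommendation for or against port binding:

```powershell
# Minimal sketch. The vCenter/host names and the target IP are placeholders.
Connect-VIServer -Server vcenter.example.com

$vmhost = Get-VMHost -Name esxi01.example.com
$hba    = Get-VMHostHba -VMHost $vmhost -Type IScsi

# Show which vmkernel ports (if any) are bound to the iSCSI adapter
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$esxcli.iscsi.networkportal.list.Invoke()

# Add a dynamic (send targets) entry for a target portal on the other subnet
New-IScsiHbaTarget -IScsiHba $hba -Address "10.21.88.10" -Type Send

# Rescan so the new paths show up
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
```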

Continue reading ESXi iSCSI, Multiple Subnets, and Port Binding

FlashArray vSphere Web Client now supports vSphere 6.7

Quick post: if you are looking at using vSphere 6.7, please note that the only version of our plugin that works with 6.7 is version 3.1.x or later. There were some API changes that prevent earlier versions from loading properly in the 6.7 interface.

Reach out to support if you would like the latest version! This is still only for the Flash-based vSphere Web Client; we are working on building an HTML5-based one. Stay tuned on that.

Release notes are as follows:

What’s New

vSphere 6.7 Support
This release of the plugin includes support for vSphere 6.7. Users requiring support for vSphere 6.7 must upgrade to this version of the vSphere client plugin.

Continue reading FlashArray vSphere Web Client now supports vSphere 6.7

What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support

This post is part of my vSphere 6.7 core storage “what’s new” series.

Another feature added in vSphere 6.7 is support for a guest issuing UNMAP to a virtual disk that is presented through the NVMe controller.
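
To make that concrete, here is a minimal sketch of what issuing UNMAP from a Windows guest looks like; the drive letter is just an example placeholder, and what is new in 6.7 is simply that this now works when the virtual disk sits behind an NVMe controller.

```powershell
# Run inside the Windows guest. The drive letter is an example placeholder.
# -ReTrim asks Windows to send TRIM/UNMAP for the free space on the volume.
Optimize-Volume -DriveLetter D -ReTrim -Verbose

# DisableDeleteNotify should be 0 if you want Windows to issue UNMAP
# automatically as files are deleted.
fsutil behavior query DisableDeleteNotify
```

On Linux, fstrim or mounting the file system with the discard option accomplishes the same thing.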

Continue reading What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support

What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits

This post is part of my vSphere 6.7 core storage “what’s new” series.

In ESXi 6.0 and earlier, a total of 256 devices and 1,024 logical paths to those devices were supported. While this may seem like a tremendous number of devices to some, there were plenty who hit it. In vSphere 6.5, ESXi was enhanced to double both of those numbers, moving to 512 devices and 2,048 logical paths.

In vSphere 6.7, those numbers have doubled again, to 1,024 devices and 4,096 logical paths. See the limits here:

https://configmax.vmware.com/
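
If you are curious how close your hosts are to these maximums, a quick PowerCLI sketch along these lines will tally devices and logical paths per host (the vCenter name is a placeholder):

```powershell
# Minimal sketch: count SCSI devices and logical paths on each ESXi host.
Connect-VIServer -Server vcenter.example.com

foreach ($vmhost in Get-VMHost) {
    $luns  = Get-ScsiLun -VMHost $vmhost -LunType disk
    $paths = $luns | Get-ScsiLunPath
    [PSCustomObject]@{
        Host    = $vmhost.Name
        Devices = @($luns).Count
        Paths   = @($paths).Count
    }
}
```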

Continue reading What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits

PowerCLI and VVols Part I: Assigning a SPBM Policy

There are a variety of ways to assign an SPBM policy to a VM. I recently put out a vRO workflow package covering everything VVols and Pure:

vRealize Orchestrator VVol Workflow Package

I also specifically blogged about assigning a policy to a VM with vRO:

Assigning a VVol VM Storage Policy with vRO

How do you do this with PowerCLI? Continue reading PowerCLI and VVols Part I: Assigning a SPBM Policy
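
As a quick preview, a minimal PowerCLI sketch looks something like the following; the policy name and VM name are placeholders, and it assumes the SPBM cmdlets included with recent PowerCLI releases:

```powershell
# Minimal sketch: apply an SPBM policy to a VM's home object and its disks.
Connect-VIServer -Server vcenter.example.com

$policy = Get-SpbmStoragePolicy -Name "FlashArray Snapshot Policy"
$vm     = Get-VM -Name "MyVVolVM"

# The VM home (config VVol) and each virtual disk carry their own policy
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy

Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
```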

What’s New in Core Storage in vSphere 6.7 Part II: Sector Size and VMFS-6

This post is part of my vSphere 6.7 core storage “what’s new” series.

In vSphere 6.5, a new version of VMFS was introduced: VMFS-6. A behavior that many noted was that it was not always the default option for their storage; ESXi (unless told otherwise) would default to formatting some storage with VMFS-5. So when you installed ESXi, the default datastore that got created would be VMFS-5.

The issue with this was that VMFS-5 is, well, not VMFS-6: no automatic UNMAP, and so on. Furthermore, there is no in-place upgrade path; you have to delete the file system and reformat with VMFS-6. This of course was a bit annoying for many.
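
If you want to see which datastores in your environment are still on VMFS-5, a minimal PowerCLI sketch like this one will list them (it simply filters on the reported file system version):

```powershell
# Minimal sketch: list datastores still formatted with VMFS-5.
Connect-VIServer -Server vcenter.example.com

Get-Datastore |
    Where-Object { $_.Type -eq "VMFS" -and $_.FileSystemVersion -like "5*" } |
    Select-Object Name, FileSystemVersion, CapacityGB |
    Sort-Object Name
```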

Continue reading What’s New in Core Storage in vSphere 6.7 Part II: Sector Size and VMFS-6

What’s New in Core Storage in vSphere 6.7 Part I: In-Guest UNMAP and Snapshots

This post is part of my vSphere 6.7 core storage “what’s new” series.

I have been excited to do this series for a while! It is great to see vSphere 6.7 released; there is a lot in it.

So, as I did for 6.5, I am going to write a few posts on some of the core new storage features in 6.7. This is not going to be an exhaustive list of course, just some of the ones that I find interesting.

Let’s start with UNMAP!

Continue reading What’s New in Core Storage in vSphere 6.7 Part I: In-Guest UNMAP and Snapshots

Updated PowerCLI Best Practices Scripts Check/Set

One of our continuing goals at Pure Storage is to simplify your life (when it comes to storage and VMware, of course; your personal life is your own thing). So, as much as possible, we try to reduce the number of things you need to set in vSphere to “work” better with the FlashArray. We don’t have configuration options on the array, so if we require you to make some kind of settings change on a host, that gets opened up as a bug internally at Pure: can we make the array respond or behave in a way that makes the change unnecessary?

As of the latest releases of vSphere (in both 6.0 and 6.5), our multipathing best practices are the default (Round Robin with an I/O Operations Limit of 1). So that is one less thing.

There are still a few things, depending on your environment, that need to or should be set. Many of these are set correctly by default, so we merely check for those; a few, however, still need to be changed.
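
For older hosts where those defaults do not apply, the kind of change involved looks roughly like the sketch below. This is only an illustration, not the full script: the host name, the FlashArray vendor string, and the Disk.DiskMaxIOSize value are assumptions used for the example.

```powershell
# Minimal sketch: Round Robin with an I/O Operations Limit of 1 on FlashArray
# devices, plus one example advanced setting that gets checked.
Connect-VIServer -Server vcenter.example.com
$vmhost = Get-VMHost -Name esxi01.example.com

# Round Robin, switching paths after every single I/O, on Pure devices only
Get-ScsiLun -VMHost $vmhost -LunType disk |
    Where-Object { $_.Vendor -eq "PURE" } |
    Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1

# Example advanced setting (the value shown is illustrative)
Get-AdvancedSetting -Entity $vmhost -Name "Disk.DiskMaxIOSize" |
    Set-AdvancedSetting -Value 4096 -Confirm:$false
```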

I have updated my blog post here:

FlashArray VMware Best Practices PowerCLI Scripts

See that for details.

Highlights are:

  • Improved interactivity of scripts
  • Specific cluster support
  • iSCSI setting fixes
  • VMFS-6 support for auto-unmap
  • Improved logging
  • Better PowerCLI module handling
  • Disk.DiskMaxIOSize support

Enjoy!

Tech Preview: vCenter Site Recovery Manager with ActiveCluster

An increasingly common use case for Active-Active replication in vSphere environments is vSphere Metro Storage Cluster (vMSC), which I wrote about in this paper recently:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/002ActiveCluster_with_VMware/PDF_Guide%3A_Implementing_vSphere_Metro_Storage_Cluster_With_ActiveCluster

This paper overviews how a stretched vSphere cluster interacts with the active-active replication we offer on the FlashArray, called ActiveCluster. Continue reading Tech Preview: vCenter Site Recovery Manager with ActiveCluster

VVol VMUG Webinar Q&A Follow Up

I recently did a VMUG webcast on VVols, and there were a ton of questions; unfortunately, I ran out of time and could not answer a lot of them. I felt bad about that, so I decided to follow up. I was going to send out emails to the people who asked, but figured it was simpler, and more useful to others, to just put them all here.

See the VMUG VVol webinar here:

https://www.gotostage.com/channel/13896d6cf6304fddab1a485982c915dc/recording/762f0dccfe1c4406a0e5b58fea449e80/watch

You can get my slides here.

Questions:

Would VVols replace the requirements for RDMs?

Answer: Maybe. It depends on why you are using RDMs. If it is simply to allow sharing or overwriting between physical and virtual, VVols will replace RDMs. If it is to make it easier to restore from array snapshots, VVols will replace them. If it is for Microsoft Failover Clustering, VVols are not supported for that yet, so you still need RDMs, though VMware is supposed to be adding support for this in the next release. See this post for more info. Continue reading VVol VMUG Webinar Q&A Follow Up