What’s New in Core Storage in vSphere 6.7 Part VI: Flat LUN ID Addressing Support

vSphere 6.7 core storage “what’s new” series:

A while back I wrote a blog post about LUN ID addressing and ESXi, which you can find here:

ESXi and the Missing LUNs: 256 or Higher

In short, VMware previously supported only one mechanism of LUN ID addressing, called “peripheral”. The SAM (SCSI Architecture Model) generally encourages a different mechanism, called “flat”, especially for larger LUN IDs (256 and above). If a storage array used flat addressing, ESXi would not see LUNs from that target. This is often why ESXi could not see LUN IDs greater than 255, as arrays would typically switch to flat addressing at that point.
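If you want a quick way to see which LUN IDs ESXi currently discovers, the path listing includes a LUN field. A minimal sketch, run over SSH on an ESXi host:

  # Show the runtime name and LUN ID for every discovered path
  esxcli storage core path list | grep -E "Runtime Name:|LUN:"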

ESXi 6.7 adds support for flat addressing. Continue reading What’s New in Core Storage in vSphere 6.7 Part VI: Flat LUN ID Addressing Support

Troubleshooting Virtual Volume Setup

VVols have been gaining quite a bit of traction of late, which has been great to see. I truly believe they solve a lot of problems that VMware environments, and infrastructures in general, have traditionally faced. With that being said, as things get adopted at scale, a few people inevitably run into problems setting them up.

The main issues revolve around the fact that VVols are presented and configured in a different way than VMFS, so when someone runs into an issue, they often do not know exactly where to start.

The issues usually come down to one of the following places:

  • Initial Configuration
  • Registering VASA
  • Mounting a VVol datastore
  • Creating a VM on the VVol datastore

I intend for this post to be a living document, so I will update this from time to time.

One more thing to note: this is all done with the VVol implementation on the Pure Storage FlashArray. So if you are using a different vendor, the exact solutions may vary somewhat, but they should still give you a general idea of where to start.

NOTE: From this point on, I am going to assume you are (for whatever reason) not using our vSphere plugin to set things up, or that you have run into issues and want or need to do it manually. Or you used the plugin and it failed–you can refer to the sections below to see why.

Initial Configuration

The initial configuration (the environmental setup) is often the cause of problems.

For the FlashArray, we support only VASA version 3–so you must be on vSphere 6.5 or later end to end. This means vCenter 6.5 and ESXi 6.5. Not every host in vCenter must be on 6.5, of course, but the hosts on which you want to use VVols must be. So check this first.

Furthermore, you must be on Purity 5.x or later to get VVol support. If your array is on an older release, contact Pure Storage support to upgrade.
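If you prefer to verify the versions from the command line, here is a minimal sketch, assuming SSH access to the ESXi host and to the FlashArray (purearray list is standard Purity CLI, but double-check the output columns on your release):

  # On each ESXi host, confirm the version is 6.5 or later
  vmware -vl

  # On the FlashArray CLI, confirm the Purity version is 5.x or later
  purearray list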

Registering VASA

On the FlashArray, the VASA providers are built into the controllers, so there is nothing to install or configure–as long as you are on Purity 5.x or later, you are good to go.

But you do need to register VASA with vCenter–you can do this manually if you want, or you can use tools like our vSphere Plugin or our vRealize Orchestrator plugin.

If this fails, there are a few possible reasons.

First off, your vCenter must have management network connectivity to both of the FlashArray controllers–it needs to be able to reach the management IPs over TCP port 8084. There is a fairly simple way to check this. First, get the management IPs of your FlashArray (through the FlashArray GUI or CLI); generally these will be CT0.ETH0 and CT1.ETH1:

So for my first VASA provider it is 10.21.202.50.

If I SSH into my vCenter, I can check for connectivity. The first check is basic network connectivity, and the simplest test is a ping.
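A minimal sketch from the vCenter appliance shell, using the example management IP above (substitute your own, and repeat for the second controller):

  # Ping the first controller's management IP
  ping -c 4 10.21.202.50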

If this fails, you have some type of networking issue between vCenter and your array management ports–VLANs, routing, or a firewall.

If that works but registration still fails, also confirm vCenter can reach the VASA provider over the right port, which is TCP 8084. You can use the nc command for this:

I recommend adding the -v parameter so it actually reports the result. If the port reports as anything but open, there is a firewall issue between vCenter and the FlashArray–verify that connectivity within your network.
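A sketch of the check from the vCenter appliance shell (the exact flags and output wording depend on the nc build shipped with your vCenter version; -z simply scans the port without sending data):

  # Test TCP 8084 to the controller's management IP
  nc -zv 10.21.202.50 8084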

If you are not using the plugin, the issue could be one of those, or you might just be typing the address incorrectly. Remember, it needs to be in the form of:

https://<IP of controller ETH0>:8084

Do not use the virtual IP, and do not use a DNS name.

Mounting a VVol Datastore

The next most common issue is that a VVol datastore does not mount properly.

vSphere Plugin Fails to Mount Datastore

You can use the vSphere plugin to do this or do it directly in vCenter. The plugin is the recommended option as it automates a few steps, most importantly presenting the protocol endpoint. The PE is what provides physical access to the array for VVols (the VVol datastore is essentially just a capacity limit).

If the plugin fails with the following error:

That means the protocol endpoint has already been connected to the corresponding host on the FlashArray. In this case, you do not need to use the plugin to mount the VVol datastore; you can just do it manually by right-clicking on the host or cluster and choosing New Datastore.

Let’s look at some issues that can occur with manual mounting of a VVol datastore.

VVol Datastore does not show up

When you go to create a new datastore in the vSphere Client and nothing shows up under the VVol datastore listing, this means one of two things:

  • You did not register the VASA provider(s)
  • You already mounted the available VVol datastores

In the case of the first issue, just go to your Hosts and Clusters view, then click on your vCenter, then the Configure tab, and then Storage Providers. Make sure that the respective VASA providers are registered and online.

VVol Datastore is Inaccessible

In order for an ESXi host to be able to access and use a VVol datastore, you need two things:

  • The host must have a protocol endpoint from that array presented to it.
  • The host must be able to talk to the VASA provider.

If the host is missing either of these, it will not be able to provision to that VVol datastore–and the VVol datastore will be marked as inaccessible to that host.

A host may not have access to a protocol endpoint for a variety of reasons.

Is the PE actually connected?

First off, it may not actually be connected to that host on the array. Ensure that it is: go to the Pure GUI (or CLI) and verify this. It should be listed under “Connected Volumes”.

Generally, this volume will be called pure-protocol-endpoint, which is the automatically created one, though some users might create their own (or rename it). In the GUI, you can tell that a connected volume is a PE if the volume name is black and not clickable.
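You can also check from the Purity CLI. A minimal sketch, assuming SSH access to the array (verify the syntax against the CLI reference for your Purity release):

  # List the volumes connected to each host; the protocol endpoint should appear here
  purehost list --connect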

If one is not connected, connect it. You can find Pure documentation on that here:

https://support.purestorage.com/Solutions/VMware_Platform_Guide/003Virtual_Volumes_-_VVols/Web_Guide%3A_Implementing_vSphere_Virtual_Volumes_with_FlashArray#Protocol_Endpoints 

Is the host connected?

Just because the host has a PE connected to it on the FlashArray does not mean things are ready. You also want to make sure that things are physically connected. So for Fibre Channel: is zoning done? For iSCSI: are the iSCSI targets added to the ESXi host?
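A couple of quick checks from the ESXi host itself (a sketch; run over SSH):

  # iSCSI: confirm the array's target portals are configured on the host
  esxcli iscsi adapter target portal list

  # Fibre Channel: confirm the FC adapters and their link state
  esxcli storage san fc list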

Is the PE properly seen in ESXi?

If the above things are good, you need to make sure that the protocol endpoint is configured correctly in ESXi. By default, it should be, but every so often things are changed.

If a protocol endpoint is in use, it will be listed under the Protocol Endpoints panel in the vSphere Client for a host.

Or via the CLI in ESXi:
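A sketch of the check, using the esxcli VVol namespace:

  # List protocol endpoints in use by this ESXi host
  esxcli storage vvol protocolendpoint list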

If it is not in use (meaning there is no VVol datastore on that host from the same array as that PE), it will not be listed in either place. You can still verify that it is present by looking in the standard storage view:

A good way to quickly check whether it is there is to sort by size–PEs will be 1 MB.

Also make sure the PE is in the “Attached” state. If it is “Dead or Error”, access to the PE has been lost, so try a rescan, check your zoning or iSCSI configuration, or possibly reboot ESXi.
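From the ESXi shell, a sketch of both checks (field names may vary slightly between ESXi versions):

  # Show each device's name, status, and whether ESXi sees it as a VVol protocol endpoint
  esxcli storage core device list | grep -E "Display Name:|Status:|Is VVOL PE:"

  # If the PE is missing or dead, rescan all storage adapters and check again
  esxcli storage core adapter rescan --all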

Can’t Create a VM on the VVol datastore and/or it reports as 0 Size for Capacity

If all of the above is good, then another possibility is that the HBA drivers are out of date.

In the ESXi host, run esxcli storage core adapter list to see whether the adapters report the “Second Level Lun ID” capability. If they do not, you need to update your HBA drivers in ESXi:

If these are not updated, ESXi will not be able to see sub-luns (VVols) in the SCSI path.
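A quick way to check this from the ESXi shell (a sketch; if the grep returns nothing, none of the adapters report the capability):

  # Look for the Second Level Lun ID capability in the adapter list
  esxcli storage core adapter list | grep -i "second level lun id"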

Beyond this there are a few other possibilities:

  • The vvold daemon needs to be restarted on the ESXi host. You can do this by running “/etc/init.d/vvold restart” in ESXi via SSH.
  • The ESXi host cannot talk to the VASA provider. Use nc to make sure it can reach it over TCP port 8084, just like above with vCenter. Note that if you get an error here, it might not mean that some external firewall is blocking the port–it could mean that ESXi itself is blocking traffic to that port. This is the default behavior if no VVol datastores have been mounted on that host: ESXi opens the port the first time a VVol datastore is mounted and closes it after the last one is unmounted. If this is the case, enable the vvold firewall rule in ESXi and then run the nc command again (see the sketch after this list).
  • Lastly, there may be an issue with ESXi certificates. Occasionally, vCenter does not properly propagate certificates down to ESXi servers when a new provider is registered (the annoying thing is that some ESXi hosts will get it and others will not–we are working with VMware to nail down why), or the certificates have expired. If either of these is suspected to be the culprit (which usually means you have exhausted everything above), follow this blog post (or the relevant parts for just a single ESXi host): http://www.vstellar.com/2017/11/05/refreshregeneratereplace-esxi-6-0-ssl-certificates/
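A sketch of the first two items from the ESXi shell (the management IP is the example from earlier; substitute your own):

  # Restart the vvold daemon
  /etc/init.d/vvold restart

  # Test TCP 8084 from ESXi to the VASA provider
  nc -z 10.21.202.50 8084

  # If ESXi itself is blocking the port, enable the vvold firewall ruleset and retest
  esxcli network firewall ruleset set --ruleset-id=vvold --enabled=true
  nc -z 10.21.202.50 8084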

PowerCLI and VVols Part II: Finding VVol UUIDs

One of the great benefits of VVols is the fact that virtual disks are just volumes on your array. So if you want to do some data management with your virtual disks, you just need to work directly on the volumes that correspond to them.

The question is what virtual disk corresponds to what volume on what array?

Well, some of that is very array-dependent (are you using Pure Storage or something else?), but the first steps are always the same. Let’s start there for the good of the order.

Continue reading PowerCLI and VVols Part II: Finding VVol UUIDs

Data Mobility Demo Journey Part I: Virtual Volumes

At the Pure//Accelerate conference this year, my colleague Barkz and I gave a session on data mobility–how the FlashArray enables you to put your data where you want it. The session video can be found here:

https://watch.purestorage.com/ondemand/detail/videos/enterprise-applications/video/5778647922001/moving-data-between-cloud-and-on-premises-virtualized-environments?autoStart=true 

In short, the session was a collection of demos of moving data between virtual environments (Hyper-V and ESXi), between FlashArrays, and between on-premises and public cloud using FlashArray features.

Continue reading Data Mobility Demo Journey Part I: Virtual Volumes

What’s New in Purity 5.1: WRITE SAME Handling Improvement

Purity 5.1 introduced a variety of new features on the FlashArray, like CloudSnap to NFS and volume throughput limits, but there were also a number of internal enhancements. I’d like to start this series with one of them.

VAAI (the vSphere Storage APIs for Array Integration) includes a variety of offloads that allow the underlying array to do certain storage-related tasks better (faster, more efficiently, and so on) than ESXi can do them itself. One of these offloads is called Block Zero, which leverages the SCSI command WRITE SAME. WRITE SAME is basically a SCSI operation that tells the storage to write a certain pattern, in this case zeros. So instead of ESXi issuing possibly terabytes of zeros, it issues a few hundred or thousand small WRITE SAME I/Os and the array takes care of the zeroing. This greatly speeds up the process and also significantly reduces the impact on the SAN.

WRITE SAME is used in quite a few places, but the most commonly encountered scenarios are:

What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP

vSphere 6.7 core storage “what’s new” series:

VMware has continued to improve and refine automatic UNMAP in vSphere 6.7. In vSphere 6.5, VMFS-6 introduced automatic space reclamation, so that you no longer had to run UNMAP manually to reclaim space after virtual disks or VMs had been deleted.
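If you want to see how automatic space reclamation is configured on a VMFS-6 datastore, a sketch from the ESXi shell (the datastore label is a placeholder):

  # Show the reclamation settings for a VMFS-6 datastore
  esxcli storage vmfs reclaim config get --volume-label=MyVMFS6Datastore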

Continue reading What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP

ESXi iSCSI, Multiple Subnets, and Port Binding

With the introduction of our Active-Active Synchronous Replication (called ActiveCluster), I have been getting more and more questions about multiple-subnet iSCSI access. Some customers have their two arrays in different datacenters and on different subnets (no stretched layer 2).

With ActiveCluster, a volume exists on both arrays, so essentially the iSCSI targets on the second array just look like additional paths to that volume–as far as the host knows, there are not two arrays; it just has more paths.

Consequently, this discussion applies equally whether you have a single array using more than one subnet for its iSCSI targets or you are running active-active across two arrays.

There are some additional considerations, though, which I will talk about later.

First off, should you use more than one subnet? Well, keeping things simple is good, and for a single FlashArray I would probably stick to one. Chris Wahl wrote a great post on this a while back that explains the ins and outs well:

http://wahlnetwork.com/2015/03/09/when-to-use-multiple-subnet-iscsi-network-design/ 

Continue reading ESXi iSCSI, Multiple Subnets, and Port Binding

FlashArray vSphere Web Client now supports vSphere 6.7

Quick post–if you are looking at using vSphere 6.7, please note that the only version of our plugin that works with 6.7 is 3.1.x or later. There were some API changes that prevent older versions from loading properly in the 6.7 interface.

Reach out to support if you would like the latest version! This is still only for the Flash-based vSphere Web Client; we are working on building an HTML5-supported one. Stay tuned for that.

Release notes are as follows:

What’s New

vSphere 6.7 Support
This release of the plugin includes support for vSphere 6.7. Users requiring support for vSphere 6.7 must upgrade to this version of the vSphere client plugin.

Continue reading FlashArray vSphere Web Client now supports vSphere 6.7

What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support

vSphere 6.7 core storage “what’s new” series:

Another feature added in vSphere 6.7 is support for a guest being able to issue UNMAP to a virtual disk when presented through the NVMe controller.
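For reference, a sketch of how you might verify and exercise this from inside a Linux guest (assuming the virtual disk is attached to the virtual NVMe controller and the guest filesystem supports discard):

  # Check whether the block devices advertise discard (UNMAP/TRIM) support
  lsblk --discard

  # Manually reclaim free space on a mounted filesystem
  sudo fstrim -v /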

Continue reading What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support

What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits

vSphere 6.7 core storage “what’s new” series:

In ESXi 6.0 and earlier, a total of 256 devices and 1,024 logical paths to those devices were supported per host. While this may seem like a tremendous number of devices to some, plenty of environments hit that limit. In vSphere 6.5, ESXi was enhanced to double both of those numbers, moving to 512 devices and 2,048 logical paths.

In vSphere 6.7, those numbers have doubled again, to 1,024 devices and 4,096 logical paths. See the limits here:

https://configmax.vmware.com/
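If you are curious how close a given host is to those limits, a quick sketch from the ESXi shell (counting on field names that appear once per device or path in typical esxcli output):

  # Count the devices seen by this host
  esxcli storage core device list | grep -c "Display Name:"

  # Count the logical paths seen by this host
  esxcli storage core path list | grep -c "Runtime Name:"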

Continue reading What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits

"Remember kids, the only difference between Science and screwing around is writing it down"
