Setting up iSCSI with VMware ESXi and the FlashArray

I’ve been with Pure Storage for about ten months (time flies!) and a noticeable trend I’ve seen in the past six or so months is the number of customers deciding to use iSCSI as their storage protocol of choice. This is increasingly common in greenfield environments where they don’t want to invest in a Fibre Channel infrastructure. I’ve helped quite a few set this up in VMware environments, so I thought I would put a post together on configuring ESXi software iSCSI with the Pure Storage FlashArray (I have yet to see a hardware iSCSI setup).

Before I begin, I highly recommend reading the following two documents from VMware:

http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

They are not long and provide very good insight into the how/what/why of iSCSI on VMware. Some of the images are a bit old, but the underlying concepts have not changed.

UPDATE: Another post on iSCSI CHAP authentication can be found here.

This post walks through the following steps:

1) Configure SATP

2) Create vSwitch and VMkernel adapters

3) Configure Physical NIC and VMkernel Adapter Relationships

4) Create the Software iSCSI Adapter

5) Configure the FlashArray

6) Configure the Software iSCSI Adapter

7) Provision a Volume

8) Verify Connectivity

Some prerequisites to this walkthrough:

  1. Physical network is up and cabled
  2. ESXi/vCenter is installed
  3. IP addresses are available for the iSCSI targets on the FlashArray and at least one (preferably two or more) addresses are free for each ESXi host that needs to be configured.

I will be using ESXi 5.5 U2 with a FlashArray 420 running Purity 4.0.16. I will show the end-to-end process for one ESXi host, so for other hosts just lather, rinse and repeat.

1) Configure SATP

A very important step–and this applies to any ESXi host using storage (regardless of protocol) on the FlashArray: create a SATP rule so that all Pure Storage FlashArray volumes are configured to use Round Robin multipathing and an IO Operations Limit of 1. This can be achieved in multiple ways (PowerCLI, SSH, etc.). I will just SSH into the ESXi box and run the command. This only needs to be done once in the lifetime of a given ESXi host:

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"
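If you want to confirm the rule took, you can list the rules registered under that SATP. Note that the rule only applies to devices claimed after it is added; volumes already presented to the host need to be unclaimed and reclaimed (or the host rebooted) to pick it up.

```shell
# List all claim rules registered under the ALUA SATP and confirm the
# PURE/FlashArray entry shows PSP VMW_PSP_RR with options iops=1
esxcli storage nmp satp rule list -s VMW_SATP_ALUA
```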

2) Create vSwitch and VMkernel adapters 

The next step is to create the required vSphere networking for the software iSCSI initiator (that we will create shortly) to access the physical network. In my environment I have two free 10 Gb/s physical adapters to use. For resiliency, you definitely want more than one. You can use a distributed switch or a standard switch, either is, of course, fine. I am going to use one standard switch for this walkthrough.


In order to connect an iSCSI adapter to a physical adapter you have to create a standard VMkernel port first. So click the “Add Host Networking” button, choose “VMkernel Network Adapter” and then a new standard switch.


In the next step, add both (or more) physical NICs to the active adapters list.


Give the VMkernel port a name (the rest of the defaults are fine), then assign it an IP address and a subnet mask.


Finish the wizard and click the “Add Host Networking” again to add the second VMkernel adapter. Each adapter will leverage one of the two physical NICs which we will configure in a second. Repeat the process to add the second VMkernel adapter to the switch we just created (vSwitch1).


The switch will look like so after creating the second VMkernel port:

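If you prefer the command line, the same vSwitch and VMkernel setup can be sketched with esxcli. The switch, port group, vmnic, vmk names and IP addresses below are examples from my lab layout; substitute your own:

```shell
# Create the standard switch and attach both physical uplinks
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3

# Create a port group and VMkernel adapter for the first iSCSI path
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-1
esxcli network ip interface add -i vmk1 -p iSCSI-1
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.21.8.101 -N 255.255.255.0

# Repeat for the second VMkernel adapter
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-2
esxcli network ip interface add -i vmk2 -p iSCSI-2
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.21.8.102 -N 255.255.255.0
```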

3) Configure Physical NIC and VMkernel Adapter Relationships

Software iSCSI adapters require that each VMkernel adapter they use has exactly one active physical NIC and no standby NICs. The vSwitch we created has two active NICs, so we need to override this behavior for each VMkernel adapter, and each one should have a different active NIC for resiliency. To do this, select one of the VMkernel ports by clicking its name in the vSwitch chart so it highlights in blue (“iSCSI 2” in the case of this image) and then click the pencil/edit icon up and to the left of it.

Click on the “Teaming and failover” section and choose one of the vmnics to be the active adapter, pushing all others down to unused. You will need to select the “Override” option to do this, which overrides the teaming policy of the vSwitch itself. Repeat this process for the other VMkernel adapters so each uses a different vmnic as its active adapter. Remember: make sure no adapters are configured as standby.

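The same override can be done from the shell; uplinks left off the active list (and not placed in standby) drop to unused. Port group and vmnic names are examples:

```shell
# Pin each iSCSI port group to a single active uplink, no standby
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic3
```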

4) Create the Software iSCSI Adapter

The next step is to create the software iSCSI adapter. Go to the Storage Adapters pane in the Web Client, click the green plus and choose “Software iSCSI adapter.” If this is grayed out, it means one has already been created–you can only have one per ESXi host.


There is no wizard at this point–just a message confirming the creation of the adapter.


You will now see the adapter in the storage adapters list. Select it and the configuration will appear below.
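The adapter can also be created from the shell:

```shell
# Enable the software iSCSI initiator (only one per host)
esxcli iscsi software set --enabled=true

# Find the vmhba name that was assigned to it
esxcli iscsi adapter list
```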

5) Configure the FlashArray

Before we can configure the software iSCSI adapter we must configure the FlashArray itself. This includes creating the host information on it and also the iSCSI target ports. First let’s enter the host information into the FlashArray. If the software iSCSI initiator is still selected, go to the configuration below and highlight the iSCSI name (IQN) and copy it with CTRL-C on your keyboard (or whatever you do with those Macs people seem to use these days).


Now log into the FlashArray GUI. Click on the storage tab, click the plus icon and then “Create Host” next to the Hosts list on the left hand side. Enter a name for the host that makes sense and click create.


Now click on the sub-tab called Host Ports, then the gear icon on the right, and choose “Configure iSCSI IQNs.”


Paste the IQN still in your clipboard (CTRL-V) into the window that appears. Click Add to finish.

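For scripting, the same host object can be created with the Purity CLI over SSH to the array. The host name and initiator IQN below are examples:

```shell
# Create the host object and associate the ESXi initiator IQN with it
purehost create --iqnlist iqn.1998-01.com.vmware:esxi-host-01-12345678 esxi-host-01
```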

The host is now configured. Next we need to configure the iSCSI target ports. Click on the System tab and then on System > Networking in the left-hand column that appears. There will be a listing of ports in the main table. If iSCSI is not configured, no IP addresses will appear in the Ethernet port listings associated with the iSCSI service. If they are not configured, assign each one IP information by clicking near the name and then Edit. Ensure you are only editing ports that have iSCSI listed in their “Service(s)” column.


Note this is also where you configure Jumbo Frames, should you choose to use them. Remember that Jumbo Frames is an end-to-end change spanning the array, the switches and ESXi, so plan accordingly; any one link in the chain not configured for Jumbo Frames will break the setup. Pure Storage does not require Jumbo Frames, so it is up to you.
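If you do go the Jumbo Frames route, an end-to-end sketch looks like this (the array port, vSwitch, vmk names and target IP are examples, and the physical switch side varies by vendor):

```shell
# FlashArray side (Purity CLI): raise the MTU on each iSCSI port
purenetwork setattr --mtu 9000 ct0.eth4

# ESXi side: the vSwitch and each bound VMkernel adapter
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000

# Verify end to end: 8972-byte payload with the don't-fragment bit set
vmkping -I vmk1 -d -s 8972 10.21.8.50
```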

Lastly, choose one of the iSCSI ports and record the IP address–any one will do. Go back to the vSphere Web Client.

6) Configure the Software iSCSI Adapter

Check out this blog post for a better understanding of the iSCSI multipathing options.

Click on the “Network Port Binding” tab and the green plus sign.


Add both of the VMkernel adapters previously created to the iSCSI adapter. Only one can be done at a time so the process will need to be run through twice to get both VMkernel adapters.

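Port binding can also be done with esxcli; vmhba33 is an example adapter name (check `esxcli iscsi adapter list` for yours):

```shell
# Bind both iSCSI VMkernel adapters to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Confirm both show as bound
esxcli iscsi networkportal list -A vmhba33
```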

The next step is to add the iSCSI target ports on the FlashArray to the software iSCSI adapter. The simplest way to do this is to use dynamic discovery: you supply one IP address of an iSCSI port on the array and all of the ports will automatically be configured on the software iSCSI adapter. If you do not want all of them, you will need to configure them manually with static discovery.

***Also note: if you plan on using CHAP, do not use dynamic discovery–this is currently not supported by Purity. Support is coming soon.***

To use dynamic discovery, click on the “Targets” tab, then “Dynamic Discovery” and then the “Add” button.

Enter the IP address of the port you recorded from the FlashArray earlier and click OK.


The default port is correct. If a different one is configured for some reason on the FlashArray, change the port accordingly.
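From the shell, adding the send target looks like this (adapter name and IP are examples; 3260 is the default iSCSI port):

```shell
# Add one FlashArray iSCSI port as a send (dynamic discovery) target
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.21.8.50:3260
```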

Now click on the “Static Discovery” button to see the populated target ports.


Now configure the best-practice iSCSI advanced options (DelayedAck and Login Timeout) on a per-target basis instead of on the entire software iSCSI adapter. Click on the dynamic discovery listing you just added and click the “Advanced” button in the upper-right part of the panel. Uncheck the “Enabled” box next to each setting to override inheritance and change the values to the proper settings (Login Timeout of 30 seconds; DelayedAck disabled).


All of the static targets that appeared from this dynamic target will now get the proper configurations. This will allow multiple arrays with disparate recommendations to co-exist on the same software iSCSI adapter.
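The same per-target overrides can be set with esxcli against the send target entry, so they inherit down to its discovered static targets (adapter name and address are examples):

```shell
# Disable DelayedAck and set LoginTimeout on the send target only
esxcli iscsi adapter discovery sendtarget param set -A vmhba33 -a 10.21.8.50:3260 -k DelayedAck -v false
esxcli iscsi adapter discovery sendtarget param set -A vmhba33 -a 10.21.8.50:3260 -k LoginTimeout -v 30
```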

This will add all of the target ports on the array, so any volumes may have a lot of paths. In this case I have two bound VMkernel adapters and eight target iSCSI ports on the array, for a total of 16 paths per discovered device. If you want fewer (so you don’t hit the 1,024 logical path limit of ESXi), consider static discovery and enter only the paths you choose. You need the IP and IQN of every target port you want, instead of just one IP as in dynamic discovery. You can get the IPs from the networking screen we looked at earlier and the IQNs from the System tab under Host Connections.

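Static targets can be entered from the shell as well; you need the IQN plus IP for each one (all values below are examples):

```shell
# Add a single static target; repeat for each target port you want
esxcli iscsi adapter discovery statictarget add -A vmhba33 \
  -a 10.21.8.50:3260 -n iqn.2010-06.com.purestorage:flasharray.example
```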

7) Provision a Volume

Now it is time to add a volume. Go to the Pure Storage GUI and add a volume like in the picture below (or of course use the Web Client Plugin).


Rescan the ESXi host and the volume will appear.

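The CLI version of this step, first on the array and then on the host (volume, host and adapter names are examples):

```shell
# On the FlashArray: create a volume and connect it to the host
purevol create --size 1T iscsi-ds-01
purehost connect --vol iscsi-ds-01 esxi-host-01

# On the ESXi host: rescan so the new device is discovered
esxcli storage core adapter rescan --all
```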

8) Verify Connectivity

You will see the Paths tab populate with the 16 paths as well as the devices tab populate with the new device.


If you examine the device in more detail you will see it is configured to use the Round Robin Path Selection Policy as well due to the SATP rule we created at the outset.

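You can check the same thing from the shell. The naa identifier below is an example (FlashArray devices start with naa.624a9370):

```shell
# Confirm the device is claimed by Round Robin with iops=1
esxcli storage nmp device list -d naa.624a9370deadbeef000011110001

# List the paths behind it; the count should equal
# bound VMkernel adapters x target ports
esxcli storage core path list -d naa.624a9370deadbeef000011110001
```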

The last thing you will want to do is to check the array and make sure that it is seeing proper logins on all of the proper ports. Go to the FlashArray GUI and then the System tab then the Host Connections screen. Locate the host you created and make sure it displays the redundant connections green box. The overview ports should show a 1 in them, meaning that there is one host initiator that can reach that port and is logged on. It is important to note that this number refers to how many iSCSI adapters there are logged in from that host–this is not referring to physical NICs because the array has no knowledge of this. Host-based physical redundancy with software iSCSI can only be verified from the host itself. In the case of software iSCSI this screen will just verify whether or not the host can leverage both controllers.


4 thoughts on “Setting up iSCSI with VMware ESXi and the FlashArray”

  1. Hi Cody, great article! As of VMware 6.5 (hardware version 13) there is the option to use an NVMe controller inside the guest OS–have you or Pure been able to run any performance testing with this?

    1. Thank you! Yeah, I have done some, and I have also had some conversations with VMware about it. There are very limited situations where it will help from a performance perspective…today. This is really somewhat of a tech preview of what is to come in a larger NVMe stack. Right now there is NVMe-to-SCSI conversion, so the PSA stack is a bit of a bottleneck. At this point I would stick with PVSCSI unless told otherwise by VMware.
