Adding capacity to ScaleIO in a VMware environment

When you initially install and configure ScaleIO in a VMware environment, creating a storage pool and adding capacity to it is part of the setup process. Obviously you don't want to have to run the setup process again every time you want to add a storage pool, add capacity or simply create a new volume; that would be silly. And of course you do not have to, nor should you. So how do you add more capacity without adding additional nodes? Let's find out!

My current environment has four ESXi hosts and one SDS/SDC VM per host (my SDCs and SDSs are the same VMs). Each SDS currently has one virtual disk using the full capacity of a VMFS on top of a physical disk. The plan is to double the capacity of each SDS by adding a new physical disk to each ESXi host and presenting its full capacity (minus the space on the disk reserved for VMFS metadata) to each SDS via a virtual disk. The image below shows the current environment for one ESXi host and also how it will look after the capacity is added.

scaleio_environment

First let's look at how much storage we have in our storage pool. We can do this one of two ways: either through the GUI or via the scli command line. There are a few ways to get the information we want via the command line, but I will use the following command and run it via an SSH session from my primary MDM:

scli --query_storage_pool --protection_domain_name <name> --storage_pool_name <name>

Insert the name of the protection domain and the storage pool and you will see the following:

scaleio_capacity_before

And the GUI:

scaleio_capacity_beforeGUI

As you can see, we have about 4 TB of total capacity and 1.5 TB left to use. I am going to double the capacity of my ScaleIO storage pool to 8 TB by adding a 1 TB disk to each of the four servers currently in the environment. As mentioned before, I already have one 1 TB virtual disk presented to each of my SDS VMs (one per ESXi server). I have now added a new physical internal disk to each of my ESXi hosts and formatted each one as a VMFS volume. The next step is to add that capacity to the SDS VMs, which I can do by creating a virtual disk that consumes the entire free space of each VMFS and adding it to the respective SDS VM.

Side note: Technically you could skip the VMFS abstraction and create a raw device mapping, but doing so prevents things like vMotion, so the best practice is to always present storage to an SDS via a virtual disk. Furthermore, that virtual disk should always be created in the eagerzeroedthick format (for the normal reasons you would use an EZT disk: performance, space reservation, etc.).
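If you would rather do this part from the ESXi shell than click through the client wizard, the eagerzeroedthick virtual disk can also be created with vmkfstools. This is just a sketch; the paths below are placeholders, and you still have to attach the resulting VMDK to the SDS VM afterwards:

vmkfstools -c <size> -d eagerzeroedthick /vmfs/volumes/<datastore>/<SDS VM folder>/<disk name>.vmdk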

scaleio_add_new_vmdk
No, you don't have to use the old client; the web client works just as well.

Finish the wizard and repeat for each SDS VM you want to add capacity to. In my case I will do this for all four of my SDS VMs.

Once the virtual disks have been added to the SDSs, I am going to SSH into one of the SDS VMs to rescan the bus so ScaleIO can see the new device. Note that it could take a long time for the eagerzeroedthick virtual disk to be created (depending on its size) since it is pre-zeroed, so make sure the operation is complete before proceeding.

First I will check the current partitions with "cat /proc/partitions" so I have a baseline to compare against after the rescan.

catpartitions_before

Now to rescan the bus. Run the following command:

echo "- - -" > /sys/class/scsi_host/host2/scan

These two are the ONLY commands in this process that have to be run from each SDS locally.

In the following image I run the scan and then recheck the partitions list. Comparing it to the previous list, you can see that a new device, "sdc", has appeared, which is indeed the virtual disk I just added.

catpartitions_after
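One thing to keep in mind: the host number (host2 in my case) depends on which virtual SCSI adapter the new disk hangs off of, so it may be different in your environment. If nothing new shows up after the rescan, a simple way to cover every adapter is to rescan them all:

for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done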

Repeat these rescan steps on each SDS. Now that the OS can see the storage, it is time to add it to ScaleIO. SSH into the primary MDM (or anywhere scli is installed; I use the MDM because then I do not need to include the MDM IP in every command I issue).

To add the new device into the storage pool run the following command:

scli --add_sds_device --sds_name <SDS name> --storage_pool_name <storage pool name> --device_name <device name>

Some notes:

  • If you are running this from a VM other than the MDM you will need to supply the MDM IP in the commands as well.
  • If you want to add the device to a specific storage pool (or you do not have a default one), indicate the storage pool name. If you just want to add it to the default pool, omit the --storage_pool_name option entirely.
  • The SDS can be referenced by its ID number, name or IP (--sds_id, --sds_name or --sds_ip respectively). I chose the name in this case.

I will be adding device "/dev/sdc" to storage pool "scio-pool1" on the SDS named "scio-sds1".
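So for me the command looks like this:

scli --add_sds_device --sds_name scio-sds1 --storage_pool_name scio-pool1 --device_name /dev/sdc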

add_sds_device

 

By default, the operation also initiates performance tests on the new device. Until the tests are complete the device capacity cannot be used, but they usually do not take long (a matter of seconds for me). The SDS runs two performance tests: one for random writes and one for random reads. When the tests are complete, the device capacity is automatically added to the indicated storage pool. You can query the results of the tests by running the following command:

scli --query_sds_device_test_results --sds_name <sds name> --device_name <device name>
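In my case, for the SDS and device I just added, that would be:

scli --query_sds_device_test_results --sds_name scio-sds1 --device_name /dev/sdc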

device_tests_results

If the other SDS VMs have had their virtual disks added and rescanned, you can add their new devices now from the MDM too. All of my new devices happen to be /dev/sdc, but this may not be the case for you.
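I am assuming a scio-sds2 through scio-sds4 naming pattern for the rest of my SDS VMs here (substitute your own SDS names), so the remaining devices are added the same way:

scli --add_sds_device --sds_name scio-sds2 --storage_pool_name scio-pool1 --device_name /dev/sdc
scli --add_sds_device --sds_name scio-sds3 --storage_pool_name scio-pool1 --device_name /dev/sdc
scli --add_sds_device --sds_name scio-sds4 --storage_pool_name scio-pool1 --device_name /dev/sdc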

add_other_devices

 

If you watch the GUI (or use the CLI) you will see the capacity rise shortly after the command finishes and the performance tests are complete. Now that I have added 4 TB to the storage pool (1 TB per SDS) the GUI reports a new capacity of 8 TB.

gui_new_Capacity
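On the CLI side, re-running the same storage pool query from the beginning of the post shows the new capacity as well:

scli --query_storage_pool --protection_domain_name <name> --storage_pool_name <name>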

 

Lastly, you will notice (once again, both in the GUI and the CLI) that once new storage is available, ScaleIO will rebalance the data in the storage pool across the new devices so that all of the devices in the pool store roughly equivalent amounts of data. This is done to increase performance and to reduce rebuild times should a rebuild ever be required.

rebalance

 

If you want to limit the rebalance rate, it can be throttled by setting a maximum throughput like so:

scli --set_sds_data_copy_limit --limit_rebalance_bandwidth 200

Setting it to zero removes the limit, which is the default. It is also important to note that this limit will NOT affect rebuild rates; there is a separate command/setting for that.
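So if you want to go back to an unthrottled rebalance later, just set it back to zero:

scli --set_sds_data_copy_limit --limit_rebalance_bandwidth 0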

 

Cool. Everything is now done! There is extra capacity in the storage pool, so more volumes can be provisioned. It is a pretty straightforward process: if you exclude the time it takes to initialize the eagerzeroedthick disks, the whole thing only takes a few minutes.

Stay tuned for the next post on provisioning a new volume to your VMware environment.

Over and out.
