Quick post. I did a few lightboard videos on vSphere Virtual Volumes. Lightboard videos are pretty fun to do; the unfortunate part is that I have horrible handwriting, so I apologize for that up front.
A common question I get with these videos is how I write backwards. I don’t. I am nowhere near that skilled; as you can see, I can barely write forwards. I write normally, which appears backwards on camera, and the video team mirrors the footage.
This is a three-part series; the entire playlist can be found here:
Ahem, I suppose I will prove it out. The real answer is: well, maybe. It depends on the array.
Debates have raged for quite some time around the performance of virtual disk types, and while the difference has diminished drastically over the years, eagerzeroedthick has always outperformed thin. Many users therefore opted not to use thin virtual disks because of it.
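If you want to compare the two formats yourself, here is a minimal PowerCLI sketch; the VM name and disk size are placeholders rather than anything from an actual test, and it assumes an existing Connect-VIServer session.

```powershell
# Minimal sketch; "perf-test-vm" and 100 GB are illustrative placeholders.
$vm = Get-VM -Name "perf-test-vm"

# Thin: blocks are allocated and zeroed on first write
New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat Thin

# EagerZeroedThick: fully allocated and zeroed up front
New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat EagerZeroedThick
```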
Virtual Volumes provide a great many benefits, some large, some small. Depending on the VM, recovering a deleted one could be either of those.
With traditional VMFS, once you have selected “delete from disk,” restoring that VM could be quite a process: either restoring from backup or hoping you had a snapshot of the VMFS on the array. Either way, you are probably going to incur data loss, as the last backup or snapshot is unlikely to be from the moment right before the deletion.
Let me be VERY clear here. Regardless of the rest of this post, I am not saying that once you move to VVols you no longer need backup! You absolutely still do. VVols just give you a nice way to do an immediate recovery of the latest point-in-time without losing anything, assuming your array supports it.
“Wait, did you say delete VM “AD” or VM “80”?”
“Um… definitely not AD that’s our active directory…”
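On the FlashArray, for example, a destroyed volume sits in a recoverable state for 24 hours before it is eradicated, so an accidental deletion like that can be undone from PowerShell. A hedged sketch, assuming the Pure Storage PowerShell SDK is installed; the array address, credentials, and volume name are placeholders:

```powershell
# Hedged sketch; the array address, credentials, and volume name are
# placeholders, and the cmdlet names come from the Pure Storage
# PowerShell SDK as I understand it.
Import-Module PureStoragePowerShellSDK

$array = New-PfaArray -EndPoint "flasharray.example.com" `
    -Credentials (Get-Credential) -IgnoreCertificateError

# Recover a destroyed (but not yet eradicated) volume by name
Restore-PfaVolume -Array $array -Name "deleted-vm-volume"
```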
As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017), and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one that I wanted to specifically and immediately call out now.
Just recently, Rubrik announced their integration with the FlashArray to help back up virtual machines and avoid the common performance penalty incurred during VMware snapshot consolidation. See their announcement here.
This is a blog post I have been waiting to write for quite some time. I cannot even remember exactly how long ago I saw Satyam Vaghani present on this as a concept at VMworld, back when what is now called a protocol endpoint (more on that later) was called an I/O Demultiplexer. A mouthful for sure. Finally it’s time! With pleasure, I’d like to introduce VVols on the FlashArray!
So over the past two years or so I have been talking up vRealize Orchestrator quite a bit. And a fair amount of that conversation was based on the eventual usage of vRealize Automation. While I certainly feel vRA is a GREAT use case for vRO, the usefulness of vRO does not in any way require vRA.
A common question I get is, “hey, can you add this feature to the official FlashArray plugin?” The answer is often “maybe” or “eventually,” but sometimes even “no.” The plugin is centered on satisfying the majority and therefore sometimes does not exactly meet your requirements.
So with these two things in mind, what is the connection? Well, using vRO (which is FREE when you have vCenter) you can easily build your own workflows, especially once you install the FlashArray vRO plugin.
I see a couple of advantages here:
Start learning vRO. Use the default workflows so you don’t have to “code” anything, then add more customization as you become familiar.
Provide tailored workflows in the vSphere Web Client
Interface-agnostic workflows. As you move forward to the HTML5 interface, or to vRA, you don’t have to redo your work (see the sketch below).
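To illustrate that last point, a workflow built in vRO can be kicked off from anywhere that can reach its REST API, regardless of which client you happen to be using. A minimal sketch; the server address and workflow ID are placeholder assumptions:

```powershell
# Hedged sketch; the vRO server address and workflow ID are placeholders.
# vRO exposes workflow executions at /vco/api/workflows/{id}/executions.
$cred = Get-Credential
$workflowId = "00000000-0000-0000-0000-000000000000"  # hypothetical ID

Invoke-RestMethod -Method Post `
    -Uri "https://vro.example.com:8281/vco/api/workflows/$workflowId/executions" `
    -Credential $cred -ContentType "application/json" -Body "{}"
```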
A customer pinged me the other day and said they could not see a volume on their ESXi host, which was running ESXi 6.5. All of the normal stuff checked out, but the volume was nowhere to be seen. What gives? Well, it turned out the LUN ID was over 255 and ESXi couldn’t see it. Let me explain.
The TL;DR is that ESXi does not support LUN IDs above 255 for your average device.
UPDATE (8/15/2017): I have been meaning to update this post for a while. Here are the rules:
ESXi 6.5 does support LUN IDs higher than 255, but only if those addresses are configured using peripheral LUN addressing. If your array uses flat addressing (which is common for higher LUN IDs), it will not work.
ESXi 6.7 now supports flat LUN addressing, so this problem goes away entirely.
See this post for more information on ESXi 6.7 flat support.
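If you want to see which LUN IDs your host is actually presenting, a quick PowerCLI sketch like the one below parses the LUN ID out of each device’s runtime name. The host name is a placeholder, and it assumes an existing Connect-VIServer session.

```powershell
# Hedged sketch; the host name is a placeholder. Runtime names look like
# vmhba1:C0:T0:L256, so the LUN ID is the number after the final ":L".
Get-VMHost -Name "esxi-host.example.com" | Get-ScsiLun -LunType disk |
    Select-Object CanonicalName, RuntimeName,
        @{ Name = "LunId"; Expression = { [int](($_.RuntimeName -split ":L")[-1]) } } |
    Sort-Object LunId
```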
*It’s not actually aliens, it is perfectly normal SCSI you silly man.
I updated my UNMAP PowerCLI script a month or so ago and improved quite a few things, but I also removed the hard-coded variables and replaced them with interactive input. Which is fine for some, but for many it was not.
Note: Move to VMFS-6 in vSphere 6.5 and you don’t have to worry about this UNMAP business anymore 🙂
Essentially, quite a few people want to run it as a scheduled task in Windows, and if it requires input, that just isn’t going to work out of the box. So I have created an unattended version of the script. For details, read on.
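For reference, once the script runs without prompts, scheduling it is straightforward with the built-in ScheduledTasks cmdlets. A hedged sketch, where the script path, task name, and schedule are all placeholder assumptions:

```powershell
# Hedged sketch; the script path, task name, and schedule are placeholders.
# Run from an elevated PowerShell session on the Windows machine.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\unmap-unattended.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 2am

Register-ScheduledTask -TaskName "Weekly VMFS UNMAP" -Action $action -Trigger $trigger
```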
Note: I will continue to update the script (bugs, features, etc.) but will note them on my other blog post about the script here:
The FlashArray Storage Replication Adapter for VMware Site Recovery Manager has supported many-to-many replication since the 2.0 release of the SRA. Test failover, failover, and reprotect work no differently than with 1:1, and neither does the setup of the volumes. The only real difference is how you configure the array managers in SRM. So let’s review how this is done.