So one of our field engineers reached out to me because they had a power outage of some sort and their vCenter Server Appliance failed to boot, throwing errors about starting up its services.
So this is a common problem that has happened many times, as seen in these KBs and community posts:
Simple blog post, and this is really more for my own future reference. But it could help you if you don’t have one of the common mount problems covered in the forums, etc.
Continue reading VCSA 6.5 Fails to Boot
I have been traveling around lately talking about VVols and one of the most commonly misunderstood objects is the VVol datastore. What is it? What does capacity mean on it? Why does it even exist?
These are all good questions. The great thing about VVols is that very little changes in how the VMware user interacts with vSphere. But at the same time, what actually happens is VERY different. So let’s work through this.
Continue reading What is a VVol Datastore?
Having consistent LUN IDs for volumes in ESXi has historically been a gotcha–though over time this requirement went away.
These days, the lingering use case for consistent LUN IDs is Microsoft Clustering and how persistent SCSI reservations are handled. The below KB doesn’t mention 6.5, but I believe it still applies to 6.5:
https://kb.vmware.com/s/article/2054897 Continue reading Issue with Consistent LUN ID in ESXi 6.5
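If you want to check whether your hosts agree on LUN IDs, something like the following PowerCLI sketch works. This is my own quick illustration, not from the KB: it assumes an existing Connect-VIServer session, and it leans on the fact that the RuntimeName of a SCSI LUN (e.g. vmhba1:C0:T0:L4) ends in the LUN ID.

```powershell
# Sketch: compare the LUN ID each host sees for the same device.
# Assumes you are already connected with Connect-VIServer.
$report = foreach ($esx in Get-VMHost) {
    foreach ($lun in Get-ScsiLun -VmHost $esx -LunType disk) {
        [pscustomobject]@{
            Host          = $esx.Name
            CanonicalName = $lun.CanonicalName
            # RuntimeName looks like vmhba1:C0:T0:L4 -- take the part after ":L"
            LunID         = ($lun.RuntimeName -split ':L')[-1]
        }
    }
}
# Devices whose LUN ID differs between hosts:
$report | Group-Object CanonicalName |
    Where-Object { ($_.Group.LunID | Select-Object -Unique).Count -gt 1 }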
This is the start of many blog posts around the recent Purity 5.0 release. I figured I would start with one that doesn’t require an upgrade of Purity to even get!
Alongside Purity 5.0, we released version 3.0 of the FlashArray plugin for the vSphere Web Client. It is bundled in Purity 5.0, so once you upgrade any one of your FlashArrays, you can use it to upgrade the plugin in one or all of your vCenters.
Let me be clear though–if you want to use VVols or ActiveCluster, you need Purity 5.0. Without Purity 5.0 you can still use the 3.0 plugin, of course, but only its non-VVol and non-ActiveCluster features.
Continue reading FlashArray 3.0 Plugin for the vSphere Web Client
A new ESXi 6.5 patch came out today:
And I wanted to upgrade my whole lab environment to it, but I haven’t set up Auto Deploy or Update Manager yet (I plan to, which will make all of this much easier to manage). So I wrote a quick and dirty PowerCLI script that applies the latest patch to each host and, if the host doesn’t have any VMs on it, puts it into maintenance mode and reboots it. I will reboot the other ones as needed.
It’s short, so not really even worth throwing on GitHub, but I might make it cleaner and smarter at some point and put it there. Continue reading Upgrading ESXi environment with PowerCLI
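The logic described above–patch every host, then put the empty ones into maintenance mode and reboot–could be sketched roughly like this. To be clear, this is my illustration of the approach, not the original script; the patch bundle URL is a placeholder, and it assumes an existing Connect-VIServer session.

```powershell
# Sketch only -- the bundle URL below is a made-up placeholder.
$patchZip = 'http://mydepot/ESXi650-update-bundle.zip'
foreach ($esx in Get-VMHost) {
    # Apply the patch bundle to the host
    Install-VMHostPatch -VMHost $esx -WebPath $patchZip
    # Only disrupt hosts with no powered-on VMs
    $running = Get-VM -Location $esx | Where-Object { $_.PowerState -eq 'PoweredOn' }
    if ($running.Count -eq 0) {
        Set-VMHost -VMHost $esx -State Maintenance -Confirm:$false
        Restart-VMHost -VMHost $esx -Confirm:$false
    }
}
```

Hosts that still have VMs on them get the patch staged but keep running until you reboot them yourself.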
One of the most strategic benefits of Virtual Volumes is how it opens up your data mobility. Because there is no more VMDK encapsulation, VVols are just block volumes with whatever file system your guest OS in the VM puts on it. So a VVol is really just a volume hosting NTFS, or XFS or whatever. So if a target can read that file system, it can use that VVol. It does not have to be a VMware VM.
Let me start out with: YES our VVols deployment will be GA VERY soon. I am sorry (but not really) for continuing to tease VVols here.
This is one of the reasons we do not treat VVols on the FlashArray any differently than any other volume–because they aren’t different! So there is no reason you can’t move the data around. So why block it??
Some possibilities this functionality opens up:
- Take an RDM and make it a VVol
- Take a VVol and present it to an older VMware environment as an RDM
- Take a VVol and present it, or a copy of it, to a physical server.
- On the FlashArray we are also introducing something called CloudSnap, which will let you take snapshots of volumes (including VVols) and send them to NFS or S3, to be brought up as an EBS volume for an EC2 instance.
Continue reading VVol Data Mobility: Data from Virtual to Physical
I have blogged a decent amount recently about VVols, and in many of those posts I mention config VVols. When using vSphere Virtual Volumes, VMs have one, some, or all of the following VVol types:
- Data VVols–every virtual disk you add creates a data VVol on your array
- Swap VVol–when you power on a VVol-based VM, a swap VVol is created. When you power it off, this is deleted.
- Memory VVol–when you create a snapshot and store the memory state, or when you suspend a VM, this is created.
- Config VVol–represents a folder on a VVol datastore.
This statement about config VVols deserves a bit more attention, I think. What does it really mean? Understanding config VVols is important when it comes to recovery and the like. So let’s dig into this.
Continue reading What is a Config VVol Anyways?
Quick post. I put some lightboard videos together on vSphere Virtual Volumes. Lightboard videos are pretty fun to do; the unfortunate part is that I have horrible handwriting. So I immediately apologize for that.
A common question I get with these videos is how I write backwards. I don’t. I am nowhere near that skilled–as you can see, I can barely write forwards. I write normally, which appears backwards, and the video team mirrors the video.
This is a three-part series; the entire playlist can be found here:
Continue reading VVol Lightboard Videos
Migrating VMDKs or virtual mode RDMs to VVols is easy: Storage vMotion. No downtime, no pre-creating of volumes. Simple and fast. But physical mode RDMs are a bit different.
As we all begrudgingly admit, there are still more than a few Raw Device Mappings out there in VMware environments. There are two primary use cases:
- Microsoft Clustering. Virtual disks can only be used for Failover Clustering if all of the VMs are on the same ESXi host, which feels a bit like defeating the purpose. So most opt for RDMs so they can split the VMs up.
- Physical to virtual. Sharing copies of data between physical servers and VMs (or another hypervisor) is the most common reason I see these days, mostly around database dev/test scenarios. The VMDK encapsulation can keep your data from being easily shared, so RDMs provide a workaround.
Continue reading Moving from an RDM to a VVol
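For the easy path mentioned above–VMDKs and virtual mode RDMs–the whole migration is one Storage vMotion in PowerCLI. The VM and datastore names below are placeholders of my own, and a Connect-VIServer session is assumed:

```powershell
# Sketch: Storage vMotion a VM's disks onto a VVol datastore.
# 'MyVM' and 'FlashArray-VVol-DS' are placeholder names.
Get-VM -Name 'MyVM' |
    Move-VM -Datastore (Get-Datastore -Name 'FlashArray-VVol-DS')
```

Once the move completes, the VMDKs (and any virtual mode RDMs) come out the other side as VVols, no downtime required. Physical mode RDMs are the exception, which is what the rest of the post is about.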
Yes. Any questions?
Ahem, I suppose I will prove it out. The real answer is, well, maybe. It depends on the array.
So debates have raged for quite some time around the performance of virtual disk types, and while the difference has diminished drastically over the years, eagerzeroedthick has always outperformed thin. Therefore, many users opted not to use thin virtual disks.
So first off, why the difference?
Continue reading Do thin VVols perform better than thin VMDKs?