This is part 6 of this 8-part series. Questions about managing VMFS snapshots have been cropping up a lot lately, and I realized I didn’t have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing it. So I put together a long series on how to do all of this.
The series being:
- Mounting an unresolved VMFS
- Why not force mount?
- Why might a VMFS resignature operation fail?
- How to correlate a VMFS and a FlashArray volume
- How to snapshot a VMFS on the FlashArray
- How to mount a VMFS FlashArray snapshot
- Restoring a single VM from a FlashArray snapshot
Using vCenter and our Web Client plugin, recovering a snapshot is a pretty straightforward process. So the prerequisite here is having our Web Client plugin installed and configured. Info on that here. If you want to know the manual steps, scroll down further; the whole process that does not use the plugin, using just our GUI and vCenter, is described in detail. Continue reading VMFS Snapshots and the FlashArray Part VII: Mounting a FlashArray VMFS Snapshot
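For the manual route, the resignature step itself can also be driven from the ESXi command line. This is a sketch only; the datastore label "ds01" is a made-up example:

```shell
# List VMFS copies (e.g., presented FlashArray snapshot volumes) that the
# host currently sees as unresolved
esxcli storage vmfs snapshot list

# Resignature a copy, identified by the label of the original VMFS it was
# copied from ("ds01" is a hypothetical name)
esxcli storage vmfs snapshot resignature --volume-label=ds01
```

After the resignature, the copy is mounted under a new "snap-" prefixed datastore name, so it can coexist with the original VMFS.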
So vSphere 6.5 introduced VMFS-6, which came with the highly desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP on it for some reason. Immediate reasons that come to mind are:
- They disabled automatic UNMAP on the VMFS for some reason
- They need to get space back quickly and don’t have time to wait
When you run manual UNMAP, one of the options you can specify is the block count. Since ESXi 5.5, the UNMAP process iterates through the VMFS, issuing reclaim to one small segment of the VMFS at a time, until UNMAP has been issued to all of the free space. The block count dictates how big that segment is. By default, ESXi will use 200 blocks (which is 200 MB). Continue reading Issue with Manual VMFS-6 UNMAP and Block Count
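To make that iteration concrete, here is a sketch with hypothetical numbers: at the default 200-block (200 MB) reclaim unit, a datastore with 800 GiB of free space (a made-up figure) takes about 4,096 passes. The esxcli invocation in the comment uses an example datastore name:

```shell
# Hypothetical free space on the VMFS, in MB (800 GiB)
free_space_mb=$((800 * 1024))

# Default reclaim unit: 200 one-MB blocks issued per UNMAP pass
reclaim_unit_mb=200

# Number of passes the manual UNMAP loop would make over the free space
passes=$((free_space_mb / reclaim_unit_mb))
echo "UNMAP passes: $passes"

# The actual manual UNMAP, run from the ESXi shell ("ds01" is an example):
#   esxcli storage vmfs unmap --volume-label=ds01 --reclaim-unit=200
```

A larger reclaim unit means fewer passes, which is exactly why the block count matters for how long a manual UNMAP takes.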
Another UNMAP post. I was working on updating my best practices script the other day and I realized a lot of UNMAP configuration from a PowerCLI standpoint was not well documented, especially for vSphere 6.5, which introduced automatic UNMAP to VMFS. Automatic UNMAP is great. But what if someone turns it off? Or what if, for some reason, I want to disable it? Or I want to make sure it is on? Well, there are a lot of ways to do this, so let’s look at PowerCLI.
Continue reading Managing In-Guest UNMAP and Automatic VMFS-6 UNMAP with PowerCLI
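The post itself covers the PowerCLI angle; as a rough equivalent, the same per-datastore setting can be checked and changed from the ESXi shell. A sketch, assuming a VMFS-6 datastore with the hypothetical name "ds01":

```shell
# Show the current automatic UNMAP (space reclamation) priority
esxcli storage vmfs reclaim config get --volume-label=ds01

# Disable automatic UNMAP on this datastore...
esxcli storage vmfs reclaim config set --volume-label=ds01 --reclaim-priority=none

# ...or make sure it is on ("low" is the VMFS-6 default)
esxcli storage vmfs reclaim config set --volume-label=ds01 --reclaim-priority=low
```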
Another UNMAP post; are you shocked? A common question that came up was: what volumes have dead space? What datastores should I run UNMAP on?
My usual response was: well, it is hard to say. Dead space is introduced when you move a VM or delete one. The array will not release the space until you either delete the physical volume, overwrite it, or issue UNMAP. Until vSphere 6.5, UNMAP for VMFS was not automatic; you had to run a CLI command to do it. So that leads back to the question: I have 100 datastores, which ones should I run it on?
So to find out, you need to know two things:
- How much space the file system reports as currently being used.
- How much space the array is physically storing for the volume hosting that file system.
Continue reading Detecting what FlashArray VMFS Volumes Have Dead Space
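A minimal sketch of gathering those two numbers by hand, assuming shell access to an ESXi host and SSH access to the FlashArray (the array hostname is a placeholder):

```shell
# 1) File-system view: capacity and free space for each mounted VMFS
#    (used space = Size minus Free for the datastore in question)
esxcli storage filesystem list

# 2) Array view: physical space the FlashArray is storing per volume,
#    via the Purity CLI over SSH
ssh pureuser@flasharray.example.com purevol list --space
```

A large gap between the VMFS used space and the array-side footprint of the volume backing it is the hint that the datastore is a good UNMAP candidate.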
A question recently came up on the Pure Storage Community Forum about VMFS capacity alerts that said, to paraphrase:
“I am constantly getting capacity threshold (75%) alerts on my VMFS volumes but when I look at my FlashArray volume used capacity it is nowhere near that in used space. What can I do to make the VMware number closer to the FlashArray one so I don’t get these alerts?”
This question really boils down to: what is the difference between these numbers, and how do I handle it? So let’s dig into this. Continue reading VMFS Capacity Monitoring in a Data Reducing World