From my armchair over the past few weeks, I have been watching the myriad announcements from AWS at re:Invent, and a few things caught my eye (well, a lot of things did, but a few in particular related to storage).
The first thing to note was the change in the consistency model in S3. Up until now, consistency was “eventual” within S3 for certain operations, like changes to an object, and there are a ton of posts that do a great job of explaining this. Google “S3 eventual consistency” and you will find plenty of examples.
At a high level, when modifying or deleting objects, the change may not be immediately reflected. So on an immediate subsequent read, you may not get what you wrote. For busy environments with high change rates, this could lead to stale reads or even corruption. So you needed to understand the behavior and build to it, or around it.
At re:Invent, AWS announced that S3 now delivers strong read-after-write consistency automatically for S3 operations like GET, PUT, and LIST.
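If you want to see what this means in practice, here is a minimal sketch using the AWS Tools for PowerShell. The bucket and key names are placeholders, and credentials are assumed to already be configured:

```powershell
# A minimal sketch with AWS.Tools.S3; bucket/key names are placeholders
# and AWS credentials are assumed to be configured already.
Import-Module AWS.Tools.S3

# Overwrite an existing object...
Write-S3Object -BucketName my-bucket -Key test.txt -Content "version 2"

# ...then immediately read it back. With strong consistency, this now
# returns what was just written; under the old model it could return
# the previous version of the object.
Read-S3Object -BucketName my-bucket -Key test.txt -File ./test.txt
Get-Content ./test.txt
```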
When deployed on Windows, the Pure1 PowerShell Module takes advantage of Windows-based certificates in the user (or specified) certificate store. On Linux or macOS, it uses RSA private key pairs.
To relocate authentication from one non-Windows machine to another, you just copy the private key from wherever it is to the target. On Windows, though, you need to export the cert (which includes the private key) from the certificate store, and then you can copy that file to the target.
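For reference, the manual Windows process looks roughly like this with the built-in PKI cmdlets. The thumbprint is a placeholder, and the private key must have been marked exportable:

```powershell
# A sketch of the manual export/import with the built-in PKI cmdlets.
# <thumbprint> is a placeholder; the private key must be exportable.
$password = Read-Host -AsSecureString -Prompt "PFX password"

# On the source machine, export the cert plus private key to a PFX file:
Export-PfxCertificate -Cert Cert:\CurrentUser\My\<thumbprint> `
  -FilePath C:\temp\pure1.pfx -Password $password

# On the target machine, import it into the user certificate store:
Import-PfxCertificate -FilePath C:\temp\pure1.pfx `
  -CertStoreLocation Cert:\CurrentUser\My -Password $password
```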
In the latest release of the Pure1 PowerShell module, there is a new feature to do that for you, or at least to simplify the process of exporting the cert with the right settings.
Let’s walk through exporting and then importing the cert. In a future post I will go into some of the other enhancements in this release in more detail.
As always, the repo is here (along with the release notes), and the module is best installed/updated via the PowerShell Gallery:
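The install and update are one line each (the module name below is per the PowerShell Gallery listing):

```powershell
# Install from the PowerShell Gallery (name per the gallery listing):
Install-Module -Name PureStorage.Pure1
# Or update an existing install:
Update-Module -Name PureStorage.Pure1
```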
Sounds like a silly thing, but we all have to start somewhere. Generally when I dig into something new, I like to start from a place I know well. So when it comes to using a new API, I like to use a tool I know how to use. Kubernetes, and its API, is fairly new to me from a hands-on perspective. PowerShell, however, is not. I have a decent handle on that. So it seems to me a good place to start with the k8s API.
I don’t know if this is the best way, or even a good way, but it does work.
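To give a flavor of where I started, here is a minimal sketch of hitting the Kubernetes API from PowerShell. The API server address and bearer token are placeholders, and -SkipCertificateCheck (PowerShell 6+) disables TLS validation, so this is for lab use only:

```powershell
# A minimal sketch: list namespaces via the Kubernetes REST API.
# $server and $token are placeholders for your cluster and credentials.
$server = "https://my-cluster:6443"
$token  = "<bearer-token>"

$namespaces = Invoke-RestMethod -Uri "$server/api/v1/namespaces" `
  -Headers @{ Authorization = "Bearer $token" } `
  -SkipCertificateCheck   # lab use only: disables TLS verification

# Pull just the namespace names out of the response:
$namespaces.items.metadata.name
```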
JSON Web Tokens (JWTs) are part of the mechanism that we (and many modern REST implementations) use to authorize connections. I think the term authorize is the key here: authenticate vs. authorize. Think of it in a similar way to logging into a website. You initially log in (authenticate) to a website with a user name and password. But the next time you go to it, or re-launch your browser, you don’t have to. Why? Because you already authenticated. An authorization is stored in a cookie so you don’t have to authenticate again, at least for a certain amount of time, or for the length of that browser session.
This is often somewhat abstracted, but not always. If you want to directly authenticate to Pure1, for instance, you need to create a JWT. So let’s dig into that process. Then let’s talk about troubleshooting techniques for a rejected JWT.
The Anatomy of a JWT
So what is in a JWT? Well the data can vary, but in this case I will be talking about the data required by Pure1.
There are three parts:
The header indicates what algorithm is used to create the signature.
The payload indicates the information required by the authenticator. Expiration. User. Key. Whatever.
The signature is created by signing the header plus the payload data with your private key. So, an example.
For Pure1, the header always looks like so:
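```
{
  "alg": "RS256",
  "typ": "JWT"
}
```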
Basically saying: sign this JWT with RSA using SHA-256 (that is what RS256 means).
The payload is always structured the same, but the data varies:
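Something like this (the values here are placeholders):

```
{
  "iss": "<pure1-api-key>",
  "iat": 1604958600,
  "exp": 1607550600
}
```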
The iss property is a Pure1 API key assigned to an application. The iat property is the current epoch time, and the exp property is the expiration of the JWT, so it cannot be used after that time to authorize any more connections.
As you might notice, this information is formatted as JSON. But it is not sent that way. The data is sent via HTTPS, so the header and payload each need to be what is called Base64 URL-encoded, and then signed.
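To make that concrete, here is a hedged sketch of building and signing the JWT yourself in PowerShell 7.1 or later (ImportFromPem needs .NET 5+). The private key path and API key are placeholders, and the 30-day expiration is just an example:

```powershell
# A sketch of building and signing an RS256 JWT in PowerShell 7.1+.
# The private key path and API key below are placeholders.
function ConvertTo-Base64Url([byte[]] $Bytes) {
    # Standard Base64, then made URL-safe per the JWT spec
    [Convert]::ToBase64String($Bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_')
}

$header  = '{"alg":"RS256","typ":"JWT"}'
$now     = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
$payload = [ordered]@{ iss = "<pure1-api-key>"; iat = $now; exp = $now + 2592000 } | ConvertTo-Json -Compress

$headerB64  = ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes($header))
$payloadB64 = ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes($payload))
$toSign     = "$headerB64.$payloadB64"

# Sign header.payload with the RSA private key using SHA-256 (RS256):
$rsa = [System.Security.Cryptography.RSA]::Create()
$rsa.ImportFromPem((Get-Content "/path/to/private.pem" -Raw))
$sig = $rsa.SignData([Text.Encoding]::UTF8.GetBytes($toSign),
         [System.Security.Cryptography.HashAlgorithmName]::SHA256,
         [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)

$jwt = "$toSign." + (ConvertTo-Base64Url $sig)
$jwt
```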
A tool like the jwt.io debugger will create the JWT for you. If that JWT is different than one you generated elsewhere, your other JWT was incorrectly created. You can also add in your public key to ensure it is all good.
What to do with a bad JWT
So if you get an authorization error with Pure1, what should you do? Make sure the combination you are using is correct: the right API key, the right public key, the right private key. Figure out which one is wrong. The simplest thing is often to start over: create a new key pair, add the public one into Pure1, and retry.
The next step here is storage. I want to configure the ability to provision persistent storage in Tanzu Kubernetes. Storage is generally managed and configured through a specification called the Container Storage Interface (CSI). CSI is a specification created to provide a consistent experience for storage provisioning and management in an orchestrated container environment. There are a ton of different storage types (SAN, NAS, DAS, SDS, cloud, etc.) and 100x that in vendors. Management and interaction with all of them is different. Many people deploying and managing containers are not experts in any of these, and have neither the time nor the interest to learn them. And if you change storage vendors, do you want to have to change your entire practice in k8s for it? Probably not.
So CSI takes some proprietary storage layer and provides a standard API mapping on top of it.
Vendors can take that and build a CSI driver that manages their storage but provides a consistent experience above it.
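That consistency is easiest to see at the point of consumption: a persistent volume claim looks the same regardless of the backend, and only the StorageClass name changes. A sketch (the class name is hypothetical), applied with kubectl from PowerShell:

```powershell
# A sketch: request a 10 GiB volume via a hypothetical CSI-backed
# StorageClass. Swapping vendors means swapping storageClassName only.
@"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-csi-class
"@ | kubectl apply -f -
```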
At Pure Storage, for instance, we have our own CSI driver called Pure Service Orchestrator, which I will get to in a later series. For now, let’s get into VMware’s CSI driver, which is part of a whole offering called Cloud Native Storage.
This has two parts: the CSI driver, which gets installed in the k8s nodes, and the CNS control plane within vSphere itself, which does the selecting and provisioning of storage. This requires vSphere 6.7 U3 or later. A benefit of using TKG is that the various CNS components come pre-installed.
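To sanity-check that the pre-installed components are there, you can look for the vSphere CSI pods (the exact pod names can vary by TKG and CSI version):

```powershell
# Look for the vSphere CSI controller/node pods in kube-system
# (exact names can vary by version).
kubectl get pods -n kube-system | Select-String csi
```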
So in the previous post, I wrote about deploying a Tanzu Kubernetes Grid management cluster.
VMware defines this as:
A management cluster is the first element that you deploy when you create a Tanzu Kubernetes Grid instance. The management cluster is a Kubernetes cluster that performs the role of the primary management and operational center for the Tanzu Kubernetes Grid instance. This is where Cluster API runs to create the Tanzu Kubernetes clusters in which your application workloads run, and where you configure the shared and in-cluster services that the clusters use.
In other words, this is the cluster that manages them all. This is your entry point to the rest. You can deploy applications in it, but that is not usually the right move. Generally you want to deploy clusters that are specifically devoted to running workloads.
These clusters are called Tanzu Kubernetes clusters, which VMware defines as:
After you have deployed a management cluster, you use the Tanzu Kubernetes Grid CLI to deploy CNCF conformant Kubernetes clusters and manage their lifecycle. These clusters, known as Tanzu Kubernetes clusters, are the clusters that handle your application workloads, that you manage through the management cluster. Tanzu Kubernetes clusters can run different versions of Kubernetes, depending on the needs of the applications they run. You can manage the entire lifecycle of Tanzu Kubernetes clusters by using the Tanzu Kubernetes Grid CLI. Tanzu Kubernetes clusters implement Antrea for pod-to-pod networking by default.
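Deploying one is a single CLI call. For example (the dev plan is one of the default TKG plans; adjust to taste):

```
tkg create cluster cody-prod --plan=dev
```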
The VMs were all deployed and good, but it looks like the tkg CLI did not quite have the permissions to save the information to my local config. This is likely salvageable, but since the cluster is brand new, it is just as simple to delete it and re-run with sudo:
sudo tkg delete cluster cody-prod
This deletes the VMs and removes the references. The real fix here is likely folder permissions, but for now just use sudo with tkg when provisioning new resources.
I wrote a while back on how to deploy TKG on top of vSphere, but there have been some improvements and some changes, and I have personally learned more, so I thought it was time to write a new one.
The process requires a few things, but first the deployment of the management cluster, and there are a few options for this. A burgeoning option is the more integrated version, called the Tanzu Kubernetes Grid Service, where the supervisor cluster is built tightly into vSphere. This comes in two forms: vSphere with Tanzu, or VMware Cloud Foundation (VCF) with Tanzu. The latter is the most feature-rich, but of course requires VCF and NSX. The former doesn’t quite have all of the options, but does not require those two; it needs just vSphere and virtual distributed switches.
The third option is to deploy the management cluster directly. This has the fewest requirements, but also the least direct integration into vSphere. This is what I will focus on today; I will follow up with the other options. This choice is generally just called Tanzu Kubernetes Grid.