This morning I needed to upgrade one of my dev clusters to 1.17.4 and decided to capture the experience. Don’t worry, I sped up the Ansible output flying by.
I use Kubespray to deploy and upgrade my clusters. I didn’t really do anything to prepare; I can rebuild all of my clusters pretty easily from Terraform if anything breaks.
git clone git@github.com:kubernetes-sigs/kubespray.git
cd kubespray
## Make sure you copy your actual inventory. For more information see the kubespray github repo
ansible-playbook -i inventory/dev/inventory.ini -b -v upgrade-cluster.yaml
Watch it go for about 40 minutes in my case. Remember, this is a dev cluster and the pods I have running can restart all they want; I don’t care. Everything upgrades through the first part of the video. Now let’s upgrade Pure Service Orchestrator.
Now if you watch the video you will notice I had to add the Pure Storage Helm repo. This was a new jump box in the lab, so I had PSO installed, just not from this particular host. It is easy to add; more details are in the Pure Helm Chart README.
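For reference, adding the repo and upgrading the release looks roughly like this. The repo URL, release name, namespace, and chart name are assumptions from my setup; check the Pure Helm Chart README for the exact values to use in yours.

# Add the Pure Storage Helm repo and refresh the index
helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
# Upgrade the existing PSO release, reusing your existing values file
helm upgrade pure-storage-driver pure/pure-csi --namespace pso -f values.yaml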
I build and destroy Kubernetes clusters nearly weekly. Doing it on VMs makes this super easy. I also need to demo Pure Service Orchestrator, so in-guest iSCSI is a must. Following this repo should give any vSphere admin an easy way to learn kubectl, Helm, and PSO (of course PSO works with Pure FlashArray and FlashBlade). It uses Terraform to create the VMs and Kubespray to install k8s, and Ansible handles a few automations for package installs and updates.
I am going to try something new: rather than recreate the GitHub README here, I will just share the repo link.
I am pretty excited to be doing a webinar with Weaveworks on Weave Kubernetes Platform and Pure Storage. I met Damani at Kubecon and Re:Invent and we have been talking about doing this for months. I am excited to integrate Pure Service Orchestrator and Pure Storage into a platform providing a full collection of what you need to run k8s. Some things we will cover:
How the Weave Kubernetes Platform and its GitOps workflows unify deployment, management, and monitoring for clusters and apps
How Pure Service Orchestrator accelerates application build and delivery with six nines of storage uptime. PSO works on-prem and in the public cloud.
Live Demo – I am going to show some CSI goodness. Promise.
A working container registry that your Kubernetes cluster can pull images from.
Jenkins and a service account Jenkins can use to do things in K8s.
Jenkinsfile
Go ahead and fork my repo https://github.com/2vcps/python-twitter-bot to your own GitHub account. Now look at the Jenkinsfile below, inside the repo. There are a few things for you to modify for your environment.
Create a ServiceAccount to match the serviceAccountName field in the YAML. These are the permissions the pod that builds and deploys the bot will use during the process. If you get this wrong, there will be errors.
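A minimal sketch of creating that account, assuming the jenkins-k8s name from the pod template below and a jenkins namespace (both assumptions from my environment; scope the permissions down to what your pipeline actually needs):

kubectl create namespace jenkins
kubectl -n jenkins create serviceaccount jenkins-k8s
# cluster-admin is the lazy lab option since the pipeline creates namespaces and deployments;
# use a properly scoped Role/ClusterRole for anything real
kubectl create clusterrolebinding jenkins-k8s-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=jenkins:jenkins-k8s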
Make sure the images in the file all exist in your private registry. The first image tag you see is used to run kubectl and kustomize; I suggest building this image from the cloud-builders community repo. The Dockerfile is here: https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/kustomize. The second image is the public Kaniko image; using that specific build is the only way it will function inside a container. Kaniko is a standalone tool for building container images, and it does not require root access to the Docker engine the way a ‘docker build’ command does. Also notice there is a harbor-config volume that lets Kaniko push to my Harbor registry; please create the equivalent Docker config for your container registry (the pod template below mounts it as the harbor-config ConfigMap). Finally, the kubectl portion is commented out and only left behind for reference, since the kustomize image contains both the kubectl and kustomize commands.
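One way to create that Docker config, assuming a ConfigMap named harbor-config in a jenkins namespace as in the pod template below (registry URL and namespace are placeholders, and a Secret works the same way if you prefer to keep credentials out of a ConfigMap):

# Log in once so ~/.docker/config.json contains the auth for your registry
docker login yourregistry
# Publish that config as the harbor-config ConfigMap Kaniko mounts at /root/.docker
kubectl -n jenkins create configmap harbor-config \
  --from-file=config.json=$HOME/.docker/config.json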
The last thing to take note of is the set of commands kustomize uses to create a new deployment manifest called builddeploy.yaml. This way we build and tag the container image each time and the deployments are updated with the new tag. We avoid using “latest”, as that can cause issues and is not best practice.
podTemplate(yaml: """
kind: Pod
spec:
  serviceAccountName: jenkins-k8s
  containers:
  - name: kustomize
    image: yourregistry/you/kustomize:3.4
    command:
    - cat
    tty: true
    env:
    - name: IMAGE_TAG
      value: ${BUILD_NUMBER}
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    command:
    - cat
    tty: true
    env:
    - name: IMAGE_TAG
      value: ${BUILD_NUMBER}
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug-539ddefcae3fd6b411a95982a830d987f4214251
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    env:
    - name: DOCKER_CONFIG
      value: /root/.docker/
    - name: IMAGE_TAG
      value: ${BUILD_NUMBER}
    volumeMounts:
    - name: harbor-config
      mountPath: /root/.docker
  volumes:
  - name: harbor-config
    configMap:
      name: harbor-config
"""
) {
  node(POD_LABEL) {
    def myRepo = checkout scm
    def gitCommit = myRepo.GIT_COMMIT
    def gitBranch = myRepo.GIT_BRANCH
    stage('Build with Kaniko') {
      container('kaniko') {
        sh '/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=yourregistry/you/py-bot:latest --destination=yourregistry/you/py-bot:v$BUILD_NUMBER'
      }
    }
    stage('Deploy and Kustomize') {
      container('kustomize') {
        sh "kubectl -n ${JOB_NAME} get pod"
        sh "kustomize edit set image yourregistry/you/py-bot:v${BUILD_NUMBER}"
        sh "kustomize build > builddeploy.yaml"
        sh "kubectl get ns ${JOB_NAME} || kubectl create ns ${JOB_NAME}"
        sh "kubectl -n ${JOB_NAME} apply -f builddeploy.yaml"
        sh "kubectl -n ${JOB_NAME} get pod"
      }
    }
    // stage('Deploy with kubectl') {
    //   container('kubectl') {
    //     // sh "kubectl -n ${JOB_NAME} get pod"
    //     // sh "kustomize version"
    //     sh "kubectl get ns ${JOB_NAME} || kubectl create ns ${JOB_NAME}"
    //     sh "kubectl -n ${JOB_NAME} apply -f deployment.yaml"
    //     sh "kubectl -n ${JOB_NAME} get pod"
    //   }
    // }
  }
}
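The “kustomize edit set image” and “kustomize build” steps assume a kustomization.yaml at the repo root. A minimal sketch of what that might contain is below; the actual file in the repo is the source of truth.

# kustomization.yaml (illustrative); 'kustomize edit set image' adds or updates the images: entry
resources:
- deployment.yaml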
Create a Jenkins pipeline and name it however you like; the important part is to set the Pipeline section to “Pipeline script from SCM”. This way Jenkins knows to use the Jenkinsfile in the git repository.
Webhooks and Build Now
Webhooks are what GitHub uses to push a new build to Jenkins. Due to the constraints of my environment I am not able to do this; my Jenkins instance cannot be contacted by the public API of GitHub, so for now I have to click “Build Now” manually. In a fully automated scenario I do suggest investigating how to configure webhooks so that every commit triggers a new pipeline build. When the build is successful you should see some lovely green stages like below. In this example there are only two stages: Build with Kaniko, which builds the container image and pushes it to my internal registry (Harbor), and Deploy and Kustomize, which takes the new image and updates the three deployments in my Kubernetes cluster.
During Pure kickoff last week I did several sessions on Pure Storage and Kubernetes for our yearly Tech Summit. It was very fun to prepare for. I wanted to do something different, so I decided to take the py-bot I was running on my Raspberry Pi and up-level it with integration into K8s and the FlashBlade with PVCs. This is the second post and covers how to build the docker container and deploy to k8s.
Create a secret in your k8s environment with the keys as variables. Side note: this is the only method I found that does not break the keys when storing them in K8s. If you have a better working way to do it, let me know.
Edit env-secret.yaml with your keys from Twitter and the search terms.
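For orientation, the secret looks something along these lines. The key names here are illustrative; use the actual field names in env-secret.yaml from the repo.

apiVersion: v1
kind: Secret
metadata:
  name: twitter-api-secret
type: Opaque
stringData:
  # Twitter API credentials; stringData avoids base64-encoding the values by hand
  consumer_key: "<your consumer key>"
  consumer_secret: "<your consumer secret>"
  access_token: "<your access token>"
  access_token_secret: "<your access token secret>"
  # Search terms used by the different deployments
  searchkey1: "<term for favretweet>"
  searchkey2: "<term for autoreply>"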
kubectl apply -f env-secret.yaml
Verify the keys are in your cluster.
kubectl describe secret twitter-api-secret
Step 3
Edit deployment.yaml and deploy the app. In my example I have three different deployments and one PVC. If you plan to not capture data, make sure to change the followback deployment to launch followFollowers.py and not followFollowers_data.py. Additionally, remove the PVC information if you are not using it.
Be sure to change the image for each deployment to your local repository path. Notice that the autoreply deployment uses the env variable searchkey2 and the favretweet deployment uses searchkey1. This allows each app to search on different terms.
Be careful, if you are testing the favretweet.py program and use a common word for search you will see many many likes and retweets.
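As a rough sketch, one of the deployments looks something like this. The image path, command, and env wiring are placeholders; the deployment.yaml in the repo is the real reference.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: favretweet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: favretweet
  template:
    metadata:
      labels:
        app: favretweet
    spec:
      containers:
      - name: favretweet
        image: yourregistry/you/py-bot:v1   # point this at your local registry
        command: ["python", "favretweet.py"]
        envFrom:
        - secretRef:
            name: twitter-api-secret        # exposes the API keys and searchkey1/searchkey2 as env vars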
Now deploy
kubectl apply -f deployment.yaml
kubectl get pod
NAME READY STATUS RESTARTS AGE
autoreply-df85944d5-b9gs9 1/1 Running 0 47h
favretweet-7758fb86c7-56b9q 1/1 Running 0 47h
followback-75bd88dbd8-hqmlr 1/1 Running 0 47h
kubectl logs favretweet-7758fb86c7-56b9q
INFO:root:API created
INFO:root:Processing tweet id 1229439090803847168
INFO:root:Favoriting and RT tweet Day off. No pure service orchestrator today. Close slack Jon, do it now.
INFO:root:Processing tweet id 1229439112966311936
INFO:root:Processing tweet id 1229855750702424066
INFO:root:Favoriting and RT tweet In Pittsburgh. Taking about... Pure Service Orchestrator. No surprise there. #PSO #PureStorage
INFO:root:Processing tweet id 1229855772789460992
INFO:root:Processing tweet id 1230121679881371648
INFO:root:Favoriting and RT tweet I nearly never repost press releases, but until I can blog on it. @PureStorage and Pure Service Orchestrator join… https://t.co/A6wxvFUUY7
INFO:root:Processing tweet id 1230121702509531137
kubectl logs followback-75bd88dbd8-hqmlr
INFO:root:Waiting... 300s
INFO:root:Retrieving and following followers
INFO:root:purelyDB
INFO:root:PreetamZare
INFO:root:josephbreynolds
INFO:root:PureBob
INFO:root:MercerRowe
INFO:root:will_weeams
INFO:root:JeanCarlos237
INFO:root:dataemilyw
INFO:root:8arkz
During Pure kickoff last week I did several sessions on Pure Storage and Kubernetes for our yearly Tech Summit. It was very fun to prepare for. I wanted to do something different, so I decided to take the py-bot I was running on my Raspberry Pi and up-level it with integration into K8s and the FlashBlade with PVCs. This first post just goes over the python code, how it works, and what you need to do to get it working for yourself.
Now it is time to impress your boss and use big words like Kubernetes. Read on in the next post below about how to run this bot as a deployment in Kubernetes.
Then edit the install.sh and add your credentials and vCenter information.
VCENTER="<vcenter name or IP>"
VC_ADMIN="<vc admin>"
VC_PASS="<vc password>"
VC_DATACENTER="<vc datacentername>"
VC_NETWORK="<vc vm network name>"
VMware requires all the masters to be tainted this way.
MASTERS=$(kubectl get node --selector='node-role.kubernetes.io/master' -o name)
for n in $MASTERS
do
kubectl taint nodes $n node-role.kubernetes.io/master=:NoSchedule
done
kubectl describe nodes | egrep "Taints:|Name:"
Run the installer shell script (sorry Windows users, install WSL or something).
# ./install.sh
To Remove
Remove all PVCs created with the Storage Class.
kubectl delete pvc <pvc-name>
Then run the cleanup script.
./uninstall.sh
You can run kubectl get all --all-namespaces to verify it is removed.
Note
If the CSI driver for vSphere does not start, the Cloud Controller may not have untainted the nodes when it initialized. I have seen it work automatically (as designed by VMware), and I have also had to run this to make it work:
NODES=$(kubectl get nodes -o name)
for n in $NODES
do
kubectl taint nodes $n node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule-
done
kubectl describe nodes | egrep "Taints:|Name:"
Create a new file called cns-vvols.yaml and paste the above YAML. You will have to replace the **DatastoreURL** with a datastore that matches your environment. vVols is not currently “supported”, but it can work with SPBM policies that point to FlashArrays and have no other policies enabled. Try it out if you like; just remember it is not supported, and that is why it is commented out.
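If you do not have the referenced YAML handy, a CNS StorageClass of the sort described above looks roughly like this. The name and datastoreurl value are examples only and must be replaced for your environment; the file from the original post is the authoritative version.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cns-vvols
provisioner: csi.vsphere.vmware.com
parameters:
  # Replace with the URL of your datastore, shown on the datastore summary page in vSphere
  datastoreurl: "ds:///vmfs/volumes/<datastore-id>/"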
Note: Most of this was written as the vSphere Cloud Provider in K8s was transitioning to the Cloud Native Storage CSI driver. Use CNS and CSI when you can; the more stable versions of PKS don’t support CSI by default because the K8s version is older. Use PSO for ReadWriteMany on Pure FlashBlade and CNS for block on FlashArray with VMFS datastores (vVols coming).
The Pure Service Orchestrator Team is excited to announce that PSO is now validated with PKS Enterprise 1.4. PSO is PKS Partner Ready.
I think it is crucial to understand the options for storage in Kubernetes first. If you have seen me present in the last 12 months you may have already seen this graphic. When I talk about hypervisor options I am referring to the vSphere Cloud Provider (and now Cloud Native Storage). At the time you deploy the cluster you provide credentials to contact vCenter and create, manage, and destroy persistent volumes on a vSphere datastore. This option is built into PKS Enterprise. It allows you to create a custom Storage Class per datastore or even use SPBM for provisioning. Yes, that means VMFS and vVols are supported today. There is no validation or certification needed to use this plugin (VCP or CNS) with Pure Storage; customers of PKS are already doing this today. VCP works very well with vVols, and I have advocated in my VMworld session that this unlocks data mobility options between DIY K8s clusters and PKS (or even between PKS clusters).
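As a sketch of what that per-datastore choice looks like with the in-tree provisioner, a VCP StorageClass can name either a datastore or an SPBM policy. The datastore and policy names here are placeholders for whatever exists in your vCenter.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vvol-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  # Either point at a specific datastore...
  datastore: "my-vvol-datastore"
  # ...or reference an SPBM policy instead:
  # storagePolicyName: "FlashArray-vVol-Policy"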
How does PSO fit in with PKS? PSO covers the case where you require “ReadWriteMany”, aka RWX: many scalable containers all attaching to and using a single Persistent Volume Claim. The Pure Storage FlashBlade handles this use case. If you need simplicity in storage management across multiple devices, both file and block, PSO can consolidate them into a single orchestration layer deployed to any cluster with a single command, and it will scale to new devices with a single command as well, simplifying what was traditionally a very complex portion of a K8s environment.
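A minimal RWX claim against PSO looks something like this, assuming the pure-file StorageClass that the PSO Helm chart creates for FlashBlade; the size is just an example.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany              # many pods can mount this claim at once
  resources:
    requests:
      storage: 100Gi
  storageClassName: pure-file  # PSO StorageClass backed by FlashBlade NFS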
Use vSphere Cloud Provider and Pure Service Orchestrator Side-by-Side
Due to the nature of the Storage Classes made by PSO and the ones you manually create for VCP/CNS, you can give the end users of PKS a choice with only an initial install effort for the Ops team and nearly zero effort from Day 2 onward. For more information please take a look at my “Getting Started with PKS” and “How to install PSO in a PKS Cluster” posts.
The thing that happened was…
So if you follow k8s development at all, you know that CSI became GA at the beginning of 2019. The in-tree vSphere Cloud Provider “driver” is now deprecated, and VMware has released Cloud Native Storage, the CSI driver for Kubernetes clusters running on vSphere. This does not change the need for Pure Service Orchestrator for RWX volumes, or for in-guest iSCSI where you want it. Officially, for RWO (ReadWriteOnce) you should be using CNS.
Learn more about PKS and Pure Storage with these posts: Getting started with Persistent Storage and PKS