When Mr. Top 10 vBlogger mentions you and your VMworld session, it is always appropriate to say thank you. If you are interested in what Pure Storage is doing at VMworld, be sure to read through Cody’s post to see all of our sessions. I will have some demos in the booth of Kubernetes on VMware vSphere with PKS (and more), so please be sure to come by and check them out.
Tag: kubernetes
Get going with MicroK8s
Last week I was getting stickers from the Ubuntu booth during the Open Infrastructure Conference in Denver. I asked a sorta dumb question, since all of this was so new to me. It was my very first Open Infra Conference (formerly OpenStack Summit), so I was asking a lot of questions.
I saw a sticker for MicroK8s (Micro-KATES).
Me: What is that?
Person in Booth: Do you know what MiniKube is?
Me: Yes.
Person in Booth: It is like that, but the opinionated version from Ubuntu.
Me: Ok, cool, my whole lab is Ubuntu, except when it isn’t. So I’ll try it out.
Ten minutes later? Kubernetes was running on my Ubuntu 16.04 VM.
Go over to https://microk8s.io/ to get the full docs.
Want a quick lab?
snap install microk8s --classic
microk8s.kubectl get nodes
microk8s.kubectl get services
Done. What? What!
Typing microk8s.blah for everything got slightly annoying, so alias it if you don’t already have kubectl installed. I didn’t; this was a fresh VM.
snap alias microk8s.kubectl kubectl
You can run this command to push the config into a file to be used elsewhere.
microk8s.kubectl config view --raw > $HOME/.kube/config
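The config view --raw trick above is handy for pointing other tools at MicroK8s. Here is a small sketch of peeking at which context a kubeconfig file selects; it runs against a scratch file, so it is safe to try without a cluster (the kubectl line in the comment assumes kubectl is installed):

```shell
#!/bin/sh
# Write a minimal kubeconfig to a scratch file, then read its current-context
# the same way you would inspect the one exported from microk8s.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
current-context: microk8s
EOF

# With kubectl installed you could instead run:
#   kubectl --kubeconfig "$cfg" config current-context
awk '/^current-context:/ {print $2}' "$cfg"   # prints: microk8s
```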
Want the Dashboard? Run this:
microk8s.enable dns dashboard
It took me 5 minutes to get to this point. Now I was ready to connect to some Pure FlashArrays.
First we need to enable privileged containers in MicroK8s. Add the following line to each of the two config files below:
--allow-privileged=true
# kubelet config
sudo vim /var/snap/microk8s/current/args/kubelet
#kube-apiserver config
sudo vim /var/snap/microk8s/current/args/kube-apiserver
Restart services to pick up the new config:
sudo systemctl restart snap.microk8s.daemon-kubelet.service
sudo systemctl restart snap.microk8s.daemon-apiserver.service
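If you would rather script the edit than open vim, here is a hedged sketch of adding the flag idempotently. It runs against a scratch file so you can dry-run it anywhere; swap in the real /var/snap/microk8s/current/args/kubelet and kube-apiserver paths (with sudo) when you are ready:

```shell
#!/bin/sh
# Append --allow-privileged=true to an args file only if it is not already there.
add_flag() {
  file="$1"
  grep -q -- '--allow-privileged=true' "$file" || \
    echo '--allow-privileged=true' >> "$file"
}

# Dry run against a scratch copy of an args file.
f=$(mktemp)
echo '--kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config' > "$f"
add_flag "$f"
add_flag "$f"   # second call is a no-op; the flag is only added once
grep -c -- '--allow-privileged=true' "$f"   # prints: 1
```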
Now you can install Helm and run the Pure Service Orchestrator Helm chart.
More info on that here:
What’s New in Pure Service Orchestrator?
This week (April 16, 2019), Pure released version 2.4.0 of Pure Service Orchestrator.
- PSO Operator is now the preferred install method for PSO on OpenShift 3.11 and higher versions.
The PSO Operator packages and deploys the Pure Service Orchestrator (PSO) on OpenShift for dynamic provisioning of persistent volumes on FlashArrays and FlashBlades. The minimum supported version is OpenShift 3.11.
This Operator is created as a Custom Resource Definition from the pure-k8s-plugin Helm chart using the Operator-SDK.
This installation process does not require a Helm installation.
- Added flasharray.iSCSILoginTimeout parameter with a default value of 20 seconds.
- Added flasharray.iSCSIAllowedCIDR parameter to list CIDR blocks allowed as iSCSI targets. The default value allows all addresses.
- The flexPath config parameter location in values.yaml has been moved; in versions up to 2.2.1 it lived under the orchestrator field. If you are upgrading from a version earlier than 2.3.0, you need to update values.yaml to use the new location of flexPath for PSO to work.
Some Highlights
The Operator is a big change for the install process. We are not leaving or abandoning Helm. I love Helm. Really. This was for our customers that do not allow Helm to run in their environments: mainly, the Tiller pod ran with more permissions than many security teams were comfortable with. Tillerless Helm is coming if that worries you. The Operator will be the preferred install method on OpenShift going forward.
The flexPath setting changing places in values.yaml is good to know about, especially when upgrading. We wanted to make that setting a top-level field.
Last but not least, the iSCSIAllowedCIDR parameter limits the iSCSI targets PSO will have the worker node log into during the Persistent Volume mount process. This is important in environments that serve many different clusters, each with its own iSCSI network. The iSCSI interfaces on a FlashArray can be divided with VLANs to serve those separate networks.
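In values.yaml terms, the two new flasharray parameters look something like this (the CIDR value here is just an example network, not a recommendation):

```yaml
flasharray:
  # seconds to wait for an iSCSI login; 20 is the new default
  iSCSILoginTimeout: 20
  # only log in to iSCSI targets inside this CIDR block
  # (by default all addresses are allowed)
  iSCSIAllowedCIDR: 10.21.200.0/24
```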
Another Kickoff and a New Year
November 2018 marked the finish of my 5th year at Pure. I really meant to write up a recap, but let’s just say November and December were super busy.
I was in Barcelona for VMworld EMEA at the beginning of November, then came home to visit more customers around the US and talk about using PSO with Kubernetes and Docker. Then my amazing oldest daughter had a soccer tournament in Orlando, FL. It was a great time with the family, and a reminder of why I do what I do.
Then back out to AWS
January was about building out some content for our sales and company kickoff, but also helping customers with their projects on K8s and Docker. That brings me to yet another kickoff, what I call the Orangest Show on Earth: a chance for me to see so many great friends and hear how successful their last year was. It was very satisfying to see sales reps and SEs that I worked with throughout the year get recognized for the growth they brought to the company, and very nice to be recognized by my leadership and peers with an award. When you work with such a wide range of regions and teams, it sometimes gets hard to see if you are making a difference, especially when you are remote like I am. At the beginning of 2018, almost no one at Pure knew what I was working on. Slowly but surely the excitement around K8s is growing, so I am looking forward to an even more exciting year here at Pure.
Some things I would like to do in 2019
- Share more on the blog. The transition from VMware (I still do VMware stuff!) to Kubernetes has provided many learning opportunities for me to share.
- Work on clusters as cattle with persistent data. Data is important, and the app/cluster can and should be able to move around it. Seamlessly.
- Finish some cloud/dev online classes I have started. Finding time with no distractions is key here.
New Pure Service Orchestrator Demo
You may want to make this full screen to see all the CLI glory.
What you will see in this demo is the initial install of Pure Service Orchestrator.
I would love to hear what you think of this, and any other ways I can show it off.
Kubecon 2018 Seattle Pure Storage – also We are hiring
I will be at the Pure Storage booth at Kubecon next week, December 11-13, Booth G7. Come see us to learn about Pure Service Orchestrator and Cloud Block Store for AWS, and find out how our customers are leveraging K8s to transform their applications and Pure Storage for their persistent storage needs.
It has been a fun time (nearly 2 years) at Pure working with customers that already love Pure Storage for things like Oracle, SQL, and VMware as they move into the world of K8s and containers, and also helping customers that never used Pure before move from complicated or underperforming persistent storage solutions to FlashArray or FlashBlade. With Cloud Block Store entering beta, and GA later next year, even more customers will want to see how to automate storage persistence on premises, in the public cloud, or in a hybrid model. All of that to say: if you are an architect looking to grow on our team, please find me at Kubecon. I want to meet you and learn why you love cloud, containers, Kubernetes, and automating all the things in-between.
- Send me a message on twitter @jon_2vcps
- Find me at the Pure Booth
- Stop me in the hall between sessions.
I look just like one of the following people:
Pure Service Orchestrator Guide
Over the last few months I have been compiling information that I have used to help customers when it comes to PSO. Using Helm and PSO is very simple, but with so many different ways to set up K8s right now, it can require broad knowledge of how volume plugins work. I will add new samples and workarounds to this GitHub repo as I come across them. For now, enjoy. I have the volume plugin paths for the Kubespray, Kubeadm, OpenShift, and Rancher versions of Kubernetes, plus some quota samples and even some PSO FlashArray snapshot and clone examples.
https://github.com/2vcps/PSO-Guide
A nice picture of some containers, because it annoys some people, which makes me think it is funny.
Storage Quotas in Kubernetes
One question I get asked since we released Pure Service Orchestrator is, “How do we control how much a developer/user can deploy?”
I played around with some of the settings from the K8s documentation for quotas and limits. I uploaded these into my gists on GitHub.
git clone git@gist.github.com:d0fba9495975c29896b98531b04badfd.git
#create the namespace as a cluster-admin
kubectl create -f dev-ns.yaml
#create the quota in that namespace
kubectl -n development create -f storage-quota.yaml
#or if you want to create CPU and Memory and other quotas too
kubectl -n development create -f quota.yaml
This limits users in that namespace to a certain number of Persistent Volume Claims (PVCs) and/or a total amount of requested storage. Both can be useful in scenarios where you don’t want someone to create 10,000 1Gi volumes on an array, or one giant 100Ti volume.
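For reference, a storage quota along the lines of the gist’s storage-quota.yaml looks roughly like this (the names and limits here are illustrative, not the gist’s exact contents):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: development
spec:
  hard:
    # at most 10 PVCs may exist in this namespace
    persistentvolumeclaims: "10"
    # total requested storage across all PVCs in the namespace
    requests.storage: 100Gi
```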
VMworld 2018 in Las Vegas
I was going to write my own post, but Cody Hosterman already did a great one.
Cody’s VMworld 2018 and Pure Storage Blog
The sessions are filling up so it will be a good idea to register and get there early. I am very excited about talking about Kubernetes on vSphere. It will follow my journey of learning containers and Kubernetes over the last 2 years or so. Hope everyone learns something.
Getting Started with Pure Service Orchestrator and Helm
Why Pure Service Orchestrator?
At Pure we have been working hard to develop a way to provide a persistent data layer that meets our customers’ expectations for ease of use and simplicity. The first iteration of this was released as the Docker and Kubernetes plugins.
The plugins provided automated storage provisioning, which solved a portion of the problem. All the while, we were working on the service that resided within those plugins: a service that would allow us to manage many arrays together, both block and file.
The new Pure Service Orchestrator allows smart provisioning across many arrays: on-demand persistent storage for developers, placed on the best array or adhering to your policies based on labels.
To install, you can use the traditional shell script as described in the readme file here.
The second way, which may fit better into your own software deployment strategy, is to use Helm. Since Helm provides a very quick and simple way to install, and it may be new to you, the rest of this post covers how to get started with PSO using Helm.
Installing Helm
Please be sure to install Helm using the correct RBAC instructions.
I describe the process in my blog here.
http://54.88.246.86/2018/03/27/getting-started-with-helm-for-k8s/
Also, get acquainted with the official Helm documentation at the following site:
https://docs.helm.sh/using_helm/
Once Helm is fully functioning with your Kubernetes cluster, run the following commands to set up the Pure Storage Helm repo:
helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
helm search pure-k8s-plugin
Additionally, you need to create a YAML file with the following format and contents:
arrays:
  FlashArrays:
    - MgmtEndPoint: "1.2.3.4"
      APIToken: "a526a4c6-18b0-a8c9-1afa-3499293574bb"
      Labels:
        rack: "22"
        env: "prod"
    - MgmtEndPoint: "1.2.3.5"
      APIToken: "b526a4c6-18b0-a8c9-1afa-3499293574bb"
  FlashBlades:
    - MgmtEndPoint: "1.2.3.6"
      APIToken: "T-c4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.7"
      Labels:
        rack: "7b"
        env: "dev"
    - MgmtEndPoint: "1.2.3.8"
      APIToken: "T-d4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.9"
      Labels:
        rack: "6a"
You can do a dry run of the installation if you want to see the output without changing anything on your cluster. Remember the path to the YAML file you created above.
helm install --name pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml --dry-run --debug
If you are satisfied with the output of the dry run, you can run the install now.
helm install --name pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml
Please check the GitHub page hosting the Pure Storage repo for more detail.
https://github.com/purestorage/helm-charts/tree/master/pure-k8s-plugin#how-to-install
Setting the Default StorageClass
Since we do not want to assume you only have Pure Storage in your environment, we do not force ‘pure’ as the default StorageClass in Kubernetes.
If you already installed the plugin via Helm and need to set the default class to pure, run this command:
kubectl patch storageclass pure -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
If you have another StorageClass set to default and you wish to change it to pure, you must first remove the default tag from the other StorageClass and then run the command above. Having two defaults will produce undesired results. To remove the default tag, run this command:
kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
Read more about these commands from the K8s documentation.
https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
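Once pure is the default class, a claim with no storageClassName lands on Pure storage automatically. A minimal sketch of such a PVC (the name and size are just examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # no storageClassName needed; the default class (pure) is used
```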
Demo
Maybe you are a visual learner? Check out these two demos showing the Helm installation in action.
Updating your Array information
If you need to add a new FlashArray or FlashBlade, simply add the information to your YAML file and update via Helm. You may edit the ConfigMap within Kubernetes directly, and there are good reasons to do it that way, but for simplicity we will stick to using Helm for changes to the array info YAML file. Once your file contains the new array or label, run the following command:
helm upgrade pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml --set ...
Upgrading using Helm
Using the same general process, you can run the following command to update the version of Pure Service Orchestrator.
helm upgrade pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml --version <target version>
Upgrading from the legacy plugin to the Helm version
Follow the instructions here:
There are a few platform-specific considerations you should make if you are using any of the following:
- Containerized kubelet (some flavors of K8s do this; Rancher and OpenShift are two)
- CentOS/RHEL Atomic Linux
- CoreOS
- OpenShift
- OpenShift Containerized Deployment
Be certain to read through the notes if you use any of these platform versions.