The latest version of the CSI-enabled Pure Service Orchestrator is now available. Snapshots and clones for Persistent Volume Claims enable new use cases for K8s clusters, such as moving data between apps and environments. Need to make an instant database copy for dev or test? It is super easy now.
Since this feature leverages the capabilities of the FlashArray, clones and snapshots carry zero performance penalty and only consume globally new blocks on the underlying array, which saves a ton of space when you make a lot of copies.
Make sure to read the Pure Service Orchestrator GitHub repo for what needs to be done to enable these features in your K8s cluster. See below for more information.
For the snapshot feature, ensure you have Kubernetes 1.13+ and that the feature gate is enabled via the following Kubernetes feature flag: --feature-gates=VolumeSnapshotDataSource=true
For the clone feature, ensure you have Kubernetes 1.15+ and that the feature gate is enabled via the following Kubernetes feature flag: --feature-gates=VolumePVCDataSource=true
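To make the two features concrete, here is a rough sketch of what the objects look like. The snapshot class name and PVC names are hypothetical; check the PSO repo for the exact names shipped with your version.

```yaml
# Snapshot of an existing PVC (K8s 1.13+, alpha snapshot API).
# "pure-snapshotclass" is a placeholder; use the class from the PSO docs.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: minio-snap
spec:
  snapshotClassName: pure-snapshotclass
  source:
    kind: PersistentVolumeClaim
    name: minio-pv-claim-rwx
---
# Clone: a new PVC that names an existing PVC as its dataSource (K8s 1.15+).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-clone
spec:
  storageClassName: pure-file
  dataSource:
    kind: PersistentVolumeClaim
    name: minio-pv-claim-rwx
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
```

Because the FlashArray does the copy, both objects are ready almost instantly regardless of volume size.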
I am excited to be at KubeCon yet again. I think this is my third time. Pure Storage will be in booth S92; come by and see some demos of our CSI plugin. Automating persistent storage is still a big need for many K8s clusters. Pure can make it simple, scalable, and highly available.
I will be at the booth and around a few sessions so please come and say hello.
Also, ask me all about how Pure will support K8s on VMware in all its various forms.
Sometimes I have to look up information and think it is so simple I shouldn't blog about it. Then I think I should share the link so that if anyone else goes looking for it, this might be helpful. Today the second impulse wins.
I just want to note that the alarm fires at around 180 days, which is super nice, but the renewed certificate is only good for 364 more days. This cannot be changed right now. For ease of use, though, I suggest renewing the certificate before it expires to avoid extra work.
$ git clone --branch <version> https://github.com/purestorage/helm-charts.git
$ cd helm-charts/operator-k8s-plugin
$ ./install.sh --namespace=pso --orchestrator=k8s -f values.yaml
$ kubectl get all -n pso
NAME                                    READY   STATUS    RESTARTS   AGE
pod/pso-operator-b96cfcfbb-zbwwd        1/1     Running   0          27s
pod/pure-flex-dzpwm                     1/1     Running   0          17s
pod/pure-flex-ln6fh                     1/1     Running   0          17s
pod/pure-flex-qgb46                     1/1     Running   0          17s
pod/pure-flex-s947c                     1/1     Running   0          17s
pod/pure-flex-tzfn7                     1/1     Running   0          17s
pod/pure-provisioner-6c9f69dcdc-829zq   1/1     Running   0          17s

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/pure-flex   5         5         5       5            5           <none>          17s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pso-operator       1/1     1            1           27s
deployment.apps/pure-provisioner   1/1     1            1           17s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/pso-operator-b96cfcfbb        1         1         1       27s
replicaset.apps/pure-provisioner-6c9f69dcdc   1         1         1       17s
Here is a sample deployment; you can copy all of this to a file called deployment.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
Now apply the file to the cluster
# kubectl apply -f deployment.yaml
Check the pod status
$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h
Check the PVC status
$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim-rwx   Bound    pvc-04817b75-f98b-11e9-8402-005056a975c2   101Gi      RWX            pure-file      4h19m
Learn more about PKS and Pure Storage with these posts: Getting started with Persistent Storage and PKS
To get started installing PSO on your PKS cluster using Helm, follow these instructions. Before installing PSO, the Plan in Enterprise PKS must have the "allow privileged" box checked. This setting allows the access needed to mount storage.
Scroll way down…
Apply the settings in the Installation Dashboard and wait for them to finish applying.
Create a cluster. Go get a Chick-fil-a Biscuit.
# pks create-cluster testcluster -e test.domain.local -p small
This is the quickest method to get PSO up and running. We are not adding any packages to the PKS stemcell; NFS is built in and therefore supported out of the box by PKS.
Installing PSO for FlashArray
Before deploying the PKS Cluster you must tell Bosh director to install a few things at runtime.
This is the same method used by other vendors to add agents and drivers to PKS or CloudFoundry.
Once you finish with the instructions you will have PSO able to mount both FlashArray and FlashBlade using their respective StorageClasses, pure-block and pure-file.
Please pay attention to networking
PKS does not allow the deployment to add another NIC to the VMs that are deployed. With PKS and NSX-T, this is also all kept behind logical routers. Please be sure the VMs have access to the storage. I would prefer no firewall and no routing between a VM and the storage, though this may not be possible. You may be able to use VLANs to reduce the routing to a minimum. Just be sure to document the full network path from VM to storage for future reference.
Using PSO
Here is a sample deployment; you can copy all of this to a file called deployment.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
Now apply the file to the cluster
# kubectl apply -f deployment.yaml
Check the pod status
$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h
Check the PVC status
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
minio-pv-claim-rwx Bound pvc-04817b75-f98b-11e9-8402-005056a975c2 101Gi RWX pure-file 4h19m
Learn more about PKS and Pure Storage with these posts: Getting started with Persistent Storage and PKS
Over the last few months I have done a lot of work with NSX-T. I have not done this much networking since my CCNA days. I wanted to share a couple of things I found on the web that were really helpful.
I was using NSX-T 2.4.2 and sometimes some troubleshooting guides were not very helpful as they were very specific to other versions.
There is some helpful information in those links. The main thing is that when you create certificates for NSX-T Manager, you should also apply them.
Also, make sure the NICs on your ESXi hosts are all set up the same way. I had four NICs with four different VLAN/trunk configs; no bueno. Also, since VXLAN wants frames to be at least 1600 MTU, I set everything to 9000 just for fun. That worked much better.
This all started as I was needing a side project. I had purchased a Raspberry Pi 4 in July but was looking for a great way to use it. Then in August I received another Pi 3 from the vExpert Community at VMworld.
I setup the Pi 3 to be an AirPlay speaker for my old basement stereo. What does this have to do with K8s? Nothing.
I took the Pi 4 and purchased three more to complete a mini-rack cluster using K3s (https://k3s.io/). This is a crazy-easy way to get Kubernetes up and running when you really don't want to mess with the internals of everything. Perfect for the Raspberry Pi.
So I now have a single-master cluster with three worker nodes. Although the master can run workloads too, so a four-node cluster is really the best way to describe it.
First was a multi-node deployment of Minio to front-end my ancient Iomega NAS. I wrote some Python to take time-lapse photos from my Pi Zero camera and push them into Minio. Pretty cool, and it should work with any S3 interface (hint hint).
Next, I wanted to make something that could help me do a little more with Python. So I took a look at Tweepy and created a Twitter developer account. @Jonbot17 was born.
Take a look at my github page for the code so far.
UPDATE: My bot wasn't just shadow banned but banned banned. It would retweet any tweet with #PureAccelerate; then the conference started, and the account generated a little too much activity for Twitter. I guess 1,000 tweets in a few hours is too much for the platform.
Does anyone have any other ideas of what I should run on my K3s Pi 4 cluster?
There was a question on Twitter and I thought I would write down my process for others to learn from. First, a little background: Kubernetes is managed mostly using a tool called kubectl (kube-control, kube-cuddle, kube-C-T-L, whatever). This tool looks for the configuration it needs to talk to the Kubernetes management API. A sanitized sample can be seen by running kubectl config view.
You can see there are Clusters, Contexts, and Users. The commands kubectl config get-contexts and kubectl config use-context allow you to see and switch contexts. In my use case I have a single context per cluster.
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* I-AM-GROOT@k8s-ubt18 k8s-ubt18 I-AM-GROOT
k8s-dev-1-admin@k8s-dev-1 k8s-dev-1 k8s-dev-1-admin
k8s-lab-1-admin@k8s-lab-1 k8s-lab-1 k8s-lab-1-admin
k8s-prod-1-admin@k8s-prod-1 k8s-prod-1 k8s-prod-1-admin
kubectl config use-context k8s-dev-1-admin@k8s-dev-1
Switched to context "k8s-dev-1-admin@k8s-dev-1".
Switching this way became cumbersome, so I now use a tool called kubectx, along with kubens: https://github.com/ahmetb/kubectx. With it, my prompt shows my cluster plus the namespace, like "k8s-dev-1-admin@k8s-dev-1:default". Pretty sweet to see that, and it has saved me from removing deployments from the wrong cluster.
The kubectl tool will also look in your environment for a variable named KUBECONFIG. Many times this will be set to KUBECONFIG=~/.kube/config. If you modify your .bash_profile on macOS or .bashrc on Ubuntu (and others), you can point that variable anywhere. I formerly had it pointed to a single file per cluster. For example:
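Something along these lines in .bash_profile or .bashrc (the per-cluster file names here are hypothetical; kubectl merges every file on this colon-separated path):

```shell
# One kubeconfig file per cluster; kubectl treats the colon-separated
# list as a single merged view of all clusters, contexts, and users.
export KUBECONFIG=$HOME/.kube/k8s-dev-1.config:$HOME/.kube/k8s-lab-1.config:$HOME/.kube/k8s-prod-1.config
```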
This worked great, but a few third-party management tools had issues switching between multiple files. For me the big one was the Kubernetes module for Python. So I moved to a single combined config file at ~/.kube/config.
So what do I do now?
Here is my basic workflow. I don't automate it yet, as I don't want to overwrite something carelessly.
1. Run an Ansible playbook that grabs the admin.conf file from /etc/kubernetes on the masters of the cluster.
2. Manually modify the KUBECONFIG environment variable to be KUBECONFIG=~/.kube/config:~/latestconfig/new.config
3. Run kubectl config view --raw to make sure it is all there (the --raw flag unhides the keys and such).
4. Copy ~/.kube/config to ~/.kube/config.something
5. Run kubectl config view --raw > ~/.kube/config
6. Open a new terminal using my original KUBECONFIG environment variable and make sure all the clusters show up.
7. Clean up the old config if I am feeling extra clean.
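The merge that kubectl performs when KUBECONFIG lists multiple files is roughly: combine the top-level clusters/contexts/users lists, with the first file winning on name conflicts. A minimal Python sketch of that logic (merge_kubeconfigs is a hypothetical helper operating on already-parsed config dicts, not kubectl's actual implementation):

```python
def merge_kubeconfigs(primary, extra):
    """Merge two parsed kubeconfig dicts; entries in `primary` win on name conflicts."""
    merged = dict(primary)
    for key in ("clusters", "contexts", "users"):
        have = {item["name"] for item in primary.get(key, [])}
        # Keep everything from the primary file, then append only the
        # entries from the extra file whose names are not already taken.
        merged[key] = primary.get(key, []) + [
            item for item in extra.get(key, []) if item["name"] not in have
        ]
    return merged

# Two simplified configs: the existing combined file and a freshly grabbed admin.conf.
old = {
    "current-context": "k8s-dev-1-admin@k8s-dev-1",
    "clusters": [{"name": "k8s-dev-1"}],
    "contexts": [{"name": "k8s-dev-1-admin@k8s-dev-1"}],
    "users": [{"name": "k8s-dev-1-admin"}],
}
new = {
    "clusters": [{"name": "k8s-lab-1"}, {"name": "k8s-dev-1"}],
    "contexts": [{"name": "k8s-lab-1-admin@k8s-lab-1"}],
    "users": [{"name": "k8s-lab-1-admin"}],
}
combined = merge_kubeconfigs(old, new)
print([c["name"] for c in combined["clusters"]])  # ['k8s-dev-1', 'k8s-lab-1']
```

The duplicate k8s-dev-1 entry from the new file is dropped, which is why step 3 above is worth running before overwriting anything.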
Not really hard or too complicated. I destroy clusters pretty often so sometimes I will blow away the config and then remerge my current clusters into a new config file.
Only a slight nudge from @CodyHosterman to put this post together.
Kubernetes deployed into AWS is a method many organizations are using to get into K8s. Whether you deploy K8s with kubeadm, kops, Kubespray, Rancher, WeaveWorks, OpenShift, or something else, the next big question is: how do I do persistent volumes? While EBS has StorageClass integrations, you may be interested in getting better efficiency and reliability than traditional block in the cloud. That is one of the great uses of Cloud Block Store: highly efficient and highly reliable storage built for AWS, with the same experience as the on-prem FlashArray. By utilizing Pure Service Orchestrator's Helm chart or operator, you can now take advantage of Container Storage as a Service in the cloud. Are you using Kubernetes in AWS on EC2 and have questions about how to take advantage of Cloud Block Store? Please ask me here in the comments or @jon_2vcps on Twitter.
1. Persistent Volume Claims will not always be 100% full. Cloud Block Store is deduped, compressed, and thin. Don't pay for 100% of a TB if it is only 1% full. I do not want to be in the business of keeping developers from getting the resources they need, but I also do not want to be paying for when they over-estimate.
2. Migrate data from on-prem volumes such as K8s PVCs, VMware vVols, and native physical volumes into the cloud and attach them to your Kubernetes environment. See the YouTube demo below for an example. What we see in the demo is creating an app in Kubernetes on prem, loading it with some data (photos), replicating that application to the AWS cloud, and using Pure Service Orchestrator to attach the data to the K8s-orchestrated application using Cloud Block Store. This is my reworking of Simon's tech preview demo from the original launch of Cloud Block Store last November.
3. Simple. Make storage simple. One common tweet I see on twitter from the Kubernetes detractors is how complicated Kubernetes can be. Pure Service Orchestrator makes the storage layer amazingly simple. A single command line to install or upgrade. Pooling across multiple devices.
Get started today: below I include some links on the different installs of PSO. Don't let the choices scare you. Container Storage Interface, or CSI, is the newest API for common interaction with all storage providers. While FlexVolume was the original storage solution, it makes sense to move forward with CSI, especially for newer versions of Kubernetes that include CSI by default. So whether you are starting to use K8s for the first time today or your cluster is on K8s 1.11, we have you covered. Use the links below to see the install process and prerequisites for PSO.
While my VMworld session this week discussed some of the architectural decisions to be made when deploying PKS on vSphere, my demo revolved around how, once it is up and running, to move existing data into PKS.
First, using the Pure FlashArray and vVols, we are able to automate that process and quickly move data from another K8s cluster into PKS. It is not limited to that, but this is the use case I started with.
Part 1 of the demo shows taking the persistent data from a deployment and cloning it onto the vVol that is created by using the vSphere Cloud Provider with PKS. vVols are particularly important because they keep the data in a native format and make copying, replication, and snapshotting much easier.
Part 2 is the same process just scripted using Python and Ansible.
Demo Part 1 – Manual process of migrating data into PKS
Demo Part 2 – Using Python and Ansible to migrate data into PKS