Use PKS Enterprise on VMware SDDC and Pure Storage

Pivotal Container Service (PKS) provides a deeply integrated Kubernetes (k8s) architecture for the VMware SDDC; it is a joint engineering project from VMware and Pivotal. In my conversations with Pure Storage customers and potential customers about Kubernetes, I often get asked how Pure Storage can help in a PKS Enterprise environment. The good news is there is a very easy path to using k8s with Pure + VMware + PKS.

The Architecture

Using Pure with PKS is actually very straightforward. Since the Pure FlashArray is already a leading choice for VMware environments, supporting PKS is nothing out of the ordinary.

Once you understand the underlying technology that integrates PKS into VMware, you quickly realize that highly reliable, stateless, shared storage is the best choice when deploying PKS.

The choice between drivers (shown in the graphic above) to deliver the storage is up to you. The vSphere Cloud Provider provides automated creation and management of the virtual disks presented to containers in PKS. It supports the use of vVols, which opens up great possibilities for your PKS environment. Pure Service Orchestrator uses a direct connection to Pure Storage FlashArrays, FlashBlades, and Cloud Block Stores. It is installed with a single Helm command or a Kubernetes Operator, and it includes Smart Provisioning to place volumes on the optimal storage device in your fleet.
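
To make that concrete, here is a minimal sketch of a persistent volume claim against a PSO storage class. The claim name and the pure-block class name are my own examples, not something PKS creates for you; check kubectl get storageclass on your install for the actual class names.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pks-demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: pure-block
  resources:
    requests:
      storage: 10Gi
EOF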

Which tool you choose will be dictated by your workload, and it is not an exclusive choice either; it is easy to do both. After VMworld I hope to publish the details on how to install PSO on PKS. If you have really good GitHub search fu you may be able to find the BOSH deployment.

Highly Reliable

Pure Storage has measured six nines (99.9999%) of uptime across its customer base. Many storage solutions for container environments require hours of planning and weeks of careful implementation to provide high availability. Do not spend time re-architecting your storage infrastructure for PKS; spend your time delivering k8s to your customers so they can deliver innovation for your business. Use the Pure Storage devices you already have. You may not even need a whole new dedicated array (don't tell sales I said that).

Stateless Arrays for Stateful Data

Migrating data should be eliminated from your daily tasks. FlashArray keeps moving toward a future where data always stays in place, and the ability to keep data in place across multiple hardware generations is a proven benefit of Pure. Migrating persistent storage in k8s, even on VMware, is a non-trivial task; depending on your scale it could take weeks of planning and flawless execution to accomplish non-disruptively. The underlying hardware should not be a concern for delivering applications, and Pure Storage has made that a reality since the FlashArray debuted seven years ago.

Shared Storage

Delivering highly reliable data across multiple PKS and vSphere clusters, so applications can fail over if the compute in an availability zone becomes unavailable, is key to delivering a cloud experience for your k8s rollout. While the Pure sales teams would gladly help you acquire a FlashArray per vSphere cluster hosting PKS, that is simply unneeded in nearly all situations, especially as you start your Kubernetes journey.

But Why PURE?

Simple: vVols on the FlashArray, combined with the PKS integration with vSphere, enable mobility of data and a freedom unavailable on a legacy datastore. Have a group that rolled their own k8s? FlashArray can clone their persistent data instantly into PKS using vVols. Need to copy data from a bare-metal (non-VM) k8s cluster to PKS? Pure vVols make this possible. Have multiple k8s clusters within PKS today that require the same data for test/dev/prod? Pure Storage enables this nearly instantly. Pure Storage FlashArray snapshots and clones move at the speed of an API call from any of our SDKs, from Python to PowerShell to Ansible to Terraform and more, giving you an easy way to fit Pure Storage into your Infrastructure as Code tools.

You could probably spend the next five hours reading blogs and papers about all the other benefits of Pure Storage, and they all apply to your PKS on vSphere environment, but I wanted to provide a few examples directly related to operating PKS on Pure.

VMworld 2019 Session

In my session at VMworld in San Francisco I will demonstrate how Pure Storage is able to instantly migrate persistent volumes from "other" k8s clusters to PKS. Make sure you make it to this session if you are considering PKS.

PSO and “Failed to Log in to Any iSCSI Targets.”

So I create and destroy Kubernetes clusters on vSphere on a pretty regular basis. Some I create with Terraform and Ansible; for others I use PKS. I have a plumbing test for Pure Service Orchestrator that mounts a single volume to a pod on each node.
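
As a rough sketch of that idea (not my exact manifest), a DaemonSet sharing one ReadWriteMany claim lands a pod with the volume on every node. The names and the pure-file storage class are assumptions here, so adjust them to your own install:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plumbing-claim
spec:
  accessModes:
    - ReadWriteMany              # shared across nodes, so a file (NFS) backed volume
  storageClassName: pure-file    # assumed PSO file storage class name
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: plumbing-test
spec:
  selector:
    matchLabels:
      app: plumbing-test
  template:
    metadata:
      labels:
        app: plumbing-test
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "touch /data/ok && sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: plumbing-claim
EOF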

Every once in a while I get an error like this, on just one node:

Failed to log in to any iSCSI targets! Will not be able to attach volume

To make sure the error isn't coming from PSO (and it shouldn't be, since the other nodes are working), run this command on the node that failed:

iscsiadm -m discovery -t st -p 192.168.230.24
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.24,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.25,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.26,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.27,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
192.168.230.24:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.25:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.26:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.27:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479

Now that is not the result it should be. At first I thought to restart iSCSI, and that didn't help. Then I thought, well, this is a lab, so let's just…

cd /etc/iscsi
rm -r nodes

Do not try this if you have other iSCSI targets for other storage; you will not be happy. At first I thought I should stop iSCSI before doing this, but it doesn't seem to make any difference. Now every node is able to mount the volume and start its pod. Pure Service Orchestrator keeps retrying the mount, so it didn't take long to see everything showing up the way I wanted.
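
The listing below shows the PSO DaemonSet pods and my test pods coming back to Running, from something like:

kubectl get pods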

NAME                                        READY   STATUS    RESTARTS   AGE
pure-flex-4zlcq                              1/1     Running   0          12m
pure-flex-7stfb                              1/1     Running   0          12m
pure-flex-g2kt2                              1/1     Running   0          12m
pure-flex-jg5cz                              1/1     Running   0          12m
pure-flex-n8wkw                              1/1     Running   0          6m34s
pure-flex-rtsv7                              1/1     Running   0          12m
pure-flex-vtph2                              1/1     Running   0          12m
pure-flex-w8x22                              1/1     Running   0          12m
pure-flex-wqr9k                              1/1     Running   0          12m
pure-flex-xwbww                              1/1     Running   0          12m
pure-provisioner-9c8dc9f79-xrq6d             1/1     Running   1          12m
redis-master-demolocal-1-779f74876c-9k24t    1/1     Running   0          12m
redis-master-demolocal-10-6695b56f47-zgqc7   1/1     Running   0          12m
redis-master-demolocal-2-778666b57-5xdh8     1/1     Running   0          6m3s
redis-master-demolocal-3-84848dfb87-fhj6n    1/1     Running   0          12m
redis-master-demolocal-4-7c9dfdffb9-6cjv5    1/1     Running   0          12m
redis-master-demolocal-5-65b555fc79-jjdkl    1/1     Running   0          12m
redis-master-demolocal-6-6d495bfdf-cb5r2     1/1     Running   0          12m
redis-master-demolocal-7-5c5db655-fx2qd      1/1     Running   0          12m
redis-master-demolocal-8-74bc65b8d9-2bt8h    1/1     Running   0          12m
redis-master-demolocal-9-65dd54c587-zb9p2    1/1     Running   0          12m

Thanks, @CodyHosterman. I am Incorrigible.

When Mr. Top 10 vBlogger mentions you and your VMworld session, it is appropriate to say thank you. If you are interested in what is going on with Pure Storage at VMworld, be sure to read through Cody's post to see all of our sessions. I will have some demos in the booth of Kubernetes on VMware vSphere with PKS (and more), so please be sure to come by and check them out.

Unsure what Cody means…

Get going with MicroK8s

Last week I was getting stickers from the Ubuntu booth during the Open Infrastructure Conference in Denver. I asked a sort of dumb question, since this was all so new to me; it was my very first Open Infra Conference (formerly OpenStack Summit), and I was asking a lot of questions.

I saw a sticker for MicroK8s (Micro-KATES).

Me: What is that?

Person in Booth: Do you know what MiniKube is?

Me: Yes.

Person in Booth: It is like that, but Ubuntu's opinionated version.

Me: Ok, cool, my whole lab is Ubuntu, except when it isn’t. So I’ll try it out.

Ten minutes later? Kubernetes is running on my Ubuntu 16.04 VM.

Go over to https://microk8s.io/ to get the full docs.

Want a quick lab?

sudo snap install microk8s --classic
microk8s.kubectl get nodes
microk8s.kubectl get services

Done. What? What!

So it was slightly annoying to me to type microk8s.blah for everything. So alias it if you don't already have kubectl; I didn't, since this was a fresh VM.

sudo snap alias microk8s.kubectl kubectl

You can run this command to push the config into a file to be used elsewhere.

microk8s.kubectl config view --raw > $HOME/.kube/config

Want the Dashboard? Run this:

microk8s.enable dns dashboard

It took me five minutes to get to this point. Now I am like, OK, let's connect to some Pure FlashArrays.

First we need to enable privileged containers in MicroK8s. Add this line to the following two config files:

--allow-privileged=true

# kubelet config
sudo vim /var/snap/microk8s/current/args/kubelet
# kube-apiserver config
sudo vim /var/snap/microk8s/current/args/kube-apiserver
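
If you prefer not to open an editor, appending the flag works too; this is just a sketch using the same file paths as above:

echo "--allow-privileged=true" | sudo tee -a /var/snap/microk8s/current/args/kubelet
echo "--allow-privileged=true" | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver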

Restart services to pick up the new config:

sudo systemctl restart snap.microk8s.daemon-kubelet.service
sudo systemctl restart snap.microk8s.daemon-apiserver.service

Now you can install Helm and run the Pure Service Orchestrator Helm chart.

More info on that here:

https://github.com/purestorage/helm-charts
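
For reference, the install boils down to something like the sketch below. This assumes Helm 2 (with Tiller), the chart repo URL from the README, and a values.yaml already filled in with your array endpoints and API tokens, so check the helm-charts README for the current steps:

# install Tiller into the cluster (Helm 2)
helm init

# add the Pure Storage chart repo and install PSO
helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
helm install --name pso --namespace pso pure/pure-k8s-plugin -f values.yaml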

The sticker joined my laptop.

Namespace Issues when Removing CRD/Operators

With the latest release of Pure Service Orchestrator, we added support for a non-Helm installation for environments that do not allow Helm. This new method uses an Operator to set up and install PSO. The result is the exact same functionality, but with a security model more agreeable to some K8s distro vendors.

I do live demos of PSO a handful of times a day. Even though I use Terraform and Ansible to automate the creation of my lab K8s clusters, I don't want to rebuild them that many times a day, so I usually just tear down PSO and leave my cluster ready for the next demo.

Removing the CRD and the namespace created when installing the Operator has a couple of issues. One small issue is that the Operator method creates a new namespace, "pso-operator" by default, although you can choose your own namespace name at install time; I often choose "pso" for simplicity. As we have discovered, deleting a namespace that had a CRD installed into it hangs in the status "Terminating", for, like, forever. FOR-EV-ER. This seems to be an issue dating back quite a ways in K8s land.

https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-448120772


From a couple of GitHub issues and the help of Simon "I don't do the twitter" Dodsley, this is the process for deleting the CRD first and then the namespace. This method keeps the namespace from hanging in the "Terminating" state.

# Removing the pso-operator
kubectl delete all --all -n pso-operator

# If you haven't done it already, don't delete the namespace yet.
kubectl get ns
NAME          STATUS   AGE
default       Active   2d21h
kube-public   Active   2d21h
kube-system   Active   2d21h
pso-operator  Active   14h

kubectl get crd
NAME                         CREATED AT
psoplugins.purestorage.com   2019-04-17T01:37:31Z

# ok so...
kubectl delete crd psoplugins.purestorage.com
customresourcedefinition.apiextensions.k8s.io "psoplugins.purestorage.com" deleted

# does it hang? yeah it does
^C
# stuck terminating? 
kubectl describe crd psoplugins.purestorage.com
# snipping non-relevant output
...
Conditions:
    Last Transition Time:  2019-04-17T01:37:31Z
    Message:               no conflicts found
    Reason:                NoConflicts
    Status:                True
    Type:                  NamesAccepted
    Last Transition Time:  <nil>
    Message:               the initial names have been accepted
    Reason:                InitialNamesAccepted
    Status:                True
    Type:                  Established
    Last Transition Time:  2019-04-18T13:54:36Z
    Message:               CustomResource deletion is in progress
    Reason:                InstanceDeletionInProgress
    Status:                True
    Type:                  Terminating
  Stored Versions:
    v1

# Run this command to allow it to delete
kubectl patch crd/psoplugins.purestorage.com -p '{"metadata":{"finalizers":[]}}' --type=merge
customresourcedefinition.apiextensions.k8s.io/psoplugins.purestorage.com patched

# Re-run the crd delete
kubectl delete crd psoplugins.purestorage.com

# Confirm it is gone
kubectl get crd
No resources found.

# Remove the Namespace
kubectl delete ns pso-operator
namespace "pso-operator" deleted

# Verify removal
kubectl get ns
NAME          STATUS   AGE
default       Active   2d21h
kube-public   Active   2d21h
kube-system   Active   2d21h

If you sort of ignored my warning above and tried to remove the namespace BEFORE successfully removing the CRD, follow this procedure.

Namespace Removal

# Find that pesky 'Terminating' namespace
kubectl get ns
NAME           STATUS        AGE
default        Active        2d20h
kube-public    Active        2d20h
kube-system    Active        2d20h
pso            Active        13h
pso-operator   Terminating   35h

kubectl cluster-info
# run kubectl proxy in the background
kubectl proxy &

# output the namespace to json
kubectl get namespace pso-operator -o json >tmp.json

# Edit tmp.json to remove the finalizer; the spec: section should look like this:
"spec": {
        "finalizers": [
        ]
    },

# Now send that tmp.json to the API server
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/pso-operator/finalize

# Check your namespaces
kubectl get ns
NAME          STATUS   AGE
default       Active   2d20h
kube-public   Active   2d20h
kube-system   Active   2d20h
pso           Active   13h

# stop the kubectl proxy: bring it back to the foreground and Ctrl-C

fg
^C

What’s New in Pure Service Orchestrator?

This week (April 16, 2019), Pure released version 2.4.0 of the Pure Service Orchestrator for Kubernetes. This included (from the release notes):

  • PSO Operator is now the preferred install method for PSO on OpenShift 3.11 and higher versions.
    The PSO Operator packages and deploys the Pure Service Orchestrator (PSO) on OpenShift for dynamic provisioning of persistent volumes on FlashArrays and FlashBlades. The minimum supported version is OpenShift 3.11.
    This Operator is created as a Custom Resource Definition from the pure-k8s-plugin Helm chart using the Operator-SDK.
    This installation process does not require Helm installation.
  • Added flasharray.iSCSILoginTimeout parameter with default value of 20sec.
  • Added flasharray.iSCSIAllowedCIDR parameter to list CIDR blocks allowed as iSCSI targets. The default value allows all addresses.
  • The flexPath config parameter location in values.yaml has moved; it is no longer nested under the orchestrator field as it was in version 2.2.1. If you are upgrading from a version earlier than 2.3.0, you need to update values.yaml to use the new location of flexPath for PSO to work.

Some Highlights

The Operator is a big change for the install process. We are not leaving or abandoning Helm; I love Helm, really. This is for our customers that do not allow Helm to run in their environments, mainly because the Tiller pod ran with more permissions than many security teams were comfortable with. (Tillerless Helm is coming, if that worries you.) The Operator will be the preferred method for Red Hat OpenShift 3.11 and higher.

The flexPath setting changing places in values.yaml is good to know about. We wanted to make it a top-level setting and separate it from being nested too far down. While we are still on the FlexVolume driver this is important; the newest values.yaml in the Helm chart even has several example paths depending on your distro of K8s. This becomes a non-issue with the CSI plugin we are working on. (Hooray!)

Last but not least, the iSCSIAllowedCIDR setting limits which iSCSI targets PSO will have the worker node log in to during the persistent volume mount process. This is important for environments that serve many different clusters, each with its own iSCSI network. The iSCSI interfaces on a FlashArray can be divided with VLANs, but even so, the traditional way of acquiring target IPs results in a long list of addresses to attempt to log in to. The iSCSIAllowedCIDR setting tells PSO which subnet your cluster should try to mount and log in to. The result is faster mounting and less noise from timeouts against networks your cluster cannot reach.
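
As a sketch of how those settings get applied, assuming a release named pso and example values for the CIDR, timeout, and FlexVolume path (yours will differ):

helm upgrade pso pure/pure-k8s-plugin -f values.yaml \
  --set flasharray.iSCSIAllowedCIDR=192.168.230.0/24 \
  --set flasharray.iSCSILoginTimeout=30 \
  --set flexPath=/usr/libexec/kubernetes/kubelet-plugins/volume/exec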

North Georgia Mountains

It is “NFSEndPoint”

I think I have now updated my blog post and PSO guide to reflect this change, in case you are using Pure Service Orchestrator with FlashBlade. The original YAML key for the arrays when installing PSO was "NfsEndPoint". At some point it was fixed to expect "NFSEndPoint", matching the proper capitalization of NFS. I never updated my blog and docs until now.

Sample values.yaml

arrays:
  FlashArrays:
    - MgmtEndPoint: "1.2.3.4"
      APIToken: "a526a4c6-18b0-a8c9-1afa-3499293574bb"
      Labels:
        rack: "22"
        env: "prod"
    - MgmtEndPoint: "1.2.3.5"
      APIToken: "b526a4c6-18b0-a8c9-1afa-3499293574bb"
  FlashBlades:
    - MgmtEndPoint: "1.2.3.6"
      APIToken: "T-c4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.7"
      Labels:
        rack: "7b"
        env: "dev"
    - MgmtEndPoint: "1.2.3.8"
      APIToken: "T-d4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.9"
      Labels:
        rack: "6a"

Another Kickoff and a New Year

November 2018 marked the finish of my fifth year at Pure. I really meant to write up a recap, but let's just say November and December were super busy.

Cotton House Hotel in Barcelona

I was in Barcelona for VMworld EMEA at the beginning of November, then came home to visit more customers around the US and tell them about using PSO with Kubernetes and Docker. Then my amazing oldest daughter had a soccer tournament in Orlando, FL. It was a great time with the family and why I do what I do.

Post Tournament Team pic. Go AFU U13 Girls.
Disney with the Family

Then it was back out to AWS re:Invent. This was Pure's first big presence since we launched our suite of cloud data services the week before, and it was great to share what we had been working on in the background for the last year. Cloud Block Store, CloudSnap, and StorReduce have definitely increased the interest in hybrid cloud; many current and prospective customers are very excited. I came home to take a breather and then went off to KubeCon Seattle, where our team was overwhelmed with conversations about how Pure can make cloud-native apps persistent with easy storage as a service through Pure Service Orchestrator. Being able to run the same APIs in the public cloud and on-prem is very appealing to teams rolling out apps in all kinds of use cases. Dev in the cloud and prod on-prem? Yes. Dev on-prem and prod in the cloud? Yes. Dev and prod in the cloud? You guessed it: yes.

The Pure team at KubeCon Seattle

January was about building out content for our sales and company kickoff, but also helping customers with their projects on K8s and Docker. That brings me to yet another kickoff, what I call the Orangest Show on Earth: a chance for me to see so many great friends and see how successful their last year was. It was very satisfying to see sales reps and SEs that I worked with throughout the year get recognized for the growth they brought to the company, and it was very nice to be recognized by my leadership and peers with an award. When you work with such a wide range of regions and teams it sometimes gets hard to see if you are making a difference, especially when you are remote like I am. At the beginning of 2018 almost no one at Pure knew what I was working on. Slowly but surely the excitement around K8s is growing, so I am looking forward to an even more exciting year here at Pure.

Kingsman jackets for the team. So much orange and such a great team.

Some things I would like to do in 2019

  • Share more on the blog. The transition from VMware (I still do VMware stuff!) to Kubernetes has provided many learning opportunities for me to share.
  • Work on Clusters as Cattle with Persistent data. Data is important and the app/cluster can or should move around it. Seamlessly.
  • Finish some cloud/dev online classes I have started. Finding time with no distractions is key here.
In Seattle, Pizza and Star Wars / Ron Swanson art? YES!

New Pure Service Orchestrator Demo

You may want to make this full screen to see all the CLI glory.

What you will see in this demo is the initial install of Pure Service Orchestrator on an upstream version of Kubernetes. Then, by running the 'helm upgrade' command, I add a FlashArray to scale the environment and take advantage of Smart Provisioning; at first we see the new m50 is not used over the original m70. The final upgrade adds labels for the failure domain or availability zone in Kubernetes, and I also add my FlashBlade to enable block and file if needed for my workload. We use the sample application with node and storage selectors to request that the app use compute and storage in a particular AZ. Kubernetes will only schedule the compute on matching nodes, and PSO will provision storage on matching storage arrays. A rough sketch of the commands behind that flow is below.
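
This is illustrative only; the release name, node name, and zone label are examples, and values.yaml is assumed to already list the new arrays and their labels (similar to the sample values.yaml shown earlier):

# pick up the new arrays and labels from values.yaml
helm upgrade pso pure/pure-k8s-plugin -f values.yaml

# label a worker node with an availability zone so the sample app's
# nodeSelector lands the compute in that zone
kubectl label node k8s-worker-1 failure-domain.beta.kubernetes.io/zone=az-1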

I would love to hear what you think of this and any other ways I can show this off to enable cloud native applications. I am always looking for good examples of containerized apps that need persistent storage. Hit me up on the twitters @jon_2vcps or submit a comment below.

Kubecon 2018 Seattle Pure Storage – also We are hiring

I will be at the Pure Storage booth at KubeCon next week, December 11-13, booth G7. Come see us to learn about Pure Service Orchestrator and Cloud Block Store for AWS, and find out how our customers are leveraging K8s to transform their applications and Pure Storage for their persistent storage needs.

It has been a fun time (nearly two years) at Pure working with our customers that already love Pure Storage for things like Oracle, SQL, and VMware as they move into the world of K8s and containers, and also helping customers that never used Pure before move from complicated or underperforming solutions for persistent storage to FlashArray or FlashBlade. With Cloud Block Store entering beta, and GA coming later next year, even more customers will want to see how to automate storage persistence on premises, in the public cloud, or in a hybrid model. All of that to say: if you are an architect looking to grow on our team, please find me at KubeCon. I want to meet you and learn why you love cloud, containers, Kubernetes, and automating all the things in between.

  • Send me a message on twitter @jon_2vcps
  • Find me at the Pure Booth
  • Stop me in the hall between sessions.

I look just like one of the following people: