CrashLoopBackoff

Managing Multiple Kubernetes Clusters

There was a question on Twitter, and I thought I would write down my process for others to learn from. First, a little background. Kubernetes is managed mostly using a tool called kubectl (kube-control, kube-cuddle, kube-C-T-L, whatever). This tool looks for a configuration file that tells it how to talk to the API of each Kubernetes cluster. A sanitized sample can be seen by running:

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.140:6443
  name: k8s-dev-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.130:6443
  name: k8s-lab-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.150:6443
  name: k8s-prod-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.21.142.160:6443
  name: k8s-ubt18
contexts:
- context:
    cluster: k8s-ubt18
    user: I-AM-GROOT
  name: I-AM-GROOT@k8s-ubt18
- context:
    cluster: k8s-dev-1
    user: k8s-dev-1-admin
  name: k8s-dev-1-admin@k8s-dev-1
- context:
    cluster: k8s-lab-1
    user: k8s-lab-1-admin
  name: k8s-lab-1-admin@k8s-lab-1
- context:
    cluster: k8s-prod-1
    user: k8s-prod-1-admin
  name: k8s-prod-1-admin@k8s-prod-1
current-context: I-AM-GROOT@k8s-ubt18
kind: Config
preferences: {}
users:
- name: I-AM-GROOT
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s-dev-1-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s-lab-1-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s-prod-1-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

You can see there are clusters, contexts, and users. The commands kubectl config get-contexts and kubectl config use-context let you list and switch between contexts. In my use case I have a single context per cluster.

kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         I-AM-GROOT@k8s-ubt18          k8s-ubt18    I-AM-GROOT         
          k8s-dev-1-admin@k8s-dev-1     k8s-dev-1    k8s-dev-1-admin    
          k8s-lab-1-admin@k8s-lab-1     k8s-lab-1    k8s-lab-1-admin    
          k8s-prod-1-admin@k8s-prod-1   k8s-prod-1   k8s-prod-1-admin
kubectl config use-context k8s-dev-1-admin@k8s-dev-1
Switched to context "k8s-dev-1-admin@k8s-dev-1".
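
As an aside, a context is just a named pairing of a cluster and a user (plus an optional default namespace), so you can mint a new one without editing the file by hand. A quick sketch; the short name dev here is made up:

kubectl config set-context dev --cluster=k8s-dev-1 --user=k8s-dev-1-admin --namespace=default
kubectl config use-context dev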

Switching this way became cumbersome, so I now use a tool called kubectx, along with kubens: https://github.com/ahmetb/kubectx. Below you can see that my prompt shows my cluster plus the namespace: “k8s-dev-1-admin@k8s-dev-1:default”. Pretty sweet to see that, and it has saved me from removing deployments from the wrong cluster.

(base) (⎈ |k8s-dev-1-admin@k8s-dev-1:default)owings@owings--MacBookPro15:~/Dropbox/gitproj/cattle-clusters$ 
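
kubens, which ships alongside kubectx, does the same trick for namespaces within the current context. Roughly what a session looks like (output reproduced from memory, so treat it as approximate):

kubens
default
kube-public
kube-system
kubens kube-system
Context "k8s-dev-1-admin@k8s-dev-1" modified.
Active namespace is "kube-system".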

Now, kubectl looks in your environment for a variable named KUBECONFIG. Many times this will be set to KUBECONFIG=~/.kube/config. If you modify your .bash_profile on macOS or .bashrc on Ubuntu (and others), you can point that variable anywhere. I formerly had it pointed at a separate file for each cluster. For example:

KUBECONFIG=~/.kube/config.prod:~/.kube/config.dev:~/.kube/config.lab
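
One gotcha: in a profile file the variable needs to be exported, or child processes like kubectl won't see it. Something like:

# In ~/.bash_profile (macOS) or ~/.bashrc (Ubuntu)
export KUBECONFIG=~/.kube/config.prod:~/.kube/config.dev:~/.kube/config.lab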

This worked great, but a few third-party management tools had issues switching between multiple files; for me the big one was the Kubernetes module for Python. So I moved to a single combined config file at ~/.kube/config.
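
If you ever need to build that combined file from the per-cluster files in one shot, kubectl can do the merge itself. A minimal sketch, reusing the paths from above; --flatten inlines the certificate data so the result is self-contained:

KUBECONFIG=~/.kube/config.prod:~/.kube/config.dev:~/.kube/config.lab \
  kubectl config view --flatten > ~/.kube/config
# Safe to redirect here because ~/.kube/config is not one of the source files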

So what do I do now?

so many configs

Here is my basic workflow. I don’t automate it yet, as I don’t want to carelessly overwrite something.
1. Run an Ansible playbook that grabs the admin.conf file from /etc/kubernetes on the masters of the cluster (a sketch of the playbook follows this list).
2. Manually set the KUBECONFIG environment variable: export KUBECONFIG=~/.kube/config:~/latestconfig/new.config
3. Run kubectl config view --raw to make sure it is all there; the --raw flag unhides the keys and certificate data.
4. COPY the ~/.kube/config to ~/.kube/config.something as a backup.
5. Run kubectl config view --raw redirected to a temporary file, then move that file over ~/.kube/config. (Redirecting straight into ~/.kube/config would truncate it before kubectl reads it, and the original entries would be lost.)
6. Open a new terminal, which picks up my original KUBECONFIG variable, and make sure all the clusters show up.
7. Clean up the old configs if I am feeling extra clean.
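
The playbook itself is nothing fancy; it just pulls the admin kubeconfig down to my workstation. A minimal sketch of what mine does (the hosts group and destination path are assumptions chosen to match the example below):

# get-me-some-key.yml - fetch the admin kubeconfig from the masters
- hosts: masters
  become: yes
  tasks:
    - name: Grab admin.conf from /etc/kubernetes
      fetch:
        src: /etc/kubernetes/admin.conf
        dest: ~/latestconfig/new.config
        flat: yes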

Example on my Ubuntu clusters:

ansible-playbook -i inventory.ini -b -v get-me-some-key.yml -u ubuntu
export KUBECONFIG=~/.kube/config:~/latestconfig/config.prod01
kubectl config view --raw
cp ~/.kube/config ~/.kube/config.10.02.2019
# Merge to a temp file, then move it into place; redirecting straight
# into ~/.kube/config would truncate it before kubectl reads it
kubectl config view --raw > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config

# In a new window
kubectl config view
kubectl get nodes
kubectl get pods

# Using kubectx, showing the output
kubectx
I-AM-GROOT@k8s-ubt18
k8s-dev-1-admin@k8s-dev-1
k8s-lab-1-admin@k8s-lab-1
k8s-prod-1-admin@k8s-prod-1

kubectx I-AM-GROOT@k8s-ubt18
Switched to context "I-AM-GROOT@k8s-ubt18".

Not really hard or too complicated. I destroy clusters pretty often, so sometimes I will blow away the combined config and then remerge my current clusters into a new config file, as sketched below.
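
That remerge is the same view-and-redirect trick, just without the old combined file in the chain. A quick sketch, with made-up per-cluster file names:

export KUBECONFIG=~/latestconfig/config.dev01:~/latestconfig/config.lab01
kubectl config view --raw > ~/.kube/config.rebuilt
mv ~/.kube/config.rebuilt ~/.kube/config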

https://giphy.com/gifs/guardians-of-the-galaxy-groot-baby-xUySTXEp8EhA9PG8j6