Kubernetes Python Clients – 1.2.2

I just created the Python Kubernetes Client for v1.2.2.

I’ve also added some additional information on how to generate your own client if you need or want to (a rough sketch follows the repo link below).

https://github.com/mward29/python-k8sclient-1-2-2
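If you want to generate your own, the general shape of it looks something like this. This is a rough sketch only, assuming you use swagger-codegen against the API server’s swagger endpoint; the repo README covers the exact steps.

# assumes swagger-codegen-cli.jar is already on hand and the API server is reachable
java -jar swagger-codegen-cli.jar generate \
  -i https://<your_apiserver>/swaggerapi/api/v1 \
  -l python \
  -o python-k8sclient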

 

Update:

Created AutoScaling and new beta extensions clients:

https://github.com/mward29/python-k8sclient-autoscaling-v1

https://github.com/mward29/python-k8sclient-v1beta1-v1.2.2

Enjoy!

Kubernetes – Scheduling and Multiple Availability Zones

The Kubernetes Scheduler is a very important part of the overall platform, but its functionality and capabilities are not widely known. Why? Because, for the most part, the scheduler just runs out of the box with little to no additional configuration.

So what does this thing do? It determines which server in the cluster a new pod should run on. Pretty simple, yet oh so complex. The scheduler has to answer questions like these very quickly:

How many resources (memory, CPU, disk) is this pod going to require?

What workers (minions) in the cluster have the resources available to manage this pod?

Are there external ports associated with this pod? If so, which hosts may already be using those ports?

Does the pod config have nodeSelector set? If so, which of the workers have a label fitting this requirement?

Has a weight been added to a given policy?

What affinity rules are in place for this pod?

What anti-affinity rules apply to this pod?

All of these questions and more are answered through two concepts within the scheduler: predicates and priority functions.

Predicates – as the name suggests, predicates are hard requirements: they filter the set of hosts on which a given pod is even eligible to run.

Priority functions – assign each remaining host a score between 0 and 10, with 0 being the worst fit and 10 the best.

Combined, these two concepts determine where a given pod will be hosted: predicates filter out ineligible hosts, then each priority function scores the survivors, and the host with the highest weighted sum of scores wins.

 

OK, so let’s look at the default configuration as of Kubernetes 1.2.

{
	"kind" : "Policy",
	"version" : "v1",
	"predicates" : [
		{"name" : "PodFitsPorts"},
		{"name" : "PodFitsResources"},
		{"name" : "NoDiskConflict"},
		{"name" : "MatchNodeSelector"},
		{"name" : "HostName"}
	],
	"priorities" : [
		{"name" : "LeastRequestedPriority", "weight" : 1},
		{"name" : "BalancedResourceAllocation", "weight" : 1},
		{"name" : "ServiceSpreadingPriority", "weight" : 1}
	]
}

 

The predicates listed perform the following actions. They are fairly self-explanatory, but I’m going to list their functions for posterity.

{“name” : “PodFitsPorts”} – Makes sure the pod doesn’t require ports that are already taken on hosts

{“name” : “PodFitsResources”} – Ensures CPU and memory are available on the host for the given pod

{“name” : “NoDiskConflict”} – Makes sure that, if the pod has local disk requirements, the host can fulfill them

{“name” : “MatchNodeSelector”} – If nodeSelector is set on the pod, only nodes whose labels match it are considered (see the example pod spec after this list)

{“name” : “HostName”} – Allows a pod to be pinned to a specific host by naming that host in the pod spec
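For example, a pod restricted to nodes carrying a hypothetical disk=ssd label would set nodeSelector like this (the name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  containers:
  - name: some-pod
    image: some_private_repo:8500/some-image
  nodeSelector:
    disk: ssd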

 

Priority Functions: These get a little bit interesting.

{“name” : “LeastRequestedPriority”, “weight” : 1} – Calculates the percentage of each node’s resources that would be requested after placing the pod and favors the least-loaded nodes.

{“name” : “BalancedResourceAllocation”, “weight” : 1} – Favors nodes where CPU and memory utilization would be most evenly balanced after placing the pod.

{“name” : “ServiceSpreadingPriority”, “weight” : 1} – Minimizes the number of pods belonging to the same service that land on the same host.

 

So here is where things start to get really cool with the scheduler. As of v1.2, Kubernetes has built-in support for spreading pods across multiple zones (Availability Zones in AWS). This works for both GCE and AWS. We run in AWS, so I’m going to show the config for that here; set up accordingly for GCE.

All you have to do in AWS is label your workers (minions) properly and Kubernetes will handle the rest. You must use these very specific labels. I will say, we added a little weight to ServiceSpreadingPriority to make sure Kubernetes gave higher priority to spreading pods across AZs.

kubectl label nodes <server_name> failure-domain.beta.kubernetes.io/region=$REGION
kubectl label nodes <server_name> failure-domain.beta.kubernetes.io/zone=$AVAIL_ZONE
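To double-check that the labels took, kubectl can print them as extra columns:

kubectl get nodes -L failure-domain.beta.kubernetes.io/region,failure-domain.beta.kubernetes.io/zone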

You’ll notice the label looks funny. ‘failure-domain’ made a number of my Ops colleagues cringe when they first saw it, before understanding its meaning. One of them happened to be looking at our newly created cluster and thought we already had an outage. My bad!

You will notice $REGION and $AVAIL_ZONE are variables we set.

We define $REGION in Terraform during the cluster build, but it looks like any typical AWS region.

REGION="us-west-2"

We derive the availability zone on the fly by having each EC2 instance query the instance metadata service via curl. The IP address (169.254.169.254) is the same link-local metadata endpoint on every EC2 instance, so you can literally copy this command and use it.

AVAIL_ZONE=`curl http://169.254.169.254/latest/meta-data/placement/availability-zone`

 

IMPORTANT NOTE: If you create a custom policy for the scheduler, you MUST include everything you want in it. The default predicates and priorities will no longer apply unless you place them in the config yourself. Here is our policy:

{
	"kind" : "Policy",
	"version" : "v1",
	"predicates" : [
		{"name" : "PodFitsPorts"},
		{"name" : "PodFitsResources"},
		{"name" : "NoDiskConflict"},
		{"name" : "MatchNodeSelector"},
		{"name" : "HostName"}
	],
	"priorities" : [
		{"name" : "ServiceSpreadingPriority", "weight" : 2},
		{"name" : "LeastRequestedPriority", "weight" : 1},
		{"name" : "BalancedResourceAllocation", "weight" : 1}
	]
}

 

And within the kube-scheduler.yaml config we have:

- --policy-config-file="/path/to/customscheduler.json"
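For context, here is a hedged sketch of how that flag might sit in a kube-scheduler static pod manifest; the image, the hyperkube entrypoint and the paths are assumptions, not our exact setup.

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: ###YOUR_SCHEDULER_IMAGE###
    command:
    - /hyperkube
    - scheduler
    - --master=127.0.0.1:8080
    - --policy-config-file=/path/to/customscheduler.json
    volumeMounts:
    - name: scheduler-policy
      mountPath: /path/to
      readOnly: true
  volumes:
  - name: scheduler-policy
    hostPath:
      path: ###/host/path/to/policy/dir###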

 

Alright, if that wasn’t enough, you can also write your own schedulers for Kubernetes. Personally I’ve not had to do this, but here is a link that can provide more information if you are interested.
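If you do go that route, a pod selects a non-default scheduler in these releases via an alpha annotation (the spec.schedulerName field came later). A minimal sketch, assuming a custom scheduler running under the hypothetical name my-scheduler:

apiVersion: v1
kind: Pod
metadata:
  name: some-pod
  annotations:
    scheduler.alpha.kubernetes.io/name: my-scheduler
spec:
  containers:
  - name: some-pod
    image: some_private_repo:8500/some-image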

 

And if you need more depth on Kubernetes scheduling, the best article I’ve seen written on it is from OpenShift. There you can find more information on affinity/anti-affinity, configurable predicates and configurable priority functions.

Kubernetes – Jobs

Ever want to run a recurring cron job in Kubernetes? Maybe you want to recursively pull an AWS S3 bucket or gather data by inspecting your cluster. How about running some analytics in parallel, or even running a series of tests to make sure the new deployment of your cluster was successful?

A Kubernetes Job might just be the answer.

So what exactly is a Job anyway? Basically, it’s a short-lived replication controller. A Job ensures that a task runs to successful completion even when faults in the infrastructure would otherwise cause it to fail. Consider it the fault-tolerant way of executing a one-time pod/request. Or better yet, cron with some brains. Oh, and speaking of which, you’ll actually be able to run Jobs at specific times and dates pretty soon, in Kubernetes 1.3.
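For the curious, the alpha resource slated for that is called ScheduledJob and wraps an ordinary Job template in a cron expression. A hedged sketch of roughly what it’s expected to look like (the API group and field names may still change before it ships; the name and image are placeholders):

apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: nightly-task
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: nightly-task
            image: some_private_repo:8500/sometool
          restartPolicy: Never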

For example:

I have a Cassandra cluster in Kubernetes and I want to run:

nodetool repair -pr -h <host_ip>

on every node in my 10-node Cassandra cluster. And because I’m smart, I’m going to run 10 different Jobs, one at a time, so I don’t overload my cluster during the repair.

Here be a yaml for you:

apiVersion: batch/v1
kind: Job
metadata:
  name: nodetool
spec:
  template:
    metadata:
      name: nodetool
    spec:
      containers:
      - name: nodetool
        image: some_private_repo:8500/nodetool
        command: ["/usr/bin/nodetool",  "repair", "-h", "$(cassandra_host_ip)"]
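      # Job pods must use restartPolicy Never or OnFailure; the usual default of Always is not allowed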
      restartPolicy: Never

A Kubernetes Job will ensure that each job runs through to successful completion. Pretty cool, huh? Now mind you, it’s not smart. It’s not checking whether nodetool repair was actually successful; it’s simply looking to see whether the pod exited successfully.
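You can watch that bookkeeping yourself; with the Job above it’s just:

kubectl get jobs
kubectl describe jobs/nodetool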

Another key point about Jobs is that they don’t just go away after they run, because you may want to check on the logs or status of the job afterwards. (Not that anyone would ever be smart and push that information to a log aggregation service.) Thus it’s important to remember to clean them up. Run a Job to clean up your Jobs? Yep. Do it. Just set up a Job to keep things tidy. Odd, I know, but it works.

kubectl delete jobs/nodetool

Now let’s imagine I’m a bit sadistic and I want to run all my ‘nodetool repair’ jobs in parallel. Well, that can be done too. Aaaannnnd let’s imagine that I have a list of all the Cassandra nodes I want to repair sitting in a queue somewhere.

I could execute the nodetool repair Job and simply scale up the number of replicas. As long as each pod can pull the next Cassandra host off the queue, I could literally run multiple repairs in parallel (the Job API can also express this directly; see the sketch after the scale command below). Now, my Cassandra cluster might not like that much, and I may or may not have done something like this before, but… well… we’ll just leave that alone.

kubectl scale --replicas=10 jobs/nodetoolrepair
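For completeness, batch/v1 Jobs can also declare this up front via .spec.parallelism and .spec.completions instead of scaling by hand. A rough sketch, reusing the hypothetical nodetool image from above:

apiVersion: batch/v1
kind: Job
metadata:
  name: nodetoolrepair
spec:
  parallelism: 10   # run up to 10 pods at once
  completions: 10   # the Job is done after 10 successful pods
  template:
    metadata:
      name: nodetoolrepair
    spec:
      containers:
      - name: nodetool
        image: some_private_repo:8500/nodetool
        command: ["/usr/bin/nodetool", "repair", "-h", "$(cassandra_host_ip)"]
      restartPolicy: Never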

There is a lot more to Jobs than just this, but it should give you an idea of what can be done. If you find yourself in a mire of complexity trying to figure out how to run some complex job, head back to the source: Kubernetes Jobs. I think I reread this link 5 times before I grokked all of it. OK, maybe it was 10. Or so. Oh fine, I still don’t get it all.

To see jobs that are hanging around:

kubectl get pods -a

 

@devoperandi

Vault in Kubernetes – Take 2

A while back I wrote about how we use Vault in Kubernetes, and recently a good Samaritan brought it to my attention that so much has changed with our implementation that I should write an updated post about our current setup.

Again congrats to Martin Devlin for all the effort he has put in. Amazing engineer.

So here goes. Please keep in mind, I’ve intentionally abstracted various things out of these files. You won’t be able to copy and paste to stand up your own. This is meant to provide insight into how you could go about it.

If it has ###SOMETHING###, it’s been abstracted.

If it has %%something%%, we use another script that replaces those with real values. This will be far less necessary in Kubernetes 1.3, when we can begin using variables in config files. NICE!

Also understand, I am not providing all of the components we use to populate policies, create tokens, initialize Vault, load secrets, etc. Those are things I’m not comfortable providing at this time.

Here is our most recent Dockerfile for Vault:

FROM alpine:3.2
MAINTAINER 	Martin Devlin <martin.devlin@pearson.com>

ENV VAULT_VERSION    0.5.2
ENV VAULT_HTTP_PORT  ###SOME_HIGH_PORT_HTTP###
ENV VAULT_HTTPS_PORT ###SOME_HIGH_PORT_HTTPS###

COPY config.json /etc/vault/config.json

RUN apk --update add openssl zip \
&& mkdir -p /etc/vault/ssl \
&& wget http://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip \
&& unzip vault_${VAULT_VERSION}_linux_amd64.zip \
&& mv vault /usr/local/bin/ \
&& rm -f vault_${VAULT_VERSION}_linux_amd64.zip

EXPOSE ${VAULT_HTTP_PORT}
EXPOSE ${VAULT_HTTPS_PORT}

COPY /run.sh /usr/bin/run.sh
RUN chmod +x /usr/bin/run.sh

ENTRYPOINT ["/usr/bin/run.sh"]
CMD []

Same basic Docker image built on Alpine. Not too much has changed here other than some ports and the version of Vault, and we have added a config.json so we can dynamically configure the Consul backend and set our listeners.

Let’s have a look at config.json:

### Vault config

backend "consul" {
  address = "%%CONSUL_HOST%%:%%CONSUL_PORT%%"
  path = "vault"
  advertise_addr = "https://%%VAULT_IP%%:%%VAULT_HTTPS_PORT%%"
  scheme = "%%CONSUL_SCHEME%%"
  token = %%CONSUL_TOKEN%%
  tls_skip_verify = 1
}

listener "tcp" {
  address = "%%VAULT_IP%%:%%VAULT_HTTPS_PORT%%"
  tls_key_file = "/###path_to_key##/some_vault.key"
  tls_cert_file = "/###path_to_crt###/some_vault.crt"
}

listener "tcp" {
  address = "%%VAULT_IP%%:%%VAULT_HTTP_PORT%%"
  tls_disable = 1
}

disable_mlock = true

We dynamically configure config.json with:

CONSUL_HOST = Kubernetes Consul Service IP

CONSUL_PORT = Kubernetes Consul Service Port

CONSUL_SCHEME = HTTPS OR HTTP for connection to Consul

CONSUL_TOKEN = ACL TOKEN to access Consul

VAULT_IP = the Vault pod IP (from hostname -i in run.sh)

VAULT_HTTPS_PORT = Vault HTTPS Port

VAULT_HTTP_PORT = Vault HTTP Port

 

run.sh, however, has changed significantly. We’ve added SSL support and cleaned things up a bit. We are working on another project to transport the keys outside the cluster, but for now that is a manual process after everything is stood up. Our intent moving forward is to store this information in what we call ‘the brain’ and give different people access to each key. Maybe sometime in the next few months I can talk more about that.

#!/bin/sh
if [ -z ${VAULT_HTTP_PORT} ]; then
  export VAULT_HTTP_PORT=###SOME_HIGH_PORT_HTTP###
fi
if [ -z ${VAULT_HTTPS_PORT} ]; then
  export VAULT_HTTPS_PORT=###SOME_HIGH_PORT_HTTPS###
fi

if [ -z ${CONSUL_SERVICE_HOST} ]; then
  export CONSUL_SERVICE_HOST="127.0.0.1"
fi

if [ -z ${CONSUL_SERVICE_PORT_HTTPS} ]; then
  export CONSUL_HTTP_PORT=SOME_CONSUL_PORT
else
  export CONSUL_HTTP_PORT=${CONSUL_SERVICE_PORT_HTTPS}
fi

if [ -z ${CONSUL_SCHEME} ]; then
  export CONSUL_SCHEME="https"
fi

if [ -z ${CONSUL_TOKEN} ]; then
  export CONSUL_TOKEN=""
else
  CONSUL_TOKEN=`echo ${CONSUL_TOKEN} | base64 -d`
fi

# Use a provided SSL key/cert if present; otherwise generate a self-signed cert below
if [ ! -z "${VAULT_SSL_KEY}" ] && [ ! -z "${VAULT_SSL_CRT}" ]; then
  echo "${VAULT_SSL_KEY}" | sed -e 's/\"//g' | sed -e 's/^[ \t]*//g' | sed -e 's/[ \t]$//g' > /etc/vault/ssl/vault.key
  echo "${VAULT_SSL_CRT}" | sed -e 's/\"//g' | sed -e 's/^[ \t]*//g' | sed -e 's/[ \t]$//g' > /etc/vault/ssl/vault.crt
else
  openssl req -x509 -newkey rsa:2048 -nodes -keyout /etc/vault/ssl/vault.key -out /etc/vault/ssl/vault.crt -days 365 -subj "/CN=vault.kube-system.svc.cluster.local" 
fi

export VAULT_IP=`hostname -i`

sed -i "s,%%CONSUL_HOST%%,$CONSUL_SERVICE_HOST,"   /etc/vault/config.json
sed -i "s,%%CONSUL_PORT%%,$CONSUL_HTTP_PORT,"      /etc/vault/config.json
sed -i "s,%%CONSUL_SCHEME%%,$CONSUL_SCHEME,"       /etc/vault/config.json
sed -i "s,%%CONSUL_TOKEN%%,$CONSUL_TOKEN,"         /etc/vault/config.json
sed -i "s,%%VAULT_IP%%,$VAULT_IP,"                 /etc/vault/config.json
sed -i "s,%%VAULT_HTTP_PORT%%,$VAULT_HTTP_PORT,"   /etc/vault/config.json
sed -i "s,%%VAULT_HTTPS_PORT%%,$VAULT_HTTPS_PORT," /etc/vault/config.json

cmd="vault server -config=/etc/vault/config.json $@;"

if [ ! -z ${VAULT_DEBUG} ]; then
  ls -lR /etc/vault
  cat /###path_to_/vault.crt###
  cat /etc/vault/config.json
  echo "${cmd}"
  sed -i "s,INFO,DEBUG," /etc/vault/config.json
fi

## Master stuff

master() {

  vault server -config=/etc/vault/config.json $@ &

  # First run only: initialize Vault, capture the init output, and unseal with the first three keys
  if [ ! -f ###/path_to/something.txt### ]; then

    export VAULT_SKIP_VERIFY=true
    
    export VAULT_ADDR="https://${VAULT_IP}:${VAULT_HTTPS_PORT}"

    vault init -address=${VAULT_ADDR} > ###/path_to/something.txt###

    export VAULT_TOKEN=`grep 'Initial Root Token:' ###/path_to/something.txt### | awk '{print $NF}'`
    
    vault unseal `grep 'Key 1:' ###/path_to/something.txt### | awk '{print $NF}'`
    vault unseal `grep 'Key 2:' ###/path_to/something.txt### | awk '{print $NF}'`
    vault unseal `grep 'Key 3:' ###/path_to/something.txt### | awk '{print $NF}'`

  fi

}

case "$1" in
  master)           master $@;;
  *)                exec vault server -config=/etc/vault/config.json $@;;
esac

Alright, now that we have our image, let’s have a look at how we deploy it. With SSL in place and some good ACLs, we expose Vault externally to the cluster but still internally to our environment. This allows us to automatically populate Vault with secrets, keys and certs from various sources while still providing a high level of security.

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: kube-system
  labels:
    name: vault
spec:
  ports:
    - name: vaultport
      port: ###SOME_VAULT_PORT_HERE###
      protocol: TCP
      targetPort: ###SOME_VAULT_PORT_HERE###
    - name: vaultporthttp
      port: 8200
      protocol: TCP
      targetPort: 8200
  selector:
    app: vault

Ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vault
  namespace: kube-system
  labels:
    ssl: "true"
spec:
  rules:
  - host: ###vault%%ENVIRONMENT%%.somedomain.com###
    http:
      paths:
      - backend:
          serviceName: vault
          servicePort: ###SOME_HIGH_PORT_HTTPS###
        path: /

 

replicationcontroller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: vault
  namespace: kube-system
spec:
  replicas: 3
  selector:
    app: vault
  template:
    metadata:
      labels:
        pool: vaultpool
        app: vault
    spec:
      containers:
        - name: vault
          image: '###BUILD_YOUR_IMAGE_AND_PUT_IT_HERE###'
          imagePullPolicy: Always
          env:
            - name: CONSUL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: vault-mgmt
                  key: vault-mgmt
            - name: "VAULT_DEBUG"
              value: "false"
            - name: "VAULT_SSL_KEY"
              valueFrom:
                secretKeyRef:
                  name: ###MY_SSL_KEY###
                  key: ###key###
            - name: "VAULT_SSL_CRT"
              valueFrom:
                secretKeyRef:
                  name: ###MY_SSL_CRT###
                  key: ###CRT###
          readinessProbe:
            httpGet:
              path: /v1/sys/health
              port: 8200
            initialDelaySeconds: 10
            timeoutSeconds: 1
          ports:
            - containerPort: ###SOME_VAULT_HTTPS_PORT###
              name: vaultport
            - containerPort: 8200
              name: vaulthttpport
      nodeSelector:
        role: minion

WARNING: Remember to add your volume mounts and such for the Kubernetes Secrets associated with the Vault SSL crt and key.
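For reference, a hedged sketch of what those mounts could look like; the secret name and mount path are placeholders, not our actual values. The volumeMounts block goes under the vault container, and the volumes block under the pod spec:

          volumeMounts:
            - name: vault-ssl
              mountPath: ###/path_to_ssl_mount###
              readOnly: true

      volumes:
        - name: vault-ssl
          secret:
            secretName: ###MY_SSL_SECRET###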

 

As you can see, significant improvements made to how we build Vault in Kubernetes. I hope this helps in your own endeavors.

Feel free to reach out on Twitter or through the comments.