OpenID Connect – Enabling Your Team

Hello! My name is Matt Halder and I've had some interesting experiences working in a variety of IT fields. I started out at a government contractor in Washington, D.C. as a Network Controller, moved my way up to Network Engineer, and finished as a Lead Technologist. From there, I headed westward to Denver, CO for an opportunity to work at Ping Identity as a Security Operations Engineer. Currently, I work at FullContact as a DevOps Engineer.

The FullContact team has been using Kubernetes in production for the last seven months as a way to reduce our overall cloud hosting costs and move away from IaaS vendor lock-in. Both the development and staging clusters were bootstrapped using kops. The largest barrier to adoption echoed throughout the development team was needing the ability to tail logs. When role-based access control was introduced in Kubernetes 1.6, providing access to the cluster without shared tokens, certs, or credentials became a reality. Here are the steps we used to enable OpenID Connect on Kubernetes.

When setting up an OpenID Connect provider, there are a few terms to be aware of. First is the "IdP", the identity provider; many technologies can serve as an identity provider, such as Active Directory, FreeIPA, Okta, Dex, or PingOne. Second is the "SP", the service provider; in this case the service provider is the Kubernetes API. The basic overview of an OpenID Connect workflow is this: the user authenticates to the IdP, the IdP returns a token to the user, and that token is now valid for any SP configured to trust the IdP that produced it.
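
To make the trust relationship concrete: each OIDC IdP publishes a discovery document advertising its endpoints and token-signing keys, and the SP uses that document to validate tokens without calling back to the IdP on every request. A quick way to peek at Google's (assuming curl and a stock Python on your workstation):

curl -s https://accounts.google.com/.well-known/openid-configuration | python -m json.tool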

  1. Set up your IdP with an openid-connect endpoint and acquire the credentials.
  2. Configure the SP [aka configure the API server] to accept OpenID Connect tokens, and include a super-user flag so that the existing setup continues to work throughout the change.
  3. Generate kubeconfig file including oidc user config.
  4. Create role bindings for users on the cluster.
  5. Ensure all currently deployed services have role bindings associated with them.

Step 1: Set up the IdP

Since G Suite is already in place, we had an IdP that could be used for the organization. The added benefit is that this IdP is well documented and supported right out of the box; the caveat is that there is no support for groups, so each user will need their own role binding on the cluster.

  • Navigate to https://console.developers.google.com/projectselector/apis/library.
  • From the drop-down, create a new project.
  • In the sidebar, under APIs & services, select Credentials.
  • Select the OAuth consent screen (middle tab in the main view), select an email, choose a product name, and press save.
  • This will take you back to the Credentials tab. Select OAuth client ID from the drop-down.
  • For application type, select Other and give it a unique name.
  • Copy the client ID and client secret, or download the JSON (the download is under OAuth 2.0 client IDs on the right-most side); see the snippet below for the JSON shape.
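
If you take the download route, the JSON for an "Other"-type client should nest the values under an "installed" key. A one-liner to pull out the two fields needed later (jq and the exact file name are assumptions):

jq '.installed | {client_id, client_secret}' client_secret_XXXXXX.json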

Step 2: Configure the SP [aka configure API Server] to accept OIDC tokens

Kops now has the ability to add pre-install and post-install hooks for OpenID Connect. If we were starting from scratch, this is the route we would explore. However, adding these hooks didn't trigger any updates, and forcing a rolling update on a system running production traffic was too risky; it was also untested, since staging had been updated before this functionality was introduced.

Kubelet loads core manifests from a local path; on kops clusters, that path is /etc/kubernetes/manifests. This directory stores the kube-apiserver manifest file that tells kubelet how to deploy the API server as a pod. Editing this file will trigger kubelet to re-deploy the API server with the new configuration. Note that this operation is much riskier on a single-master cluster than on a multi-master cluster.

  • Copy the original kube-apiserver.manifest.
  • Edit kube-apiserver.manifest, adding these lines (a sketch of the full edit workflow follows this list):

--authorization-mode=RBAC
--authorization-rbac-super-user=admin
--oidc-client-id=XXXXXX-XXXXXXXXXXX.apps.googleusercontent.com
--oidc-issuer-url=https://accounts.google.com
--oidc-username-claim=email
  • Kubelet should re-deploy the API server within a couple of minutes of the manifest being edited.
  • Ensure that the network overlay/CNI is functioning properly before proceeding; not all overlays shipped with service accounts and role bindings, which caused some issues for early adopters of Kubernetes 1.6. (Personally, I had to generate a blank ConfigMap for Calico, since it would fail if one wasn't found.)
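
As a minimal sketch of the edit workflow on a master node (paths assume kops defaults), note that the backup copy must live outside /etc/kubernetes/manifests, since kubelet treats every file in that directory as a pod manifest:

sudo cp /etc/kubernetes/manifests/kube-apiserver.manifest /root/kube-apiserver.manifest.bak
sudo vi /etc/kubernetes/manifests/kube-apiserver.manifest   # add the flags above

# Confirm the API server came back with the new flags
kubectl get pods -n kube-system | grep kube-apiserver
ps aux | grep kube-apiserver | grep oidc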

Step 3: Generating a kubeconfig file

This process is broken into two parts: the first is to generate the cluster and context portions of the config, while the second is having the user acquire their OpenID Connect tokens and add them to the kubeconfig.

  • While opinions will vary, I've opted to skip TLS verification in the kubeconfig. My reasoning is that verification would require a CA infrastructure to generate certs per user, which isn't in place.
  • There's a bit of a chicken-and-egg situation here, where kubectl needs to be installed so that a kubeconfig can be generated for kubectl (although that's how ethereumwallet is installed, so maybe it's just me). Either way, this script can be edited with the correct context and endpoints to generate the first half of the kubeconfig:
#!/usr/bin/env bash

set -e

USER=$1

if [ -z "$USER" ]; then
  echo "usage: $0 <email-address>"
  exit 1
fi

echo "setting up cluster for user '$USER'"

# Install kubectl dependency
source $(dirname $(readlink -f "$0"))/install_kubectl.sh 1.6.8

# Set kubeconfig location to the current user's home directory
export KUBECONFIG=~/.kube/config

# Set cluster configs
kubectl config set-cluster cluster.justfortesting.org \
  --server=https://api.cluster.justfortesting.org \
  --insecure-skip-tls-verify=true

# Set kubeconfig context
kubectl config set-context cluster.justfortesting.org \
  --cluster=cluster.justfortesting.org \
  --user="$USER"

kubectl config use-context cluster.justfortesting.org
  • To generate the second part of the kubeconfig, use k8s-oidc-helper from here to generate the user portion and append the output to the bottom of the config file (a sketch of the equivalent kubectl command follows). Now, with a functioning kubeconfig, the user needs a role binding present in the cluster to have access. The IdP client ID and client secret will need to be made available to users so they can generate their OpenID Connect tokens; I've had good success with LastPass for this purpose.
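
For reference, the user entry that k8s-oidc-helper appends can also be written by hand with kubectl's built-in oidc auth provider; the angle-bracket values below are placeholders to fill in from your IdP:

kubectl config set-credentials "$USER" \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://accounts.google.com \
  --auth-provider-arg=client-id=XXXXXX-XXXXXXXXXXX.apps.googleusercontent.com \
  --auth-provider-arg=client-secret=<client-secret> \
  --auth-provider-arg=id-token=<id-token> \
  --auth-provider-arg=refresh-token=<refresh-token>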

Step 4: User Role Bindings

  • Now, create a default role that users can bind to. The example below grants the ability to list pods and read their logs in the default namespace.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: developer-default-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
  • Now, bind users to this role (notice the very last line has to be identical to the G Suite email address used in Step 3).
  • At our organization, these files are generated by our team members and then approved via GitHub pull request. Once the PR has been merged into master, the role bindings become active on the clusters via a Jenkins job.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ${USER}@organization.tld-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer-default-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: $USER@organization.tld
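
With both manifests applied, access can be sanity-checked before handing over the kubeconfig (the file names here are assumptions):

kubectl apply -f developer-default-role.yaml
kubectl apply -f user-rolebinding.yaml

# As the end user, with the oidc kubeconfig from Step 3:
kubectl get pods
kubectl logs <pod-name>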

Step 5: Existing tooling needs an account and binding

The last step is necessary for any existing tooling in the cluster to ensure continued functionality. The "--authorization-rbac-super-user=admin" flag from Step 2 was added to ensure continuity throughout the process. We use Helm to deploy foundational charts into the cluster; Helm uses a pod called "tiller" on the cluster to receive specs from the Helm SDK and communicate them to the API server, scheduler, and controller-manager. For foundational tooling such as this, use service accounts and cluster role bindings.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rolebinding
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
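
After applying the manifests above, tiller itself needs to be told to run under the new service account; with Helm 2.x that is a single flag on helm init (assuming a Helm version recent enough to support it):

kubectl apply -f tiller-rbac.yaml   # the ServiceAccount + ClusterRoleBinding above; file name is an assumption
helm init --service-account tiller --upgrade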


Kubernetes Authentication – OpenID Connect

Authentication is often the last thing you decide to implement, right before you go to production and realize the security audit is going to block your staging or, more likely, production deploy. It's the thing everyone recognizes as extremely important, yet never manages to factor into the prototype/PoC. It's the piece of the pie that could literally break an entire project with a single security incident, but we somehow manage to accept Basic Authentication as 'good enough'.

Now I'm not going to tell you I'm any different. In fact, it's quite the opposite. What's worse is I've got little to no excuse: I worked at Ping Identity, for crying out loud. After as many incidents as I've heard of happening without good security, you would think I'd have learned my lesson by now. But no, I put it off for quite some time in Kubernetes, accepting Basic Authentication to secure our future. That is, until now.

Caveat: There is a fair amount of complexity here, so if you find I've missed something important, PLEASE let me know in the comments so others can benefit.


Currently there are four authentication methods that can be used in Kubernetes. Notice I did NOT say authorization methods. Here is a very quick summary.

  • Client Certificate Authentication – Fairly static, even though multiple certificate authorities can be used. This requires a new client cert to be generated per user.
  • Token File Authentication – Static in nature. Tokens are all stored in a file on the host. No TTL. The list of tokens can only be changed by modifying the file and restarting the API server (see the sketch after this list).
  • Basic Authentication – Need I say more? Very similar to htpasswd.
  • OpenID Connect Authentication – The only solution with the possibility of being SSO-based and allowing for dynamic user management.
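
To make the "static" part concrete, the token file is literally a CSV that the API server reads once at startup (passed via --token-auth-file); something like the sketch below. Change a line and you are restarting the API server:

# token,user,uid,"group1,group2"
s3cr3tt0k3n,alice,1001,"developers"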

Authentication within Kubernetes is still very much in its infancy, and there is a ton to do in this space, but with OpenID Connect we can create an acceptable solution with other open source tools.

One of those solutions is a combination of mod_auth_openidc and Keycloak.

mod_auth_openidc – an authentication/authorization module for Apache 2.x created by Ping Identity.

Keycloak – Integrated SSO and IDM for browser apps and RESTful web services.

Now to be clear, if you were running OpenShift (Red Hat's spin on Kubernetes), this process would be a bit simpler, as Keycloak was recently acquired by Red Hat and they have put a lot of effort into integrating the two.


The remainder of this blog assumes no OpenShift is in play and that we are running vanilla Kubernetes 1.2.2+.

The high level:

Apache server

  1. mod_auth_openidc installed on apache server from here
  2. mod_proxy loaded
  3. mod_ssl loaded
  4. ‘USE_X_FORWARDED_HOST = True’ is added to /usr/lib/python2.7/site-packages/cloudinit/settings.py if using Python 2.7ish

Kubernetes API server

  1. configure Kubernetes for OpenID Connect

Keycloak

  1. Set up a really basic realm for Kubernetes


Keycloak Configuration:

This walk-through assumes you have a Keycloak server created already.

For information on deploying a Keycloak server, their documentation can be found here.

First let's add a realm called "Demo".

[Screenshot: creating the "Demo" realm]

Now let's create a client called "Kubernetes".

[Screenshot: creating the "Kubernetes" client]

[Screenshot: client settings, including "Valid Redirect URIs"]

Notice in the image above that the "Valid Redirect URIs" must be the Apache_domain_URL + /redirect_uri, provided you are using my templates or the docker image I've created.


Now, within the Kubernetes client, let's create a role called "user".

[Screenshot: creating the "user" role]


And finally, for testing, let's create a user in Keycloak.

[Screenshot: creating a test user with the email field set]

Notice how I have set the email when creating the user. This is because I've set email as the OIDC username claim in Kubernetes:

- --oidc-username-claim=email

AND the following in the Apache server:

OIDCRemoteUserClaim email
OIDCScope "openid email"

If you should choose to allow users to register with Keycloak, I highly recommend you make email *required* if using this blog as a resource.


Apache Configuration:

First, let's configure our Apache server, or better yet just spin up a docker container.

To spin up a separate server, do the following:

  1. mod_auth_openidc installed on apache server from here
  2. mod_proxy loaded
  3. mod_ssl loaded
  4. ‘USE_X_FORWARDED_HOST = True’ is added to /usr/lib/python2.7/site-packages/cloudinit/settings.py if using Python 2.7ish
  5. Configure auth_openidc.conf and place it at /etc/httpd/conf.d/auth_openidc.conf on CentOS.
    1. Reference the Readme here for config values.

To spin up a container:

Run a docker container with the environment variables set. This Readme briefly explains each environment variable, and the following template can be copied from here.

<VirtualHost _default_:443>
   SSLEngine on
   SSLProxyEngine on
   SSLProxyVerify ${SSLPROXYVERIFY}
   SSLProxyCheckPeerCN ${SSLPROXYCHECKPEERCN}
   SSLProxyCheckPeerName ${SSLPROXYCHECKPEERNAME}
   SSLProxyCheckPeerExpire ${SSLPROXYCHECKPEEREXPIRE}
   SSLProxyMachineCertificateFile ${SSLPROXYMACHINECERT}
   SSLCertificateFile ${SSLCERT}
   SSLCertificateKeyFile ${SSLKEY}

  OIDCProviderMetadataURL ${OIDCPROVIDERMETADATAURL}

  OIDCClientID ${OIDCCLIENTID}
  OIDCClientSecret ${OIDCCLIENTSECRET}

  OIDCCryptoPassphrase ${OIDCCRYPTOPASSPHRASE}
  OIDCScrubRequestHeaders ${OIDCSCRUBREQUESTHEADERS}
  OIDCRemoteUserClaim email
  OIDCScope "openid email"

  OIDCRedirectURI https://${REDIRECTDOMAIN}/redirect_uri

  ServerName ${SERVERNAME}
  ProxyPass / https://${SERVERNAME}/

  <Location "/">
    AuthType openid-connect
    #Require claim sub:email
    Require valid-user
    RequestHeader set Authorization "Bearer %{HTTP_OIDC_ACCESS_TOKEN}e" env=HTTP_OIDC_ACCESS_TOKEN
    LogLevel debug
  </Location>

</VirtualHost>
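
As a hedged sketch of running that template as a container (the image name is an assumption and the SSL-related variables are omitted for brevity; the Readme above explains each one):

docker run -d -p 443:443 \
  -e OIDCPROVIDERMETADATAURL=https://keycloak_domain/auth/realms/demo/.well-known/openid-configuration \
  -e OIDCCLIENTID=kubernetes \
  -e OIDCCLIENTSECRET=<client-secret> \
  -e OIDCCRYPTOPASSPHRASE=<random-passphrase> \
  -e OIDCSCRUBREQUESTHEADERS=On \
  -e SERVERNAME=<kubernetes-api-domain> \
  -e REDIRECTDOMAIN=<apache-domain> \
  <your-mod-auth-openidc-image>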

Feel free to use the openidc.yaml as a starting point if deploying in Kubernetes.


Kubernetes API Server:

kube-apiserver.yaml

    - --oidc-issuer-url=https://keycloak_domain/auth/realms/demo
    - --oidc-client-id=kubernetes
    - --oidc-ca-file=/path/to/ca.pem
    - --oidc-username-claim=email

oidc-issuer-url

  • substitute keycloak_domain with the IP or domain of your Keycloak server
  • substitute 'demo' with the Keycloak realm you set up

oidc-client-id

  • the same client ID as is set in Apache

oidc-ca-file

  • this is a shared CA between Kubernetes and Keycloak


OK, so congrats! You should now be able to hit the Kubernetes Swagger UI with Keycloak/OpenID Connect authentication.

[Screenshot: Kubernetes Swagger UI behind the Keycloak login]

And you might be thinking to yourself about now, why the hell would I authenticate to Kubernetes through a web console?

Well, remember how the kube-proxy can proxy requests through the Kubernetes API endpoint to various UIs, like, say, Kube-UI? Tada. Now you can secure them properly.
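
For example, with the Apache gateway in front, a UI proxied through the API server is only reachable after the OIDC login; the proxy path below follows the 1.2-era convention and is an assumption based on the cluster version above:

https://<apache_domain>/api/v1/proxy/namespaces/kube-system/services/kube-ui/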

Today all we have done is build authentication. Albeit pretty cool, because we have gotten ourselves out of statically managed tokens, certs, or Basic Auth, but we haven't factored in authorization. In a future post, we'll look at authorization and how to do it dynamically through webhooks.