Open Source – Environment Operator

The day has finally come. Today we are announcing our open source project Environment Operator (EO).

Environment Operator is used throughout our project and has rapidly gained a name for itself as being well written and well thought out. Props go out to Simas Cepaitis, Cristian Radu and Ben Somogyi, who have all contributed.

At its core, EO enables a seamless application deployment capability for a given environment/namespace within Kubernetes.

Benefits/Features:

  • multi-cluster deployments
  • audit trail
  • status
  • consistent definition of customer environments
  • separate build from deploy
  • minimizes risk and scope of impact
  • simple abstraction from Kubernetes
  • BYO CI/CD
  • empowers our customers (dev teams)
  • API interface
  • multiple forms of authentication
  • deploy through yaml config and API
  • written in Go
  • Docker Registries

 

Multi-Cluster Deployments – With EO running in each namespace and exposed via an API, CI/CD pipelines can simply call the relevant API endpoint, regardless of which Kubernetes cluster it lives in, and deploy new services.
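
As a sketch of what a pipeline step might look like (the hostnames, /deploy path and JSON body below are illustrative, not EO’s documented contract), the same call shape works no matter which cluster the environment lives in:

# Deploy the same service through EO in two different clusters;
# hostnames, the endpoint path and the payload are illustrative.
for eo in https://eo.dev.example.com https://eo.prod.example.com; do
  curl -X POST "${eo}/deploy" \
    -H "Authorization: Bearer ${EO_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"application": "my-service", "version": "1.2.3"}'
done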

 

API Interface – EO has its own API endpoint for deployments, status, logs and the like. This, combined with a yaml config describing its environment, is very powerful.
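
A sketch of that in practice (only /status is documented below; the logs path here is a hypothetical stand-in for EO’s log retrieval endpoint):

# Hostname is illustrative; one EO per environment.
curl -s -H "Authorization: Bearer ${EO_TOKEN}" https://eo.myenv.example.com/status
# Hypothetical logs path, standing in for EO's log endpoint
curl -s -H "Authorization: Bearer ${EO_TOKEN}" https://eo.myenv.example.com/logs/my-service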

 

Audit Trail – EO provides an audit trail of all changes to the environment through its logging to stdout.
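
Because the audit trail is just stdout, standard Kubernetes tooling is enough to follow it (namespace and deployment names here are illustrative):

# Tail EO's stdout, which doubles as the environment's audit trail
kubectl -n my-environment logs -f deployment/environment-operator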

 

Status – EO provides a /status endpoint for understanding the status of an environment, or of individual services within it via /status/${service}.
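
For example (the hostname is illustrative; the paths are the ones named above):

# Status of the whole environment
curl -s https://eo.myenv.example.com/status
# Status of a single service within it
curl -s https://eo.myenv.example.com/status/my-service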

 

Separate Build from Deploy – What we found was that, while our CI/CD pipeline is quite robust, it lacked the real-time feedback and audit capabilities our dev teams needed. Separating build from deploy allowed us to add these features, simplify its use and enable our dev teams to bring their own familiar pipelines to our project.

 

Minimize Risk and Scope of Impact – Because EO runs inside the Kubernetes cluster, we can limit its capabilities through Kubernetes service accounts to only its own namespace. This limits the risk and impact to other dev teams running in the same cluster; a dev would have to call an entirely different API endpoint to affect another environment. Furthermore, authentication is set up per EO instance, so separation of concerns between environments is easy to maintain.
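
A minimal sketch of that scoping with RBAC (all names are hypothetical, and your cluster’s role definitions may differ):

# Give EO a service account whose permissions stop at its own namespace
kubectl -n team-a create serviceaccount environment-operator
kubectl -n team-a create rolebinding environment-operator \
  --clusterrole=edit \
  --serviceaccount=team-a:environment-operator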

 

Simple Abstraction – Because EO is so simple to use, it has enabled our teams to get up and running much faster in Kubernetes. Very little prior knowledge is required: teams can keep their same pipelines by using a common DSL in our Jenkins plugin, and get all the real-time information from one place per environment.

 

BYO CI/CD – This is pretty self-explanatory: many dev teams at Pearson already have their own CI/CD pipelines. They can continue using their pipeline or choose to use ours.

 

Empower our Dev teams – Ultimately EO is about empowering Dev teams to manage their own environments without requiring tons of prior knowledge to get started. Simply deploy EO and go.

 

Authentication – EO currently supports two kinds of authentication: token-based, where the token is pulled from a Kubernetes secret, and OAuth. We currently tie directly into Keycloak for auth.
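
A sketch of the token flow (the secret name and hostname are illustrative):

# Pull the shared token from a Kubernetes secret...
TOKEN=$(kubectl -n team-a get secret eo-auth-token \
  -o jsonpath='{.data.token}' | base64 -d)
# ...and present it to EO as a bearer token
curl -s -H "Authorization: Bearer ${TOKEN}" https://eo.team-a.example.com/status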

 

Plugin (DSL) for Jenkins – Because most of our Dev teams run Jenkins, we wrote a plugin to hook directly into it. Other plugins could very easily be written.

 

Docker Registries – EO can connect to private, public, Google Cloud and Docker Hub registries.

 

As you can see, Environment Operator has a fair number of capabilities built in, but we aren’t stopping there.

Near term objectives:

  • StatefulSets
  • Kubernetes Jobs
  • Prometheus

 

GitHub:

https://github.com/pearsontechnology/environment-operator

https://github.com/pearsontechnology/environment-operator-jenkins-plugin

 

Let us know what you think!

@devoperandi

Kong API Gateway

Why an API gateway for microservices?

API gateways can be an important part of a microservices / serverless architecture.

API gateways can assist with:

  • managing multiple protocols
  • aggregating web components across multiple backend microservices (backend-for-frontend)
  • reducing the number of round-trip requests
  • managing auth between microservices
  • TLS termination
  • rate limiting
  • request transformation
  • IP blocking
  • ID correlation
  • and many more…

Kong Open Source API Gateway has been integrated into several areas of our platform.

Benefits:

  1. lightweight
  2. scales horizontally
  3. database backend only for config (unless basic auth is used)
  4. propagations happen quickly
    1. when a new configuration is pushed to the database, the other scaled Kong containers get updated quickly
  5. full service API for configuration
    1. the API is far more robust than the UI (see the sketch just after this list)
  6. supports some newer protocols like HTTP/2
    1. https://github.com/Mashape/kong/pull/2541
  7. Kong continues to work even when the database backend goes away
  8. quite a few authentication plugins are available for Kong
  9. ACLs can be utilized to assist with authorization
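
As a quick illustration of point 5, the Admin API exposes read endpoints alongside the configuration ones (assuming the default admin port, 8001):

# Health and state of this Kong node and its datastore
curl -s http://localhost:8001/status
# List the APIs currently configured (Kong 0.x object model)
curl -s http://localhost:8001/apis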

Limitations for us (not necessarily you):

  • Kong doesn’t have enough capabilities around metrics collection and audit compliance
  • it’s written in Lua (we have only one person with extensive skills in this area)
  • we had to write several plugins to add functionality our requirements called for

 

We run Kong as containers that are quite small in resource consumption. As an example, the gateway for our documentation site consumes minimal CPU and 50MB of RAM, and if I’m honest, we could probably reduce that further.

We anticipate Kong fulfilling a very large deployment for us in the near future, as one of our prime customers (an internal Pearson development team) is also adopting Kong.

Kong is capable of supporting both Postgres and Cassandra as storage backends. I’ve chosen Postgres because Cassandra seemed like overkill for our workloads, but both work well.

Below is an example of a workflow for a microservice using Kong.

In the diagram above, the request contains the /v1/lec URI, which Kong uses to route to the correct microservice. In line with this request, Kong can trigger an OAuth workflow, or even choose not to authenticate specific URIs like /health or /status if need be. As an example, we use one type of authentication for webhooks and another for users.
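
A rough sketch of that wiring against the 0.x-era Admin API; the API name and upstream URL below are illustrative, not our real services:

# Register the API so /v1/lec routes to the right microservice
curl -i -X POST http://localhost:8001/apis \
  --data 'name=lec-service' \
  --data 'uris=/v1/lec' \
  --data 'upstream_url=http://lec.default.svc.cluster.local:8080'

# Attach Kong's OAuth 2.0 plugin to that API; unauthenticated paths such
# as /health could be served by a separate API definition without it.
curl -i -X POST http://localhost:8001/apis/lec-service/plugins \
  --data 'name=oauth2'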

In the very near future we will be open sourcing a few plugins for Kong, for things we felt were missing:

  • rewrite rules plugin
  • HTTPS redirect plugin (I think someone else also has one)
  • dynamic access control plugin

all written by Jeremy Darling on our team.

We deploy Kong alongside the rest of our container applications and scale them as any other microservice. Configuration is completed via curl commands to the Admin API.

The Admin API responds with both an HTTP status code (200 OK) and, for a POST request, a JSON object containing the result of the call.
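
For example, enabling rate limiting (from the gateway capability list earlier) on the illustrative lec-service API:

# Add the rate-limiting plugin; the limit value is illustrative
curl -i -X POST http://localhost:8001/apis/lec-service/plugins \
  --data 'name=rate-limiting' \
  --data 'config.minute=100'
# Kong answers with a 2xx status and a JSON object describing the new
# plugin, roughly: {"name":"rate-limiting","config":{"minute":100},...}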

Automation has been put in place to enable us to configure Kong and completely rebuild the API Gateway layer in the event of a DR scenario. Consider this our “the database went down, backups are corrupt, what do we do now?” scenario.
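
A minimal sketch of that rebuild, assuming the API definitions are kept as JSON files in version control (the directory layout is hypothetical):

# Replay saved API definitions against a freshly bootstrapped Kong
for f in ./kong-config/apis/*.json; do
  curl -sf -X POST http://localhost:8001/apis \
    -H 'Content-Type: application/json' \
    --data @"$f" || echo "failed to restore $f"
done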

We have even taken it so far as to wrap it in a Universal API, so Kong gateways can be configured across multiple geographically dispersed regions that serve the same microservices, while keeping the Kong databases separate and local to their region.

In the end, we chose Kong because it has the right architecture to scale, has a good number of the capabilities one would expect in an API gateway, sits on Nginx (a well-known, stable proxy technology), is easy to consume, and is API-driven and flexible, yet small enough that we can choose different implementations depending on the requirements of the application stack.

Example Postgres Kubernetes config

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - name: pgsql
    port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    app: postgres
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.4
          env:
            - name: POSTGRES_USER
              value: kong
            - name: POSTGRES_PASSWORD
              # plaintext for brevity here; a Kubernetes secret is a better home for this
              value: kong
            - name: POSTGRES_DB
              value: kong
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: pg-data
      volumes:
        - name: pg-data
          # emptyDir is ephemeral: the database is lost if the pod is rescheduled.
          # Use a persistent volume for anything beyond a demo.
          emptyDir: {}

 

Kubernetes Kong config for Postgres

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  ports:
  - name: kong-proxy
    port: 8000
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-ssl
    port: 8443
    targetPort: 8443
    protocol: TCP
  selector:
    app: kong
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kong-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kong-deployment
        app: kong
    spec:
      containers:
      - name: kong
        image: kong
        env:
          - name: KONG_PG_PASSWORD
            value: kong
          - name: KONG_PG_HOST
            value: postgres.default.svc.cluster.local
          - name: KONG_HOST_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
        # Each pod advertises its own IP to Kong's Serf-based cluster, and
        # nginx runs in the foreground so the container stays up.
        command: [ "/bin/sh", "-c", "KONG_CLUSTER_ADVERTISE=$(KONG_HOST_IP):7946 KONG_NGINX_DAEMON='off' kong start" ]
        ports:
        - name: admin
          containerPort: 8001
          protocol: TCP
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-ssl
          containerPort: 8443
          protocol: TCP
        - name: serf-tcp
          containerPort: 7946
          protocol: TCP
        - name: serf-udp
          containerPort: 7946
          protocol: UDP

 

If you look at the configs above, you’ll notice they do not expose Kong externally. This is because we use ingress controllers, so here is an ingress example.

Ingress Config example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    name: kong
  name: kong
  namespace: ***my-namespace***
spec:
  rules:
  - host: ***some_domain_reference***
    http:
      paths:
      - backend:
          serviceName: kong-proxy
          servicePort: 8000
        path: /

 

 

Docker container – Kong

https://hub.docker.com/_/kong/

 

Kubernetes Deployment for Kong

https://github.com/Mashape/kong-dist-kubernetes

 

@devoperandi