How to use the StackStorm AWS (boto3) pack to interact with AWS services

Introduction

StackStorm (st2) is a platform you can use as an automation framework. St2 has many features to support automating software delivery on-prem or in the cloud.

St2 packs are the units of content deployment, and you can combine actions from one or more packs to create workflows.

In this blog post, I'm going to demonstrate how to use the st2 aws boto3 pack to manage services on AWS.

Setup St2 environment

First of all, you need a development environment with st2 installed. It is very easy to set up a Vagrant box with st2. You can use the link below to spin up a Vagrant environment with st2 instantly.

st2vagrant

Also follow the st2 Quick Start guide to get some hands-on experience. You need to familiarize yourself with the concepts of packs, actions, etc.

Quick Start

St2 AWS pack

Once you have set up the environment, it is super simple to install a pack. There are hundreds of packs available for you to download, and you can also contribute to pack development. Use the link below to search for a pack based on your needs. For example, maybe you want the ansible pack to drive Ansible via st2.

Stackstorm Exchange

On this occasion, however, I want to install the st2 aws pack. You can simply search StackStorm Exchange and copy the command to install the pack.

st2 pack install aws

This is one version of the aws pack, and it ships with lots of individual actions. The aws boto3 pack, by contrast, is both more powerful and simpler, for a couple of very good reasons: simplicity and minimal (or zero) maintenance of the pack. Suppose a boto3 method gains a new parameter in the future; aws.boto3action needs zero changes to support it. I'd rather say that this action is immune to changes in the underlying boto3 methods.

You can install the aws boto3 pack using the command below:

st2 pack install aws=boto3

With this pack, you need only two actions to do anything on AWS, which is why I said it is very simple. Those two actions are:

  1. aws.assume_role
  2. aws.boto3action

The aws.assume_role action

This action is used to obtain AWS credentials by assuming an AWS IAM role. For example, see the code snippet below.

assume_role:
  action: aws.assume_role
  input:
    role_arn: <% $.assume_role %>
  publish:
    credentials: <% task(assume_role).result.result %>

In the preceding example, the aws.assume_role action has one input parameter: an AWS IAM role with the proper permissions. Basically, the role needs the permissions required to perform the relevant activities on AWS. For example, it may need ec2:* permission to manage EC2 related activities, autoscaling:* permission to carry out Auto Scaling related activities, and so on.

A sample policy document for the assumed role would look something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:*",
        "autoscaling:*",
        "ssm:SendCommand",
        "ssm:DescribeInstanceInformation",
        "ssm:GetCommandInvocation"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

When you pass an IAM role to assume_role, it returns AWS credentials, and those credentials are saved in a variable for later use. See the credentials variable under the publish key in the preceding YAML snippet.

The aws.boto3action action

Let's assume we need to get the information regarding a Launch Configuration. As per boto3, the relevant action to get Launch Configuration information is describe_launch_configurations. See the link below:

http://boto3.readthedocs.io/en/latest/reference/services/autoscaling.html#AutoScaling.Client.describe_launch_configurations

The describe_launch_configurations action comes under the AutoScaling service. Once you know this information, you can invoke this boto3 action from within st2 as shown below:

action: aws.boto3action
input:
  region: <% $.region %>
  service: autoscaling
  action_name: describe_launch_configurations
  params: <% dict(LaunchConfigurationNames => list($.lc_name)) %>
  credentials: <% $.credentials %>

Let’s go through each input in detail.

region: The AWS region in which you want to perform the action.
service: The AWS service category. For example, ec2, autoscaling, efs, iam, kinesis, etc. Check all the available services in boto3 using the link below:

Available Services

action_name: The name of the boto3 action. In the above example, it is describe_launch_configurations.

params: Input parameters for the action, describe_launch_configurations in this case. They have to be passed as a dict. You can refer to the boto3 documentation for the action to get a detailed explanation of the parameters and their data types. Please note that lc_name is a workflow input parameter which specifies the name of the Launch Configuration.

credentials: The temporary credentials published earlier by the aws.assume_role task.
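
Putting the two actions together, a minimal Mistral workflow might look something like the sketch below. The workflow name, input names and task wiring are illustrative; the two action calls themselves mirror the snippets above.

version: '2.0'

examples.describe_launch_configuration:
  type: direct
  input:
    - region
    - assume_role
    - lc_name
  tasks:
    assume_role:
      # obtain temporary credentials for the given IAM role
      action: aws.assume_role
      input:
        role_arn: <% $.assume_role %>
      publish:
        credentials: <% task(assume_role).result.result %>
      on-success:
        - describe_launch_configuration
    describe_launch_configuration:
      # call boto3 describe_launch_configurations with the published credentials
      action: aws.boto3action
      input:
        region: <% $.region %>
        service: autoscaling
        action_name: describe_launch_configurations
        params: <% dict(LaunchConfigurationNames => list($.lc_name)) %>
        credentials: <% $.credentials %>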

You should feel free to poke around and see how this st2 aws boto3 pack works.

The aws.boto3action action is used to execute the relevant boto3 calls via st2.

Summary

Six months ago, I started using st2. In fact, Mike is the one who is responsible: he tasked me with database automation work using st2, so I definitely should thank him for that because I learned quite a bit.

The aws boto3 pack is designed with an eye towards the future, which is why it is protected from changes in the boto3 world; I believe that is the most important factor when it comes to software design. When you start using this pack, it will quickly become apparent how easy it is to use.

Discussion on the future of the AWS pack

Cheers!

My exit from a great gig

Today is my last day as a Principal Cloud Platform Architect at Pearson. Over the last few weeks I've had a bit of a retrospective on my time at Pearson: our accomplishments, our failures, our team and all the opportunity it has created. I'm amazed at how far we have come. I'm thankful for the opportunity and the freedom to reach beyond what many thought possible.

I left nothing on the table. I gave it all I had.

But one question still stands.

What IS Bitesize?

Bitesize is a platform purpose built for the future of Pearson.

It's a combination of interwoven technologies and application development methodologies that has resulted in advancements far beyond what any cloud provider can currently attain.

It's a team of engineers that believe in, support and challenge themselves and others both in and outside the team.

It's a group of leaders that believe in their team and are willing to stick their neck out for what's right.

It's a philosophy of change.

It's an evolving set of standards that increases the fidelity of these interwoven pieces.

Bitesize is the convergence of disparate teams to make something greater than any of them could individually.

It's a treadmill of technology.

 

As I began thinking about what has been accomplished I decided to write down a few. I know I will leave many unsaid but hopefully this will give a view into just how much has been accomplished. Many may see this list and think one of two things: “That’s total bullshit, no way they did all that” or better yet “Ok I buy it, but have you reached nirvana?”

My answer to the former – "test your bullshit meter, it's a bit off"

My answer to the latter – “as soon as one reaches a star, they realize how many other stars they want to reach”

 

A few achievements to date:

Fully automated platform and application upgrades

Completely scalable CI/CD pipeline treated as cattle (i.e. if they go away, they self-configure when they come back up)

Built-in log and data aggregation per Kubernetes namespace

Deep cloud provider integrations without lock-in

Fully automated database provisioning (containers and virtual machines)

Dynamic Certificate Management using Hashicorp Vault fully integrated with Load Balancers

100% availability for the platform and applications through critical business periods (to my knowledge this had not been achieved until now at Pearson)

Dynamic Application Configuration

Immutable Application architecture

OAuth into the platform and various infrastructure components

Universal API for single point of use

Audit Control and Compliance throughout the stack

Baked in Enterprise Governance

Highly secure, full BGP mesh across geographic regions, capable of standing up new endpoints in under 10 seconds

8.5 Million concurrent (and I do mean concurrent) user performance test, 150-250ms avg response

Enterprise Chargeback model

Dynamic CIDR provisioning (NSOT) for AWS and Kubernetes

Open Sourced Authz webhook resulting in its adoption by CoreOS

Automated generation of StackStorm AWS packs

Contributed StackStorm Kubernetes Pack to StackStorm Exchange

Contributing next generation (over 106 new packs) of StackStorm AWS Packs to StackStorm Exchange (currently in incubator)

Open Sourced many new technologies including Environment Operator, StackStorm Packs, Kong plugins, Kubernetes Test Harness, Nginx Controller, Jenkins plugin for Environment Operator, and CI/CD pipeline

On-demand Locust (perf testing suite) on Kubernetes using Iron Functions, deployed in under 10 seconds

Integrated Monitoring/Alerting throughout the stack

Self-onboarding of applications through to production with little or no assistance from the Bitesize team

Congrats team. You’ve got this.
@devoperandi

StackStorm for Kubernetes just took a giant leap forward (beta)

 

came up with it one morning around 4am while trying to get the baby to sleep.

i’m pretty proud. mostly because it works 😉

 – Andy Moore

 

As many of you know, my team began integrating StackStorm with Kubernetes via ThirdPartyResources (TPR), which we showed off at KubeCon_London in March 2016. This was a great start to our integrations with Kubernetes and allowed us to expand our capabilities around managing datastores simply by posting a TPR to the Kubernetes API, allowing StackStorm to build/deploy/manage our database clusters automatically.

This, however, only worked with ThirdPartyResources. In fact, it only worked with the 'beta' TPRs, which were significantly revamped before making it into GA.

With that, Andy Moore figured out how to automatically generate a StackStorm pack crammed full of exciting new capabilities for both StackStorm Sensors and Actions.

Link:

https://github.com/pearsontechnology/st2contrib/tree/bite-1162/packs/kubernetes

You will notice this has not been committed back upstream to StackStorm yet. Our latest version diverges significantly from the original pack we pushed, and we need to work with the StackStorm team on the best approach to move forward.

@stackstorm if you want to help us out with this, we would be very appreciative.

screen-shot-2016-12-02-at-2-37-51-pm

The list of new capabilities for Kubernetes is simply astounding. Here are just a few:

Authentication
RBAC
HorizontalPodAutoscalers
Batch Jobs
CertificateSigningRequests
ConfigMaps
PersistentVolumes
Daemonsets
Deployments/DeploymentRollBack
Ingress
NetworkPolicy
ThirdPartyResources
StorageClasses
Endpoints
Secrets

Imagine being able to configure network policies through an automated StackStorm workflow based on a particular project's needs.

Think about how RBAC could be managed using our Kubernetes Authz Webhook through StackStorm.

Or how about kicking off Kubernetes Jobs to administer some cluster-level cleanup activity, but handing that off to your NOC.

Or allowing your Operations team to patch a HorizontalPodAutoscaler through a UI.

We could build a metadata framework derived from the Kubernetes API annotations/labels for governance.
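
As a purely illustrative sketch (the action name and parameters below are hypothetical and may not match what the generated pack actually exposes), a workflow task that applies a NetworkPolicy to a project's namespace could look something like this:

apply_network_policy:
  # hypothetical action name; check the generated pack for the real one
  action: kubernetes.create_namespaced_network_policy
  input:
    namespace: <% $.project_namespace %>
    body: <% $.network_policy_spec %>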

The possibilities are now literally endless. Mad props go out to Andy Moore for all his work in this endeavor.

 

Ok so why am I listing this as beta?

There is a freak ton of capabilities in our new st2 pack that we haven’t finished testing. So if you are adventurous, want to play with something new and can help us, we would love your feedback.

Thus far our testing has included the following:

Secrets

Services

Deployments

Ingresses

Persistent Volumes

Replication Controllers

Quotas

Service Accounts

Namespaces

Volumes

 

Hope you get as excited about this as we are. We now have a way to rapidly integrate Kubernetes with ….. well …… everything else.

@devoperandi

 

Note: As soon as we have cleaned up a few things with the generator for this pack, we’ll open source it to the community.

 

Past Blogs around this topic:

KubeCon 2016 Europe (Slides)

 

 

Kubernetes, StackStorm and third party resources

 

Kubernetes, StackStorm and third party resources – Part 2

 

 

Kubernetes: A/B Cluster Deploys

Everything mentioned here has been POCed and proven to work so far in testing. We run an application called Pulse, and we demoed it staying up throughout this A/B migration process.

Recently the team went through an exercise on how to deploy/manage a complete cluster upgrade. There were a couple of options discussed, along with what it would take to accomplish each.

  • In-situ upgrade – the usual
  • A/B upgrade –  a challenge

In the end, we chose to move forward with A/B upgrades, keeping in mind that the vast majority of our containers are stateless and thus quite easy to move around. Stateful containers are a bit bigger of a beast, but we are working on that as well.

We fully understand A/B migrations will be difficult and in-situ upgrades will be required at some point, but what the hell. Why not stretch ourselves a bit, right?

So here is the gist:

Build a Terraform/Ansible code base that can deploy an AWS VPC with all the core components. Minus the databases, the picture below is basically our shell: security groups, two different ELBs for live and pre-live, a bastion box, subnets and routes, our DNS hosted zone and a gateway.

Screen Shot 2016-08-05 at 1.05.59 PM

This would be its own Terraform apply, allowing our Operations folks to manage security groups, some global DNS entries, any VPN connections, bastion access, etc. without touching the actual Kubernetes clusters.

We would then have a separate Terraform apply that stands up what we call paasA, which includes an Auth server, our Kubernetes nodes for load balancing (running ingress controllers), master-a, and all of our minions, with the Kubernetes ingress controllers receiving traffic through the frontend-live ELB.

Screen Shot 2016-08-05 at 1.06.31 PM

Once we decide to upgrade, we would spin up paasB, which is essentially a duplicate of paasA running within the same VPC.

Screen Shot 2016-08-05 at 1.06.41 PM

When paasB comes up, it gets added to the frontend pre-live ELB for smoke testing, end-to-end testing and the like.

Once paasB is tested to our satisfaction, we make the switch to the live ELB while preserving the ability to switch back if we find something major preventing a complete cut-over.

Screen Shot 2016-08-05 at 1.06.54 PM

We then bring down paasA and wwwaaaahhhllllllaaaaaa, PaaS upgrade complete.

Screen Shot 2016-08-05 at 1.07.25 PM

Now, I think it's obvious I'm way oversimplifying this, so let's get into some details.

ELBs – They stay up all the time. Our Kubernetes minions running nginx-controllers get labelled in AWS, so we can quickly update the ELBs, whether live or pre-live, to point at the correct servers.

S3 buckets – We store our config files and various execution scripts in S3 for configuring our minions and the master. In this A/B scenario, each cluster (paasA and paasB) has its own S3 bucket where its config files are stored.

Auth Servers – Our PaaS deploys include our Keycloak Auth servers. We still need to work through how we transfer all the configurations, or IF we choose to no longer deploy auth servers as part of the cluster deploy but instead as part of the VPC.

Masters – New as part of each cluster deploy, in keeping with true A/B.

It's pretty cool to run two clusters side by side AND be able to bring them up and down individually. But where this gets really awesome is when we can basically take all applications running in paasA and deploy them into paasB. I'm talking about a complete migration of assets: Secrets, Jenkins, Namespaces, ACLs, Resource Quotas and ALL applications running on paasA, minus any self-generated items.

To be clear, we are not simply copying everything over. We are recreating objects using one cluster as the source and the other cluster as the destination. We are reading JSON objects from the Kubernetes API and using those objects along with their respective configuration to create the same object in another cluster. If you read up on Ubernetes, you will find their objectives are very much in-line with this concept. We also have ZERO intent of duplicating efforts long term. The reality is, we needed this functionality before the Kubernetes project could get there. As Kubernetes federation continues to mature, we will continue to adopt and change. Even replacing our code with theirs. With this in mind, we have specifically written our code to perform some of these actions in a way that can very easily be removed.

Now you are thinking, why didn't you just contribute back to the core project? We are, in several ways. Just not this one, because we love the approach the Kubernetes team is already taking here. We just needed something to get us by until they can make theirs production-ready.

Now, with that, I will say we have some very large advantages that enable us to accomplish something like this. Let's take Jenkins, for example. We run Jenkins in every namespace in our clusters. Our Jenkins machines are self-configuring and, for the most part, stateless. So while we have to copy infrastructure-level items like Kubernetes Secrets to paasB, we don't have to copy each application. All we have to do is spin up the Jenkins container in each namespace and let it deploy all the applications necessary for its namespace. All the code and configuration to do so exist in Git repos. Thus PaaS admins don't need to know how each application stack in our PaaS is configured. A really, really nice advantage.

Our other advantage is that our databases currently reside outside of Kubernetes (except some Mongo and Cassandra containers in dev) on virtual machines. So we aren't yet worried about migration of stateful data sets, which has made our work around A/B cluster migrations a much smaller stepping stone. We are, however, placing significant effort into this area. We are getting help from the guys at Jetstack.io around storage, and we are working diligently with people like @chrislovecnm to understand how we can bring database containers into production. Some of this is reliant upon new features like PetSets, and some of it requires changes in how various databases work. Take for example Cassandra snitches, where Chris has managed to create a Kubernetes-native snitch. Awesome work Chris.

So what about Consul? It's stateful, right? And it's in your cluster, yes?

Well that’s a little different animal. Consul is a stateful application in that it runs in a cluster. So we are considering two different ways by which to accomplish this.

  1. Externalize our flannel overlay network using aws-vpc and allow the /16s to route to one another. Then we could essentially create one Consul cluster across two Kubernetes clusters, allow the data to sync, and then decommission the Consul containers on paasA.
  2. Use some type of small application to keep two Consul clusters in sync for a period of time during the PaaS upgrade.

Both of the options above have benefits and limitations.

Option 1:

  • Benefits:
    • could use a similar method for other clustered applications like Cassandra.
    • would do a better job ensuring the data is synced.
    • could push data replication to the cluster level where it should be.
  • Limitations:
    • we could essentially bring down the whole Consul cluster with a wrong move. Thus some of the integrity imagined in a full A/B cluster migration would be negated.

Option 2:

  • Benefits:
    • keep a degree of separation between each Kubernetes cluster during upgrade so one can not impact the other.
    • pretty easy to implement
  • Limitations:
    • specific implementation
    • much more to keep track of
    • won’t scale for other stateful applications

I'm positive the gents on my team will call me out on several more, but this is what I could think of off the top of my head.

We have already implemented Option #2 in a POC of our A/B migration.

But we haven’t chosen a firm direction with this yet. So if you have additional insight, please comment back.

Barring stateful applications, what are we using to migrate all this stuff between clusters? StackStorm. We already have it performing other automation tasks outside the cluster, we have Python libraries k8sv1 and k8sv1beta for the Kubernetes API endpoints, and it's quite easy to extract the data and push it into another cluster. Once we are done with the POC, we'll be pushing this to our StackStorm repo here. @peteridah you rock.

In our current POC, we migrate everything. In our next POC, we’ll enable the ability to deploy specific application stacks from one cluster to another. This will also provide us the ability to deploy an application stack from one cluster into another for things like performance testing or deep breach management testing.

Lastly we are working through how to go about stateful container migrations. There are many ideas floating around but we would really enjoy hearing yours.

For future generations:

  • We will need some sort of metadata framework for those application stacks that span multiple namespaces to ensure we duplicate an environment properly.

 

To my team-

My hat off to you. Martin, Simas, Eric, Yiwei, Peter, John, Ben and Andy for all your work on this.

Kubernetes, StackStorm and third party resources – Part 2

Alright, finally ready to talk about StackStorm in depth. In the first part of this post I discussed Kubernetes ThirdPartyResources in some depth and how excited we are to use them. If you haven't read it, I would go back and start there. I, however, breezed over the event-driven automation piece (IFTTT) involving StackStorm. I did this for two reasons: 1) I was terribly embarrassed by the code I had written, and 2) it wasn't anywhere near where it should be in order for people to play with it.

Now that we have a submission to StackStorm's st2contrib, I'm going to open this up to others. Granted, it's not in its final form. In fact, there is a TON left to do, but it's working and we decided to get the community involved, should you be interested.

But first, let's answer the question that is probably weighing on many people's minds: why StackStorm? There are many other event-driven automation systems. The quick answer is they quite simply won us over. But because I like bullet points, here are a few reasons:

  1. None of the competition were in a position to work with, support or develop a community around IFTTT integration with Kubernetes.
  2. StackStorm is an open framework that you and I can contribute back to.
  3. It's built on OpenStack's Mistral workflow engine, so while this is a startup-like company, the foundation of what they are doing has been around for quite some time.
  4. Stable
  5. Open Source code base. (caveat: there are some components that are enterprise add-ons that are not)
  6. Damn their support is good. Let me just say, we are NOT enterprise customers of StackStorm and I personally haven’t had better support in my entire career. Their community slack channel is awesome. Their people are awesome. Major props on this. At risk of being accused of getting a kick-back, I’m a groupie, a fanboy. If leadership changes this (I’m looking at you Mr. Powell), I’m leaving. This is by far and away their single greatest asset. Don’t get me wrong, the tech is amazing but the people got us hooked.

For the record, I have zero affiliation with StackStorm. I just think they have something great going on.

 

As I mentioned in the first post, our first goal was to automate deployment of AWS RDS databases from the Kubernetes framework. We wanted to accomplish this because it would provide a seamless way for our dev teams to deploy their own database with a Kubernetes config based on a thirdpartyresource (currently in beta).

Here is a diagram of the magic:

Screen Shot 2016-02-13 at 5.48.07 PM

Alright here is what’s happening.

  1. We have a StackStorm Sensor watching the Kubernetes API endpoint at /apis/extensions/v1beta1/watch/thirdpartyresources for events. thirdpartyresource.py
  2. When a new event happens, the sensor picks it up and kicks off a trigger. Think of a trigger like a broadcast message within StackStorm.
  3. Rules listen to trigger types, which I think of as channels, kind of like a channel on the telly. A rule, based on some criteria, decides whether or not to act on any given event: it either drops the event or takes action on it. rds_create_db.yaml (see the sketch just after this list)
  4. An action-chain then performs a series of actions. Those actions can either fail or succeed, and additional actions happen based on the result of the last one. db_create_chain.yaml
    1. db_rds_spec munges the event data and turns it into usable information.
    2. From there, rds_create_database reaches out to AWS and creates an RDS database.
    3. And finally, configuration information and secrets are passed back to Consul and Vault for use by the application(s) within Kubernetes. Notice how the actions for Vault and Consul are grey; that's because they're not done yet. We are working on them this sprint.
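
To make the rule step a little more concrete, here is a rough sketch of what a rule like rds_create_db.yaml could look like. The trigger type, criteria field and action ref below are illustrative placeholders, not necessarily the names used in the real pack:

---
name: rds_create_db
pack: kubernetes
description: "Kick off the RDS create chain when a MysqlDb thirdpartyresource event arrives"
enabled: true
trigger:
  # illustrative trigger type emitted by the thirdpartyresource.py sensor
  type: kubernetes.thirdpartyresource
criteria:
  # only act on events for our MysqlDb kind (field path is illustrative)
  trigger.resource.kind:
    type: equals
    pattern: MysqlDb
action:
  # hand the event payload to the action-chain (db_create_chain.yaml)
  ref: kubernetes.db_create_chain
  parameters:
    payload: "{{ trigger }}"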

Link to the StackStorm Kubernetes Pack. Take a look at the Readme for information on how to get started.

Obviously this is just a start. We’ve literally just done one thing in creating a database but the possibilities are endless for integrations with Kubernetes.

I mentioned earlier the StackStorm guys are great. And I want to call a couple of them out. Manas Kelshikar, Lakshmi Kannan and Patrick Hoolboom. There are several others that have helped along the way but these three helped get this initial pack together.

The initial pack has been submitted as a pull request to StackStorm for acceptance into st2contrib. Once the pull request has been accepted to st2contrib, I would love it if more people in the Kubernetes community got involved and started contributing as well.

 

@devoperandi

 

Kubernetes, StackStorm and third party resources

WARNING!!! Beta – Not yet for production.

You might be thinking right now: third party resources? Seriously? With all the amazing stuff going on right now around Kubernetes, and you want to talk about that thing at the bottom of the list? Well, keep reading; hopefully by the end of this, you too will see the light.

Remember last week when I talked about future projects in my Python Client For Kubernetes blog? Well, here it is. StackStorm is quickly becoming a key piece of our infrastructure.

What's StackStorm, you might ask? StackStorm is an open source, event-driven automation system which originally hailed from OpenStack's Mistral workflow project. In fact, some of its libraries are from Mistral, but it's no longer directly tied to OpenStack. It's a standalone setup that rocks. As of the time of this writing, StackStorm isn't really container friendly, but they are working to remediate this and I expect a beta to be out in the near future. Come on guys, hook a brother up.

For more information on StackStorm – go here.

I'll be the first to admit, their documentation took me a little while to grok. Too many big words and not enough pics to describe what's going on. But once I got it, nothing short of meeting Einstein could have stopped my brain from looping through all the possibilities.

Let's say we want to manage an RDS database from Kubernetes. We should be able to create, destroy, and configure it in conjunction with the application we are running, and even more importantly, it must be a fully automated process.

So what does it take to accomplish something like this? Well, in our minds, we needed a way to represent objects externally (i.e. third party resources) and some type of automation that can watch those events and act on them (à la StackStorm).

Here is a diagram of our intentions. We have a couple of loose ends to complete, but soon we'll be capable of performing this workflow for any custom resource. A database just happens to be the first requirement we had that fit the bill.

Screen Shot 2016-02-05 at 8.24.51 PM

In the diagram above, we perform the following basic actions:

– Input thirdpartyresource to Kubernetes

– StackStorm watches for resources being created, deleted OR modified

– If triggered – makes a call to the AWS API to execute an action

– Receives back information

– On creation or deletion, adds or removes the necessary information in Vault and Consul

 

Alright, from the top: what is a third party resource exactly? Well, it's our very own custom resource. Pods, endpoints and replication controllers are API resources; now we get our own.

Third Party Resources immediately stood out to us because we now have the opportunity to take advantage of all the built-in things Kubernetes provides (metadata, labels, annotations, versioning, API watches, etc.) while having the flexibility to define what we want in a resource. What's more, third party resources can be grouped or nested.

Here is an example of a third party resource:

metadata:
  name: mysql-db.prsn.io
  labels:
    resource: database
    object: mysql
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
description: "A specification of database for mysql"
versions:
  - name: stable/v1

This looks relatively normal, with one major exception: metadata.name = mysql-db.prsn.io. I've no idea why, but you must have a fully qualified domain in the name in order for everything to work properly. The other oddity is the "-": it must be there, and you must have one, because the dashed name in front of the domain is what gets turned into the resource's <CamelCaseKind>.

Doing this creates

/apis/prsn.io/stable/v1/namespaces/<namespace>/mysqldbs/...

By creating the resource above, we have essentially created our very own API endpoint from which to get all resources of this type. This is awesome because now we can create MysqlDb resources and watch them under one API endpoint for consumption by StackStorm.

Now imagine applying a workflow like this to ANYTHING you can wrap your head around. Cool huh?

Remember this is beta, and creating resources under the thirdpartyresource (in this case mysqldbs) requires a little curl at this time.

{
   "metadata": {
     "name": "my-new-mysql-db"
   },
   "apiVersion": "prsn.io/stable/v1",
   "kind": "MysqlDb",
   "engine_version": "5.6.23",
   "instance_size": "huge"
}

There are three important pieces here: 1) it's JSON; 2) apiVersion is the FQDN + versions.name of the thirdpartyresource; 3) kind = MysqlDb, the <CamelCaseKind>.

Now we can curl the Kubernetes api and post this resource.

curl -H "Content-Type: application/json" -d '{"metadata":{"name":"my-new-mysql-database"},"apiVersion":"prsn.io/stable/v1","kind":"MysqlDb","engine_version":"5.6.23","instance_size":"huge"}' https://kube_api_url

 

Now if you hit your Kubernetes API endpoint, you should see something like this:

{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/prsn.io",
    "/apis/prsn.io/stable/v1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/resetMetrics",
    "/swagger-ui/",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

Our very own Kubernetes endpoint now lives at /apis/prsn.io/stable/v1.

And here is a resource under the mysql thirdpartyresource located at:

/apis/prsn.io/stable/v1/mysqldbs
{
  "kind": "MysqlDb",
  "items": [
    {
      "apiVersion": "prsn.io/stable/v1",
      "kind": "MysqlDb",
      "metadata": {
        "name": "my-new-mysql-db",
        "namespace": "default",
        "selfLink": "/apis/prsn.io/stable/v1/namespaces/default/mysqldbs/my-new-mysql-db"
        ...
      }
    }
  ]
}

If your mind isn’t blown by this point, move along, I’ve got nothin for ya.

 

Ok on to StackStorm.

Within StackStorm we have a Sensor that watches the Kubernetes API for a given third party resource. In this example, it's looking for MysqlDb resources. From there it compares the list of MysqlDb resources against the list of MySQL databases (RDS in this case) that already exist and determines what, if any, actions it needs to perform. The great thing about this is that StackStorm already has quite a number of what they call packs, namely an AWS pack, so we didn't have to do any of the heavy lifting on that end. All we had to do was hook in our Python client for Kubernetes, write a little Python to compare the two sets of data, and trigger actions based off the result.
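
For context, a Kubernetes watch on an endpoint like this streams one JSON event per change, shaped roughly like the example below (a sketch based on the MysqlDb resource shown earlier; the exact fields depend on the resource and endpoint being watched). The sensor turns events like this into StackStorm triggers that rules can act on.

{
  "type": "ADDED",
  "object": {
    "apiVersion": "prsn.io/stable/v1",
    "kind": "MysqlDb",
    "metadata": {
      "name": "my-new-mysql-db",
      "namespace": "default"
    },
    "engine_version": "5.6.23",
    "instance_size": "huge"
  }
}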

AWS/StackStorm Pack

It also has a local datastore so if you need to store key/value pairs for any length of time, that’s quite easy as well.

Take a look at the bottom of this page for operations against the StackStorm datastore.

We’ll post our python code as soon as it makes sense. And we’ll definitely create a pull request back to the StackStorm project.

Right now we are building the workflow to evaluate what actions to take. We'll update this page as soon as it's complete.

 

If you have questions or ideas on how else to use StackStorm and ThirdPartyResources, I would love to hear about them. We can all learn from each other.

 

 

@devoperandi

 

Other beta stuff:

deployments – https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/deployment.md

horizontalpodautoscaler – https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/horizontal-pod-autoscaler.md

ingress – http://kubernetes.io/v1.1/docs/user-guide/ingress.html

To be fair, I have already talked about this in my blog post about Load Balancing.

jobs – https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/jobs.md

 

 

No part of this blog is sponsored or paid for by anyone other than the author.