MongoDB 3.6.9 in Kubernetes (config only)

## Generate a key
# openssl rand -base64 741 > mongodb-keyfile
## Create k8s secrets
# kubectl create secret generic mongo-key --from-file=mongodb-keyfile
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
  namespace: metacenter
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  namespace: metacenter
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3.6.9
          env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          command:
          - /bin/sh
          - -c
          - >
            if [ -f /data/db/admin-user.lock ]; then
              mongod --bind_ip 127.0.0.1,$(MY_POD_IP) --replSet rs0 --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
            else
              mongod --auth --bind_ip 127.0.0.1,$(MY_POD_IP);
            fi;
          lifecycle:
            postStart:
              exec:
                command:
                - /bin/sh
                - -c
                - >
                  if [ ! -f /data/db/admin-user.lock ]; then
                    sleep 5;
                    touch /data/db/admin-user.lock;
                    if [ "$HOSTNAME" = "mongo-0" ]; then
                      mongo --eval 'db = db.getSiblingDB("admin"); db.createUser({ user: "admin", pwd: "<redacted>", roles: [{ role: "root", db: "admin" }]});';
                    fi;
                    mongod --shutdown;
                  fi;
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-key
              mountPath: "/etc/secrets-volume"
              readOnly: true
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
            - name: KUBE_NAMESPACE
              value: metacenter
            - name: MONGODB_USERNAME
              value: <redacted>
            - name: MONGODB_PASSWORD
              value: <redacted>
            - name: MONGODB_DATABASE
              value: <redacted>
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: mongo
      volumes:
      - name: mongo-key
        secret:
          defaultMode: 0400
          secretName: mongo-key
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 30Gi
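
With the config above saved to a file (mongo.yaml here is just an illustrative name), deploying and sanity-checking it might look something like this; the sidecar takes care of replica set membership, so rs.status() is mostly a verification step:

# Apply the manifests and watch the pod come up
kubectl apply -f mongo.yaml
kubectl -n metacenter get pods -l role=mongo -w

# Once the sidecar has initialized the replica set, verify it as the admin user
kubectl -n metacenter exec -it mongo-0 -- \
  mongo -u admin -p '<redacted>' --authenticationDatabase admin --eval 'rs.status()'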

Kubernetes – PodPresets

PodPresets are a cool new addition to container orchestration in Kubernetes v1.7 as an alpha capability. At first they seem relatively simple, but once I began to realize their current AND potential value, I came up with all kinds of potential use cases.

Basically, PodPresets inject configuration into any pod whose labels match the preset's selector. So what does this mean? Have a damn good labeling strategy. The injected configuration can come in the form of:

  • Environment variables
  • ConfigMaps
  • Secrets
  • Volumes/Volume Mounts

Everything in a PodPreset is merged into the pod spec unless there is a conflict, in which case the preset is not applied and the original pod spec wins.
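
Keep in mind PodPresets are alpha in 1.7, so they are off by default. Assuming you control the API server flags, enabling them looks roughly like this (the admission flag is --admission-control on 1.7/1.8 and --enable-admission-plugins on newer releases; "..." stands in for whatever plugins you already run):

# kube-apiserver flags to enable the settings API group and the PodPreset admission plugin
kube-apiserver \
  --runtime-config=settings.k8s.io/v1alpha1=true \
  --admission-control=...,PodPreset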

Benefits:

  • Reusable config across anything with the same service type (datastores, for example)
  • Simplified pod specs
  • Pod authors can pull in a PodPreset simply by setting labels

Example Use Case: What IF data stores could be configured with environment variables? I know, wishful thinking... but we can work around this. Then we could set up a PodPreset for MySQL/MariaDB to expose port 3306, configure the InnoDB storage engine, and apply other generic config to all MySQL servers that get provisioned on the cluster.

Generic MySQL Pod Spec:

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql-server
    preset: mysql-db-preset
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      command: ["mysqld"]
  initContainers:
    - name: init-mysql
      image: initmysql
      command: ['script.sh']

Now notice there is an init container in the pod spec, so no modification of the official MySQL image should be required (see the volume-sharing note after the script below).

The script executed in the init container could be written to templatize the MySQL my.ini file prior to starting mysqld. It may look something like this.

#!/bin/bash

cat >/etc/mysql/my.ini <<EOF

[mysqld]

# Connection and Thread variables

port                           = $MYSQL_DB_PORT
socket                         = $SOCKET_FILE         # Use mysqld.sock on Ubuntu, conflicts with AppArmor otherwise
basedir                        = $MYSQL_BASE_DIR
datadir                        = $MYSQL_DATA_DIR
tmpdir                         = /tmp

max_allowed_packet             = 16M
default_storage_engine         = $MYSQL_ENGINE
...

EOF
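
One thing the pod spec above doesn't show: for the my.ini written by the init container to be visible to the mysql container, both containers need to mount a shared volume at /etc/mysql. A minimal sketch of the extra pieces, using an emptyDir (the volume name is just a placeholder):

# Additions to the mysql pod spec above: a shared /etc/mysql for both containers
spec:
  volumes:
    - name: mysql-conf
      emptyDir: {}
  containers:
    - name: mysql
      volumeMounts:
        - name: mysql-conf
          mountPath: /etc/mysql
  initContainers:
    - name: init-mysql
      volumeMounts:
        - name: mysql-conf
          mountPath: /etc/mysql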


Corresponding PodPreset:

kind: PodPreset
apiVersion: settings.k8s.io/v1alpha1
metadata:
  name: mysql-db-preset
  namespace: somenamespace
spec:
  selector:
    matchLabels:
      preset: mysql-db-preset
  env:
    - name: MYSQL_DB_PORT
      value: "3306"
    - name: SOCKET_FILE
      value: "/var/run/mysql.sock"
    - name: MYSQL_DATA_DIR
      value: "/data"
    - name: MYSQL_ENGINE
      value: "innodb"


This was a fairly simple example of how MySQL servers might be implemented using PodPresets, but hopefully you can begin to see how PodPresets can abstract away much of the complex configuration.


More ideas –

Standardized Log configuration – Many large enterprises would like a logging standard. Say something simple like: all logs in JSON, formatted as key:value pairs. So what if we simply included that as configuration via PodPresets?

Default Metrics – A default set of metrics per language, depending on the monitoring platform used? Example: expose a default set of metrics for Prometheus and just bake it in through config.


I see PodPresets being expanded rapidly in the future. Some possibilities might include:

  • Integration with alternative Key/Value stores
    • Our team runs Consul (HashiCorp) to share/coordinate config, DNS and service discovery between container and virtual machine resources. It would be awesome to not have to bake envconsul or the consul agent into our Docker images.
  • Configuration injection from Cloud Providers
  • Secrets injection from alternate secrets management stores
    • A very similar pattern for us with Vault as with Consul: one single secrets/cert management store for container and virtual machine resources.
  • Cert injection
  • Init containers
    • What if Init containers could be defined in PodPresets?

I’m sure there are a ton more ways PodPresets could be used. I look forward to seeing this progress as it matures.


@devoperandi

MySQL + Galera + Xtrabackup (Errors and Fixes)

MySQL 5.6.21
Galera 25.3.5
Percona-xtrabackup-21 2.1.9-746-1.precise

Got error: 1047

Fix:
The database is probably in the 'Initialized' state. Need to restart the service and check the configuration.
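
Error 1047 ("WSREP has not yet prepared node for application use") generally means the node is not synced with the primary component. A quick way to check, assuming a local root shell on the node:

# All three should come back ON / Primary / Synced on a healthy node
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_ready';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"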


WSREP_SST: [ERROR] Error while getting data from donor node:  exit codes: 1 0 (20150325 09:17:28.755)
WSREP_SST: [ERROR] Cleanup after exit with status:32 (20150325 09:17:28.756)
WSREP_SST: [INFO] Removing the sst_in_progress file (20150325 09:17:28.759)
2015-03-25 09:17:28 6459 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'joiner' --address 'ip_address' --auth 'user:password' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '6459' --binlog 'binlog' : 32 (Broken pipe)
2015-03-25 09:17:28 6459 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-03-25 09:17:28 6459 [ERROR] WSREP: SST failed: 32 (Broken pipe)
2015-03-25 09:17:28 6459 [ERROR] Aborting
2015-03-25 09:17:28 6459 [Warning] WSREP:State transfer failed: -2 (No such file or directory)
2015-03-25 09:17:28 6459 [ERROR] WSREP: gcs/src/gcs_group.c:gcs_group_handle_join_msg():723: Will never receive state. Need to abort.

Fix:
  • Check that innodb_data_file_path=ibdata1:256M:autoextend does not exist in my.cnf
  • Check that xtrabackup isn't still running orphaned on either node ("ps aux | grep mysql" on the donor)
  • Check that percona-xtrabackup-21 is installed via apt-get (all servers should be running the same version)
  • Check that ssh keys are correct and the servers can ssh freely between each other
  • Make sure the sst_in_progress file does not exist on the joining server
  • Check that the joining node has an /etc/hosts entry for the donor (or DNS is good)
  • Check that the wsrep_sst_method on the donor at runtime is xtrabackup-v2
  • Make sure the root:password (wsrep_sst_auth) in /etc/mysql/my.cnf matches the actual local root password, or xtrabackup can't use it and will bomb
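
Most of that checklist can be run quickly from a shell on the donor and joiner. A rough pass, assuming the default Ubuntu/Debian paths (swap donor_host for the donor's real hostname):

# On the donor: look for orphaned SST / xtrabackup processes
ps aux | grep -E 'xtrabackup|wsrep_sst' | grep -v grep

# On every node: same xtrabackup package version everywhere
dpkg -l | grep xtrabackup

# On the joiner: the sst_in_progress marker should not exist
ls -l /var/lib/mysql/sst_in_progress 2>/dev/null

# On donor and joiner: SST method and auth as expected
grep -E 'wsrep_sst_method|wsrep_sst_auth' /etc/mysql/my.cnf

# On the joiner: confirm the donor resolves
getent hosts donor_host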


[ERROR] Unknown/unsupported storage engine: InnoDB
2015-03-25 14:33:48 31707 [ERROR] Aborting

Fix:
ibdata and log files are probably not the same size as the ones on the donor server. If innodb_data_file_path is set, this could be causing the problem.
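
One way to confirm this, assuming the default datadir and config paths, is to compare the InnoDB system tablespace and redo log files on the donor and the joiner, along with the settings that size them:

# Run on both donor and joiner and compare
ls -la /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile*
grep -E 'innodb_data_file_path|innodb_log_file_size' /etc/mysql/my.cnf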


Other interesting facts:

Syncing with xtrabackup:
– Creating or dropping a database during the sync will cause the node syncing to the cluster to drop out