So, for fun, I decided to move the blog off of Heroku. Wait, what? Why? Mainly because I think it’s important to keep learning things, and a great way to do that is to actually do something.

Docker

I’ve been using Docker in work settings for a while now - in fact I recently deployed a sizable infrastructure with it. So it made sense to try out Google Container Engine, even though it’s massively overkill for this kind of blog. In fact I ended up calling the project ‘overkill’ in the Google Cloud Console.

To do any of this - I needed to containerise the blog. Heroku does this transparently, so I obviously needed to Dockerfile it up - this was the initial version:

FROM nginx:stable-alpine
COPY public /usr/share/nginx/html

With a quick docker run of this, I had the blog running on localhost. Well, that was trivial.
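
For anyone following along, that step is roughly the following - the image tag is just a local name I’ve picked here, and you can map whatever host port you like onto 80:

$ docker build -t leepaio-blog .
$ docker run --rm -p 8080:80 leepaio-blog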

TLS

I use CloudFlare to front my blog. This is partly because I’m lazy, but also because having DNSSEC and the like managed by them makes life a lot easier. They recently added a service called ‘Origin Certificates’, where they give you a certificate for free (though not from a public CA) so that you get end-to-end TLS. TLS is good… even for a blog, so we want to use that.
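
As an aside, if you want to sanity-check the certificate they hand you before wiring it into anything, openssl will happily print the subject and expiry - cert.pem here is just whatever filename you saved the PEM as:

$ openssl x509 -in cert.pem -noout -subject -issuer -enddate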

To make use of it, I created a folder called _docker, put the PEM/key files in it, and created my nginx.conf in that folder:

user nginx;
worker_processes 1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        ssl_certificate /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/cert.key;
        ssl_session_timeout 1d;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;

        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        ssl_prefer_server_ciphers on;

        add_header Strict-Transport-Security max-age=15768000;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
    }
}

With this I can then modify my Dockerfile:

FROM nginx:stable-alpine
COPY public /usr/share/nginx/html
COPY _docker/nginx.conf /etc/nginx/nginx.conf
COPY _docker/cert.pem /etc/nginx/cert.pem
COPY _docker/cert.key /etc/nginx/cert.key
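
It’s worth a quick local sanity check before pushing this anywhere - rebuild, run it with 443 mapped to a local port, and curl it. The -k is needed because the Origin Certificate isn’t signed by a public CA (the tag and port mapping are just my local choices):

$ docker build -t leepaio-blog .
$ docker run --rm -p 8443:443 leepaio-blog
$ curl -k https://localhost:8443/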

Google Cloud

So I made my overkill project in the Google Cloud Console and then downloaded the gcloud SDK to my computer from the Google Cloud SDK Downloads page.

A note to the wise… do not use the apt-get version if you want to use the gcloud component installer. The curl approach, while insecure (for lots of reasons), doesn’t ask for root, so it’s safer than that Homebrew one-liner you’re used to on OSX.
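
For reference, the curl approach I mean is roughly this - it installs into your home directory and never asks for root:

$ curl https://sdk.cloud.google.com | bash
$ exec -l $SHELL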

Before I started on my computer, I went to the console and created a cluster of 2 VMs that were of the ‘small’ instance type - I don’t need dedicated resources for this.
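
If you’d rather do that bit from the command line instead of clicking around the console, the gcloud equivalent is roughly the following - g1-small is my guess at what the console labels ‘small’:

$ gcloud container clusters create overkill --num-nodes=2 --machine-type=g1-small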

Once that was all done, I needed the kubectl command. That’s easy:

$ gcloud components install kubectl

Now that I have all the bits I need, I can go ahead and build and push my container:

$ docker build -t gcr.io/PROJECT_ID/leepaio-blog:v1 .
$ gcloud docker push gcr.io/PROJECT_ID/leepaio-blog:v1

I had called my container cluster ‘overkill’, so the next step was to get the credentials and run it:

$ gcloud container clusters get-credentials overkill
$ kubectl run leepaio-blog --image=gcr.io/PROJECT_ID/leepaio-blog:v1 --port=443
$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
leepaio-blog  1         1         1            1           1m
$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
leepaio-blog-2719256167-7z1cq  1/1       Running   0          1m

Woah - that’s easy. I guess I want to see it from the outside world so let’s do that:

$ kubectl expose deployment leepaio-blog --type="LoadBalancer"

After a bit, kubectl get services leepaio-blog gives me an external IP. I go to that IP (over https) in my browser. It works! Awesome.
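
If you don’t fancy re-running the full get services until the IP appears, jsonpath can pull the external IP out directly once the load balancer has been provisioned:

$ kubectl get services leepaio-blog
$ kubectl get service leepaio-blog -o jsonpath='{.status.loadBalancer.ingress[0].ip}'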

Now I can just scale that to, say, 4 instances using:

$ kubectl scale deployment leepaio-blog --replicas=4

Very quick/easy. I can see why people enjoy using this.
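
If you want to confirm the extra replicas actually came up, listing the pods by label works - kubectl run applies a run=leepaio-blog label by default, assuming nothing has overridden it:

$ kubectl get pods -l run=leepaio-blog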

Closing thoughts

Those of you who already use Kubernetes will notice this isn’t far off the Hello World tutorial on the Kubernetes website. That’s because, honestly, it isn’t. A simple website with no storage requirements really is about as simple as Hello World.

With my system administrator hat on, I would like to see better Puppet integration. There is some, but there are a few missing pieces. That way users could define their whole infrastructure in code - Kubernetes is part of that story, but having part of your infrastructure defined in YAML files for Kubernetes and part of it defined in either Puppet or Chef is disjointed, and larger organisations are obsessed with that ‘Single Pane of Glass’ marketing buzz phrase.