So for fun I decided to move the blog off Heroku. Wait, what? Why? Mainly because I think it's important to keep learning things, and a great way to do that is to actually build something.
I’ve been using Docker in work settings for a while now - in fact I recently deployed a sizable piece of infrastructure with it. It made sense to try out Google Cloud containers, even though it’s massively overkill for this kind of blog. In fact I ended up calling the project ‘overkill’ in the Google Cloud Console.
To do any of this I needed to containerise the blog. Heroku does this transparently, so I obviously needed to Dockerfile it up - this was the initial version:
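(The original Dockerfile didn’t survive the move to this page; here’s a minimal sketch of roughly what it would look like - the base image and the `_site` output directory are assumptions for a static-site blog.)

```dockerfile
# Sketch (assumed): serve the generated static site with stock nginx on port 80
FROM nginx:latest
COPY _site /usr/share/nginx/html
```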
With a quick `docker run` of this I had the blog running on localhost. Well, that was trivial.
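Roughly like this - the image tag and host port are placeholders:

```shell
# Build the image and map the container's port 80 to localhost:8080
docker build -t leepaio-blog .
docker run -p 8080:80 leepaio-blog
```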
I use CloudFlare to front my blog. Partly because I’m lazy, but also because having DNSSEC and the like managed by them makes life a lot easier. They recently added a service called ‘Origin Certificates’, where they issue you a certificate for free (though not from a public CA) so you get end-to-end TLS. TLS is good… even for a blog, so we want to use that.
To make use of it, I created a folder called `_docker`, put the PEM/key files in it, and created my nginx.conf in that folder:
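(The full config was lost here - the original ran to about 59 lines. This is an illustrative sketch of the relevant parts; the certificate filenames and paths are assumptions.)

```nginx
server {
    listen 443 ssl;
    server_name leepa.io;

    # CloudFlare Origin Certificate and key, copied in at image build time
    ssl_certificate     /etc/nginx/ssl/origin.pem;
    ssl_certificate_key /etc/nginx/ssl/origin.key;

    root  /usr/share/nginx/html;
    index index.html;
}

server {
    # Redirect plain HTTP to HTTPS
    listen 80;
    return 301 https://$host$request_uri;
}
```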
With this I can then modify my Dockerfile:
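(Again the original block is gone; a sketch of what a five-line modification would plausibly look like, with the `_docker` paths from above - the destination paths are assumptions.)

```dockerfile
FROM nginx:latest
COPY _docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY _docker/origin.pem /etc/nginx/ssl/
COPY _docker/origin.key /etc/nginx/ssl/
COPY _site /usr/share/nginx/html
```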
So I made my ‘overkill’ project in the Google Cloud Console and downloaded the gcloud SDK to my computer from the Google Cloud SDK Downloads page.
A note to the wise… do not use the apt-get version if you want to use the gcloud component installer. The curl approach, while insecure (for lots of reasons), doesn’t ask for root, so it’s safer than that Homebrew one-liner you’re used to on OS X.
Before I started on my computer, I went to the console and created a cluster of 2 VMs of the ‘small’ instance type - I don’t need dedicated resources for this.
Once that was all done I needed the `kubectl` command. That’s easy:
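The component installer mentioned above handles it:

```shell
gcloud components install kubectl
```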
Now I have all the bits I need, I can go ahead and build and push my container:
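The commands themselves were lost here; something along these lines, where `my-overkill-project` stands in for the real project ID:

```shell
# Tag the image for Google Container Registry, then push it via gcloud
docker build -t gcr.io/my-overkill-project/leepaio-blog .
gcloud docker push gcr.io/my-overkill-project/leepaio-blog
```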
I had called my container cluster ‘overkill’, so the next step was to get the credentials and run it:
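(The original transcript is missing; a sketch of the two steps, with the project ID as a placeholder:)

```shell
# Point kubectl at the 'overkill' cluster, then run the container
gcloud container clusters get-credentials overkill
kubectl run leepaio-blog --image=gcr.io/my-overkill-project/leepaio-blog --port=443
```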
Woah - that’s easy. I guess I want to see it from the outside world so let’s do that:
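Exposing the deployment behind a load balancer does that; a sketch, assuming `kubectl run` created a deployment named `leepaio-blog`:

```shell
kubectl expose deployment leepaio-blog --type=LoadBalancer --port=443
```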
After a bit, `kubectl get services leepaio-blog` gives me an external IP. I go to that IP (over https) in my browser. It works! Awesome.
Now I can just scale that to, say, 4 instances using:
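A sketch of the scaling command, again assuming the deployment name from earlier:

```shell
kubectl scale deployment leepaio-blog --replicas=4
```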
Very quick/easy. I can see why people enjoy using this.
Those of you who already use Kubernetes will notice this isn’t far off the Hello World tutorial on the Kubernetes website. That’s because, honestly, it isn’t. A simple website with no storage requirements is as simple as Hello World.
With my system administrator hat on, I’d like to see better Puppet integration. There’s some, but there are a few missing pieces. The goal is for users to define their whole infrastructure in code - Kubernetes is part of that story, but having part of your infrastructure defined in YAML files for Kubernetes and part of it defined in Puppet or Chef is disjointed, and larger organisations are obsessed with that ‘Single Pane of Glass’ marketing buzz-phrase.