This falls under the #protip category of things…
If you're using the Docker Beta for Windows (the Hyper-V one) and you find that you can't do any docker pull, try disabling Malicious Website Protection in your anti-virus/anti-malware suite. Yep - it turns out these things that filter traffic might actually cause a problem for you!
People on the forums kept thinking it was to do with IPv6 - it's not.
It's now just annoying that I have to keep turning it back on after doing dev work.
Now that I have the blog up on Google Container Engine I realise it's very basic. That's fine - but since we followed the tutorial-style way of getting it online using nginx (with TLS offloaded there), we should really migrate it to a proper load balancer, where Google can offload the TLS and we can take proper advantage of their global load balancing platform.
I wanted to do this without stopping anything - just like I would with something in production. So here goes…
First and foremost we have to upload our Certificate/Key combo to Kubernetes via the create secret command.
Of note, and really important: the files must be called tls.crt and tls.key! I found this out the hard way - it is documented, but not very clearly.
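As a sketch, the create secret invocation looks something like this - blog-tls is the name I give the secret, and the key names inside it come from the file names, which is why they must be exact:

```shell
# 'blog-tls' is referenced later by the Ingress; file names must be exactly these
kubectl create secret generic blog-tls \
  --from-file=tls.crt \
  --from-file=tls.key
```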
Open up port 80 on the nodes
To do this I needed to edit the service. This was the bit I got confused about initially as the ‘service’ also describes the LoadBalancer. Turns out this is fine and we’ll later come back and make it not be an external LoadBalancer. For now:
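Reconstructed from memory, the edited Service looked roughly like this (the selector is illustrative; leepaio-blog is the service name I use throughout):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: leepaio-blog
spec:
  type: LoadBalancer        # unchanged for now, so the current IP stays live
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    app: leepaio-blog       # illustrative selector
```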
To set up multiple ports you have to name them. The reason we had port 443 was because initially we set up the blog with the SSL to be terminated on the containers. This isn’t very efficient but we need it to remain while we work.
We don’t change the type at this point (i.e. it stays as a LoadBalancer) so that we keep things up and running on the current IP.
Once we’ve done this we need to create a firewall rule to allow Ingress services, as they are known, to reach the nodes. This is done using the gcloud tool:
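A sketch of that rule - the rule name is whatever you like, and the source range is Google's documented load balancer range:

```shell
# 'allow-glb-ingress' is an illustrative name
gcloud compute firewall-rules create allow-glb-ingress \
  --source-ranges 130.211.0.0/22 \
  --allow tcp:80,tcp:443
```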
The 130.211.0.0/22 range is the range given to load balancers in Google Cloud. It's the source range where all things will arrive from.
Once we have this we can set up our Ingress service! We have to use a .yaml file for this:
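The manifest was along these lines (extensions/v1beta1 was the Ingress API version at the time; the Ingress name is illustrative, the secret and service names are the ones from earlier):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-ingress        # illustrative name
spec:
  tls:
    - secretName: blog-tls  # the secret created earlier
  backend:
    serviceName: leepaio-blog
    servicePort: 80
```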
You’ll notice it’s not very complicated. We merely reference the secret we already created as well as the service we want to reach. To apply this we use the apply command:
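That's just:

```shell
kubectl apply -f ingress.yaml   # file name is whatever you saved the manifest as
```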
Easy! If you have everything right…
Now… Things to note. If you don’t see the https-forwarding-rule line - then your certificate was unable to load! It will still list the TLS section, which is a bit irritating.
So now I can verify that I can reach my blog via the IP I’ve got from the ingress service and that works just fine.
So, a quick update to Cloudflare and we’re off.
Small Nginx Tweak
I use the Rules Engine in Cloudflare to redirect from non-TLS to TLS. But if you can’t:
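A minimal sketch of that nginx tweak, keyed off the X-Forwarded-Proto header:

```nginx
# Redirect plain-HTTP requests; the load balancer sets X-Forwarded-Proto
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
```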
You’ll be pleased to hear that X-Forwarded-Proto is set by Google and you can use it!
Remove the old IP
Obviously I still need the service, but I now get to remove the public IP - that's as easy as changing the Type of the Service.
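Roughly, the Service becomes a NodePort one - same shape as before, different type (selector still illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: leepaio-blog
spec:
  type: NodePort            # no external LoadBalancer IP any more
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    app: leepaio-blog
```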
If you want, you can also remove the blog-tls entries, but I haven't got round to that yet.
Once again this is applied using the kubectl apply command. Once that's done, a quick describe service will show that there are no public IPs up any more.
That was fun! It was interesting doing this all from Windows (still not technically supported) and getting it all working wasn't that hard. There are a few issues with the kubectl 'edit' commands, but nothing that couldn't be worked around.
The thing I come up against a lot is that everything is really well documented; however, the guides seem all over the place. The documentation for the yaml files is 'written by an engineer, for an engineer' - hopefully good breakdowns will start to appear as more people use it!
So - why not?
I recently replaced my Macbook Air with a shiny Surface Pro 4. There are lots of downsides to doing such a thing - but also lots of upsides.
The form factor is awesome. Touch screens are great, and properly made displays are great. I think Apple have some catching up to do. But…
Suddenly I can’t do some things I’m used to doing. I need to fix that… part 1 is being able to post a blog entry from Windows. Now this sounds like it should be really simple. It’s not.
My blog runs on an older version of Octopress - lots of reasons for this and I don’t have the time to spend changing to something else. I actually like the workflow and I shouldn’t have to change it just because I changed OS.
To do this I needed the Windows Insider Preview. Worth noting you don’t need that to do the bit where I deploy the change to Google Cloud.
Bash for Windows and Octopress
This turned out to be a bit of a pain. Bash for Windows leaves everything with 0777 permissions - thanks, Windows! - and Bundler does not appreciate this. Also, Bash for Windows is an Ubuntu 14.04 runtime, and I'll be needing newer Ruby versions.
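My exact steps are lost, but the gist was building a newer Ruby inside the 14.04 runtime - something along these lines (tool choice and version here are illustrative):

```shell
sudo apt-get update
sudo apt-get install -y git build-essential libssl-dev libreadline-dev zlib1g-dev
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
rbenv install 2.2.3          # illustrative version
rbenv global 2.2.3
gem install bundler
```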
With that sorted I can now do the harder bit:
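Roughly (the paths are illustrative - the point is the gems live outside /mnt/c):

```shell
cd ~/blog-source                          # hypothetical checkout location
bundle install --path ~/.octopress-gems   # keep the deps out of /mnt/c
```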
Wait - why am I specifying the --path here? First, it's just sensible to keep the dependencies somewhere other than the main gem population. Secondly - and this bit is more important - /mnt/c/ is all 0777 perms and Bundler will, correctly, not appreciate that.
Once this is done I can go ahead and do my normal post creation.
Right, with that done I need to deploy this!
Docker Public Beta and Google Cloud SDK
This is a match made in heaven. First of all I enabled Hyper-V and installed the latest Docker Beta for Windows (which uses Hyper-V instead of VirtualBox). I then go ahead and install the Google SDK and the kubectl command using the commands provided in my previous post.
Snag: You need to have HOME set. Set it to your /Users/foo folder where foo is your username. It’s the default directory when you open a new command prompt so that shouldn’t be too difficult.
If you don’t set it the gcloud get credentials command will tell you that it needs to be set.
Now with all that done:
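The deploy itself was roughly this - the image name follows the 'overkill' project from my earlier post and is illustrative:

```shell
bundle exec rake generate
docker build -t gcr.io/overkill/leepaio-blog .
gcloud docker push gcr.io/overkill/leepaio-blog
```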
That’s it! Wrote a blog post on a blog generator that doesn’t work on Windows using the Container Engine from Google that doesn’t support Windows - all without leaving Windows.
So for fun I decided to move the blog off of Heroku. Wait what? Why? It’s mainly because I find it’s important to learn things and a great way of doing that is to actually do something.
I've been using Docker in work settings for a while now - in fact I recently deployed a sizable infrastructure with it. It made sense to try out Google Cloud Containers; massively overkill for this kind of blog. In fact I ended up calling the project 'overkill' in the Google Cloud Console.
To do any of this I needed to containerise the blog. Heroku does this transparently, so I obviously needed to Dockerfile it up - this was the initial version:
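I no longer have the exact file, but the initial version amounted to little more than nginx serving the generated site:

```dockerfile
FROM nginx
COPY public /usr/share/nginx/html
```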
With a quick docker run of this I had a blog running on localhost. Well, that was trivial.
I use CloudFlare to front my blog. This is because I'm lazy, but also because having DNSSEC and the like managed by them makes life a lot easier. They recently added a service called 'Origin Certificates' where they give you a certificate for free (though not from a public CA), which means you get end-to-end TLS. TLS is good… even for a blog, so we want to use that.
To make use of it, I create a folder called _docker, put the PEM/key files in it, and create my nginx.conf in that folder:
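The real file was a lot longer, but the shape of it was an HTTP-to-HTTPS redirect plus a TLS server block using the CloudFlare origin certificate (hostname and file names here are illustrative):

```nginx
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name blog.example.com;            # illustrative hostname
    ssl_certificate     /etc/nginx/ssl/origin.pem;
    ssl_certificate_key /etc/nginx/ssl/origin.key;
    root  /usr/share/nginx/html;
    index index.html;
}
```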
With this I can then modify my Dockerfile:
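The modified Dockerfile adds the config and the certificate material - roughly:

```dockerfile
FROM nginx
COPY _docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY _docker/origin.pem /etc/nginx/ssl/origin.pem
COPY _docker/origin.key /etc/nginx/ssl/origin.key
COPY public /usr/share/nginx/html
```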
So I make my overkill project in Google Cloud Console and then download gcloud SDK to my computer from the Google Cloud SDK Downloads page.
A note to the wise… do not use the apt-get version if you want to use the gcloud component installer. The curl approach, while insecure (for lots of reasons), doesn't ask for root - so it's safer than that Homebrew one-liner you're used to on OSX.
Before I started on my computer, I went to the console and created a cluster of 2 VMs that were of the ‘small’ instance type - I don’t need dedicated resources for this.
Once all done I needed the kubectl command. That’s easy:
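At the time that was a one-liner via the SDK's component installer:

```shell
gcloud components install kubectl   # 'components update kubectl' on older SDKs
```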
Now I have all the bits I need I can go ahead and build/push my container:
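Roughly (the image name follows my 'overkill' project and is illustrative):

```shell
docker build -t gcr.io/overkill/leepaio-blog .
gcloud docker push gcr.io/overkill/leepaio-blog
```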
I had called my container cluster ‘overkill’ so next step was to get the credentials and run it:
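That looked something like this (names are from my setup; kubectl run created the controller for me on the version I was using):

```shell
gcloud container clusters get-credentials overkill
kubectl run leepaio-blog --image=gcr.io/overkill/leepaio-blog --port=443
```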
Woah - that’s easy. I guess I want to see it from the outside world so let’s do that:
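Exposing it is one more command - a sketch, assuming kubectl run made a replication controller (newer kubectl versions make a deployment instead):

```shell
kubectl expose rc leepaio-blog --type=LoadBalancer --port=443
```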
After a bit, kubectl get services leepaio-blog gives me an external IP. I go to that IP (on https) in my browser. It works! Awesome.
Now I can just scale that to, say, 4 instances using:
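Again a sketch, assuming the replication controller from above:

```shell
kubectl scale rc leepaio-blog --replicas=4
```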
Very quick/easy. I can see why people enjoy using this.
Those of you who use Kubernetes already will notice this isn't far off the Hello World tutorial on the Kubernetes website. That's because, honestly, it's not much more than that. A simple website with no storage requirements is as simple as Hello World.
With a system administrator hat on, I think I would like to see better Puppet integration. There's some, but there are a few missing pieces. That way users can define their infrastructure in code - Kubernetes is part of that story. Having part of your infrastructure defined in yaml files for Kubernetes and part of it defined in either Puppet or Chef means it's disjointed, and larger organisations are obsessed with that Single Pane of Glass marketing buzz-phrase.
It has been a while since I've used Node.js for anything serious. To give you an idea of how long ago we're talking… I originally hacked together the Green Man Gaming stock control system in Node.js 0.1.x, and to this day it only runs on 0.2.x because of the way 0.4.x changed things way back - and it works, so no one dares upgrade it.
So I wanted to get up to date on Node and figured I'd have a quick go at parsing the deb Packages.gz file as a quick way to try out piping.
This is going to be very crude in places, I’m sure… but here was my quick hack around:
I know Streams were introduced way back in 0.8.x of Node.js - but… wow, that code is far simpler to understand than its counterparts in other languages. It also improved debugging, as I was able to debug each part of the pipe.
The only thing that took me a short while was 'what if the request fails?'. It turned out that I had to attach the 'error' handler at the point in the pipe where I added the gunzip stream. If I threw there, then I'd be all good.
Either way - as a post Dota2 TI5 final night bit of experimenting it was certainly worthwhile.
I use boot2docker a lot. By a lot I mean every day. A particular bug in boot2docker on OSX has led to me constantly having to destroy and rebuild my boot2docker VM. So… I’m leaving this here for people to Google/find (including me).
The symptom is the docker client throwing certificate errors at you whenever it talks to the VM.
Well that’s annoying. Normally I’d do this:
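That is, the nuke-everything route:

```shell
boot2docker delete   # kills the VM - and every image in it
boot2docker init
boot2docker up
```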
That has a serious downside - like losing all my images. So, hunting around, I found boot2docker (v1.4.1 and 1.5) bad cert issues on OSX 10.9.3, and a guy called garthk had the answer!
This is going to be a bit of a rant - but a rant prompted by something that came up recently, where someone was considering MongoDB.
I was just reading MongoDB Set to Become the ‘New Default’ Database…
Just… wow. Quite a bold statement there. To save people giving details on the form (another personal bugbear of mine… so I filled it with junk) - here’s the link to the relevant piece.
First things first let’s pick apart the minor error in the press release that eWeek clearly didn’t check up on.
All tests were performed with 400M records distributed across three servers, which represents a data set larger than RAM.
Our setup consisted of one database server and one client server to ensure the YCSB client was not competing with the database for resources. Both servers were identical.
Load 20M records using the “load” phase of YCSB
So that'd be mistake one… it wasn't three servers at all. That is a gross error, as the read statistics for Cassandra would be way off as a result. In fact, they say as much in the Conclusions.
We focused on single server performance in these tests. Multi-server deployments address high availability and scale out for all three databases. We believe that this introduces a different set of considerations, and that the trade offs may be quite different.
My point is that it looks like the creators of MongoDB have commissioned and paid for this report. If they haven't, then really the press release and news around it is tripe; and if they have… where's the disclosure of bias?
It's worth adding that the three databases tested are completely different! Cassandra, MongoDB and CouchBase each have very different use cases, so it's not overly fair to pit them against each other. Pitting MongoDB against CouchDB would be fairer - CouchBase is really CouchDB, but prettier and with a very, very clever caching front end on it.
I have deployed a large Cassandra and very large CouchDB set up. I wouldn’t use either one for the other’s workload.
Docker is a hot topic at the moment in the DevOps world. I use it almost every day and want to look at how automation can be achieved in terms of security and monitoring.
Containers in computing aren't new. In fact, FreeBSD had containers before Google was using them in Linux - although it calls them jails.
Docker is great in that it's brought containers to the masses. They were once the reserve of people with the patience to set up LXC on Linux or jails on FreeBSD (side note: jails are very painful - I might talk about that another time).
We can talk to Docker via its RESTful API, and libraries exist for almost every language. The two obvious popular ones are Go and Python - I say obvious, but it's more that I just prefer these two languages. I'm sure the Ruby one is awesome too.
The downside of Docker that's coming up more and more is managing the security of containers. People often use official images without a second thought, and these end up in production. There are already plenty of FUD-heavy posts on the topic - but in general, how do you ensure you keep your containers' operating system packages up to date?
Sounds like a task for a script. I broke it down into the following tasks:
- Connect to Docker (boot2docker in my case)
- Get a list of installed packages in debian:jessie image
- Get a list of packages from security.debian.org
- Compare the two
I should add that I used Python 3.4 for this - it may make the syntax look a little odd from a Python 2.x point of view, so it needed saying!
Let’s get connecting out of the way:
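A sketch of the connection using the docker-py Client of the era - the certificate paths and IP are what boot2docker reports on my machine and are illustrative:

```python
from docker import Client
from docker import tls

# Paths/IP come from boot2docker's shellinit output - illustrative values
CERTS = '/Users/me/.boot2docker/certs/boot2docker-vm'
tls_config = tls.TLSConfig(
    client_cert=(CERTS + '/cert.pem', CERTS + '/key.pem'),
    verify=False,  # sidestep the OpenSSL 1.0.2a problem; requests will moan
)
cli = Client(base_url='https://192.168.59.103:2376', tls=tls_config)
```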
It took me a short while to figure out the issue where OpenSSL 1.0.2a causes problems for quite a few libraries talking to APIs. To get around it for now I disable the verify part of requests - it'll complain a lot about it.
Now we’re connected we can make a container and get some stuff out:
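Container plumbing aside, the interesting part is turning dpkg-query's output into a dict. A sketch - the docker-py calls are commented out, as they need a running daemon:

```python
def parse_dpkg_csv(output):
    """Turn 'name,version' lines (from dpkg-query -W -f '${Package},${Version}\\n')
    into a dict keyed by package name."""
    installed = {}
    for line in output.splitlines():
        name, _, version = line.partition(',')
        if name:
            installed[name] = version
    return installed

# Roughly the docker-py side (cli being the Client from earlier):
# container = cli.create_container(image='debian:jessie',
#     command=['dpkg-query', '-W', '-f', '${Package},${Version}\n'])
# cli.start(container)
# cli.wait(container)
# installed = parse_dpkg_csv(cli.logs(container).decode())
```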
We now have a stack of packages in a dictionary keyed by package name. To do this we make good use of dpkg-query to get a CSV-like list of package,version pairs.
What we want next is a similar dict for up to date packages. Now, I know a lot of people who might read this would launch into apt-get update and then query the global list of packages. Would you do that in production? Really? You just want a list of stuff… Let’s just get it from security.debian.org directly.
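A sketch of the download, using r.raw as the file-like object GzipFile reads from (the exact URL path is from memory):

```python
import gzip

import requests

url = ('http://security.debian.org/dists/jessie/updates/'
       'main/binary-amd64/Packages.gz')
r = requests.get(url, stream=True)
packages_txt = gzip.GzipFile(fileobj=r.raw).read().decode('utf-8')
```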
A small point here… We make use of the gzip library directly to ungzip the file downloaded via Requests. To do this we use ‘r.raw’ like a file which GzipFile can use without any issue.
Now the format of this file is a bit weird. It’s a list of key value pairs for each package with a blank line between packages. The two keys we’re interested in for each package are Package (the name) and Version.
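A sketch of the stanza parser - split on blank lines, keep just Package and Version:

```python
def parse_packages(text):
    """Parse the Packages format: key/value stanzas separated by blank lines."""
    updates = {}
    current = {}
    for line in text.splitlines() + ['']:  # trailing '' flushes the last stanza
        if not line.strip():
            if 'Package' in current and 'Version' in current:
                updates[current['Package']] = current['Version']
            current = {}
        elif ':' in line and not line.startswith(' '):
            key, _, value = line.partition(':')
            current[key] = value.strip()
    return updates
```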
Perfect! We now have a dict with all the security updates in Jessie keyed by the package name again.
With these two dicts we can intersect them and only keep elements that are in both; if the versions don't match, spit it out. I had to fake an update to test this properly, as there were no out-of-date packages when I ran it.
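With the installed-packages dict and the security-updates dict in hand, the intersection is a few lines - a sketch:

```python
def out_of_date(installed, updates):
    """Packages present in both dicts whose versions differ."""
    stale = {}
    for name in set(installed) & set(updates):
        if installed[name] != updates[name]:
            stale[name] = (installed[name], updates[name])
    return stale
```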
And, there we have it (spot the whining from requests…):
Awesome! There we have it - a quick way to grab and compare packages against containers.
It's about time I updated this site. I go through stints of bothering with it, which I find is very common among people who still blog.
However, as I’m using Twitter less and less (can’t put my finger on why) and I like to keep my Facebook more private than most… it’s about time I bothered once more.
So, new theme. Went off, got the Octostrap theme. It’s awesome and well worth it.
I did look at Octopress 3 - but I don't like the way it works. The rake-based approach still works for me, and it seems like separating things for the sake of doing so… a bit like something Hubot has done over the past year too.
As for the ‘new start’ - I’m going to try and blog more. Adding to that I do have a Tumblr I post random things to as well which may be more up to date.
In fact - ways to find stuff I’m doing are:
And because it does happen…
I probably spend way too much time configuring my VIM setup. It tends to change depending on what I’m working on. So, at the moment the following things matter to me most:
There would be Scala, but I use the excellent IntelliJ IDEA product for that. Nothing can beat it, so there’s no point trying to get VIM to do it.
It matters to me that my editor works cross-platform too. I'm not so fussed about VIM on Windows (although it's nice when that works too), but more between OSX and Linux, as they are the two main operating systems I use.
So I felt I’d do a post about how I manage my VIM config as it may/may not be useful for others.
Let’s start nice and empty:
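From memory, it was along these lines - everything lives in ~/.vim and the vimrc is symlinked into place, which is what makes the Git trick work:

```shell
mkdir -p ~/.vim/bundle
touch ~/.vim/vimrc
ln -s ~/.vim/vimrc ~/.vimrc
```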
Why do this? Well, simply put - this way your .vim folder can be easily stored in Git or another VCS you fancy. Job done!
Right, so what next? vundle all the things.
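Installing Vundle itself is a single clone, into the path its README uses:

```shell
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
```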
Now you need a small bit at the top of your .vimrc file.
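That's the standard Vundle bootstrap - roughly:

```vim
set nocompatible
filetype off

" Load Vundle and let it manage itself
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
Plugin 'VundleVim/Vundle.vim'
" ...other Plugin lines go here...
call vundle#end()

filetype plugin indent on
```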
Now we have a basis of a working VIM we can work on. Let’s set up some cool stuff now…
Some obvious bootstrap things
By default, VIM likes to behave a little bit old fashioned. We want some niceties from the off - so let’s do that:
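An abridged sketch of those niceties - my full list was longer, but this is the shape of it:

```vim
syntax on
set number
set ruler
set hlsearch
set incsearch

" Two spaces by default...
set expandtab
set tabstop=2
set shiftwidth=2
set softtabstop=2

" ...but four for Python
au FileType python setl tabstop=4 shiftwidth=4 softtabstop=4
```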
You’ll notice that 2 spaces is the default but, obviously, Python is a good example of a language that uses 4.
This way, you'll see, we get to customise each language. It's nice. 'au' is short for autocmd - as in, automatically run this when the FileType is python.
This is the Batman utility belt. It’s also easy to set up and serves as a good example of how Vundle works.
Job done. Make sure this goes between the Vundle begin and end calls.
Now save that and we'll reload/install:
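That's Vundle's command-line install:

```shell
vim +PluginInstall +qall
```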
This will load up vim, install all the things and then exit when done.