This falls under the #protip category of things…

If you’re using the Docker Beta for Windows (the Hyper-V one) and you find that you can’t do any docker pull - try disabling Malicious Website Protection in your Anti-virus/Anti-Malware suite. Yep - turns out these things which filter traffic might actually cause a problem for you!

On the forums people kept thinking it was to do with IPv6 - it’s not.

It’s now just annoying that I have to keep turning it back on after doing dev work.

Now that I have the blog up on Google Container Engine I realise it’s very basic. That’s fine - but having followed the tutorial-style way of getting it online using nginx (with the TLS offloaded there), we should really migrate it to a proper Load Balancer where Google can offload the TLS and we can take proper advantage of their global load balancing platform.

I wanted to do this without stopping anything - just like I would with something in production. So here goes…

Secrets

First and foremost we have to upload our Certificate/Key combo to Kubernetes via the create secret command.

$ kubectl create secret generic tls-cloudflare --from-file=tls.crt --from-file=tls.key

One really important thing to note: the files must be called tls.crt and tls.key! I found this out the hard way - it is documented, but not very clearly.
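If you want to check what the secret ended up with, describing it should list tls.crt and tls.key under Data (the name here is the one we just created):

$ kubectl describe secret tls-cloudflare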

Open up port 80 on the nodes

To do this I needed to edit the service. This was the bit I got confused about initially as the ‘service’ also describes the LoadBalancer. Turns out this is fine and we’ll later come back and make it not be an external LoadBalancer. For now:

ports:
  - name: blog-tls
    nodePort: 31879
    port: 443
    protocol: TCP
    targetPort: 443
  - name: blog-plain
    nodePort: 31878
    port: 80
    protocol: TCP
    targetPort: 80

To set up multiple ports you have to name them. The reason we have port 443 is that we initially set up the blog with TLS terminated on the containers. That isn’t very efficient, but it needs to remain while we work.

We don’t change the type at this point (i.e. it stays as a LoadBalancer) so that we keep things up and running on the current IP.

Once we’ve done this we need to create a firewall rule to allow the Ingress, as these are known - or rather the Google load balancer behind it - to reach the nodes on their NodePorts. This is done using the gcloud tool:

$ gcloud compute firewall-rules create allow-130-211-0-0-22-31878 --source-ranges 130.211.0.0/22 --allow tcp:31878

The 130.211.0.0/22 range is the range used by Load Balancers in Google Cloud. It’s the source range that all of the load balancer traffic (including health checks) arrives from.
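If you want to sanity check the rule afterwards, describing it by the name we gave it should show the source range and the allowed port:

$ gcloud compute firewall-rules describe allow-130-211-0-0-22-31878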

Once we have this we can set up our Ingress! We have to use a .yaml file for this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: leepaio-blog
spec:
  tls:
  - secretName: tls-cloudflare
  backend:
    serviceName: leepaio-blog
    servicePort: 80

You’ll notice it’s not very complicated. We merely reference the secret we already created as well as the service we want to reach. To apply this we use the apply command:

$ kubectl apply -f ingress.yaml

Easy! If you have everything right…
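The output below is what describing the Ingress gives back (the name matches the manifest above):

$ kubectl describe ingress leepaio-blog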

Name:                   leepaio-blog
Namespace:              default
Address:                x.x.x.x
Default backend:        leepaio-blog:80 (10.0.0.7:80,10.0.0.8:80,10.0.1.5:80 + 1 more...)
TLS:
  tls-cloudflare terminates
Rules:
  Host  Path    Backends
  ----  ----    --------
Annotations:
  backends:                     {"k8s-be-31878":"HEALTHY"}
  forwarding-rule:              k8s-fw-default-leepaio-blog
  https-forwarding-rule:        k8s-fws-default-leepaio-blog
  https-target-proxy:           k8s-tps-default-leepaio-blog
  static-ip:                    k8s-fw-default-leepaio-blog
  target-proxy:                 k8s-tp-default-leepaio-blog
  url-map:                      k8s-um-default-leepaio-blog

Now… Things to note. If you don’t see the https-forwarding-rule line - then your certificate was unable to load! It will still list the TLS section, which is a bit irritating.

So now I can verify that I can reach my blog via the IP I’ve got from the Ingress, and that works just fine.
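A quick way to do that check from the command line (-k because the Cloudflare Origin certificate isn’t signed by a public CA; &lt;ingress-ip&gt; is a placeholder for the address from the Ingress):

$ curl -skI https://&lt;ingress-ip&gt;/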

So, a quick update to Cloudflare and we’re off.

Small Nginx Tweak

I use the Rules Engine in Cloudflare to redirect from non-TLS to TLS. But if you can’t do that at the edge, a small addition inside the nginx server block does the job:

# Redirect http to https if the X-Forwarded-Proto header says the request arrived over http
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}

You’ll be pleased to hear that X-Forwarded-Proto is set by Google and you can use it!
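A quick local sanity check of that redirect, assuming the container is published on localhost port 8080 for testing:

$ curl -sI -H 'X-Forwarded-Proto: http' http://localhost:8080/ | grep -i '^location'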

Remove the old IP

Obviously I still need the service, but I now get to remove the public IP - that’s as easy as changing the Type of the Service.

ports:
  - name: blog-tls
    nodePort: 31879
    port: 443
    protocol: TCP
    targetPort: 443
  - name: blog-plain
    nodePort: 31878
    port: 80
    protocol: TCP
    targetPort: 80
type: NodePort  # <-- this was LoadBalancer

If you want you can also remove the blog-tls entries but I didn’t get round to that yet.

Once again this is applied using the kubectl apply command. Once that’s done, a quick describe of the service will show that there are no public IPs up any more.
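A minimal sketch of those two steps, assuming the Service manifest lives in service.yaml:

$ kubectl apply -f service.yaml
$ kubectl describe service leepaio-blog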

Conclusions

That was fun! It was interesting doing this all from Windows (still not supported, technically) and getting it all working wasn’t that hard. There are a few issues with the kubectl ‘edit’ commands but nothing that couldn’t be worked around.

The thing I come up against a lot is that everything is really well documented; however the guides seem to be all over the place. The documentation for the yaml files is ‘written by an engineer for an engineer’ - hopefully better breakdowns will start to exist more and more as people use it!

So - why not?

I recently replaced my MacBook Air with a shiny Surface Pro 4. There are lots of downsides to doing such a thing - but also lots of ups.

The form factor is awesome. Touch Screens are great and properly made displays are great. I think Apple have some catching up to do. But…

Suddenly I can’t do some things I’m used to doing. I need to fix that… part 1 is being able to post a blog entry from Windows. Now this sounds like it should be really simple. It’s not.

My blog runs on an older version of Octopress - lots of reasons for this and I don’t have the time to spend changing to something else. I actually like the workflow and I shouldn’t have to change it just because I changed OS.

To do this I needed the Windows Insider Preview. Worth noting you don’t need that to do the bit where I deploy the change to Google Cloud.

Bash for Windows and Octopress

This turned out to be a bit of a pain. Bash for Windows sets its umask to 0000, so everything ends up with 0777 permissions - thanks Windows! Bundler does not appreciate this. Also Bash for Windows is an Ubuntu 14.04 runtime - and I’ll be needing to manage Ruby versions, hence rbenv below.

$ sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
$ cd
$ git clone https://github.com/rbenv/rbenv.git ~/.rbenv
$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
$ echo 'eval "$(rbenv init -)"' >> ~/.bashrc
$ exec $SHELL
$ git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
$ echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
$ exec $SHELL
$ rbenv install 2.2.1

With that sorted I can now do the harder bit:

$ umask u=rwx,g=rx,o=rx  # This changes it for this session only
$ cd /mnt/c/Users/leepa/Documents/octopress
$ bundle install --path=~/octopress/vendor/gems

Wait - why am I specifying the --path here? Okay, first, it’s just sensible to keep the deps somewhere other than the main gem population. Secondly, and this bit is more important: the /mnt/c/ mount is all 0777 perms and bundler will, correctly, not appreciate that.

Once this is done I can go ahead and do my normal post creation.

Right, with that done I need to deploy this!

Docker Public Beta and Google Cloud SDK

This is a match made in heaven. First of all I enabled Hyper-V and installed the latest Docker Beta for Windows (which uses Hyper-V instead of VirtualBox). I then went ahead and installed the Google Cloud SDK and the kubectl command using the commands provided in my previous post.

Snag: You need to have HOME set. Set it to your C:\Users\foo folder, where foo is your username. It’s the default directory when you open a new command prompt so that shouldn’t be too difficult.

If you don’t set it the gcloud get credentials command will tell you that it needs to be set.
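A minimal sketch of setting it from a Windows command prompt (foo stands in for your username; setx persists it for future prompts):

$ set HOME=C:\Users\foo
$ setx HOME C:\Users\foo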

Now with all that done:

Bash:

$ bundle exec rake generate

Windows:

$ docker build -t gcr.io/PROJECT_ID/leepaio-blog:v3 .
$ gcloud docker push gcr.io/PROJECT_ID/leepaio-blog:v3
$ kubectl edit deployment
... edit the relevant container line, save and exit

That’s it! Wrote a blog post on a blog generator that doesn’t work on Windows using the Container Engine from Google that doesn’t support Windows - all without leaving Windows.

Mission accomplished!

So for fun I decided to move the blog off of Heroku. Wait what? Why? It’s mainly because I find it’s important to learn things and a great way of doing that is to actually do something.

Docker

I’ve been using Docker in work settings for a while now - in fact I recently deployed a sizable infrastructure with it. It made sense to try out Google Container Engine; massively overkill for this kind of blog. In fact I ended up calling the project ‘overkill’ in Google Cloud Console.

To do any of this - I needed to containerise the blog. Heroku does this transparently so I obviously needed to Dockerfile it up - this was the initial version:

FROM nginx:stable-alpine
COPY public /usr/share/nginx/html

With a quick docker run of this I had a blog running on localhost. Well that was trivial.
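That quick check looked something like this - the image tag and host port here are just my choices for local testing:

$ docker build -t leepaio-blog .
$ docker run --rm -p 8080:80 leepaio-blog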

TLS

I use CloudFlare to front my blog. This is because I’m lazy but also because DNSSEC and the like being managed by them makes life a lot easier. They recently added a service called ‘Origin Certificates’ where they give you a certificate for free (but not from a public CA) which means you get end-to-end TLS. TLS is good… even for a blog, so we want to use that.

To make use of it, I create a folder called _docker, put the PEM/key files in it and create my nginx.conf in that folder:

user nginx;
worker_processes 1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        ssl_certificate /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/cert.key;
        ssl_session_timeout 1d;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;

        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        ssl_prefer_server_ciphers on;

        add_header Strict-Transport-Security max-age=15768000;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
    }
}

With this I can then modify my Dockerfile:

FROM nginx:stable-alpine
COPY public /usr/share/nginx/html
COPY _docker/nginx.conf /etc/nginx/nginx.conf
COPY _docker/cert.pem /etc/nginx/cert.pem
COPY _docker/cert.key /etc/nginx/cert.key

Google Cloud

So I make my overkill project in Google Cloud Console and then download gcloud SDK to my computer from the Google Cloud SDK Downloads page.

A note to the wise… do not use the apt-get version if you want to use the gcloud component installer. The curl approach, while insecure (for lots of reasons), doesn’t ask for root, so it’s safer than that Homebrew one-liner you’re used to on OSX.
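For reference, the interactive installer route is roughly this (check the Google Cloud SDK docs for the current URL before piping anything into bash):

$ curl https://sdk.cloud.google.com | bash
$ exec -l $SHELL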

Before I started on my computer, I went to the console and created a cluster of 2 VMs that were of the ‘small’ instance type - I don’t need dedicated resources for this.

Once all done I needed the kubectl command. That’s easy:

$ gcloud components install kubectl

Now that I have all the bits I need, I can go ahead and build/push my container:

$ docker build -t gcr.io/PROJECT_ID/leepaio-blog:v1 .
$ gcloud docker push gcr.io/PROJECT_ID/leepaio-blog:v1

I had called my container cluster ‘overkill’, so the next step was to get the credentials and run it:

$ gcloud container clusters get-credentials overkill
$ kubectl run leepaio-blog --image=gcr.io/PROJECT_ID/leepaio-blog:v1 --port=443
$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
leepaio-blog  1         1         1            1           1m
$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
leepaio-blog-2719256167-7z1cq  1/1       Running   0          1m

Woah - that’s easy. I guess I want to see it from the outside world so let’s do that:

$ kubectl expose deployment leepaio-blog --type="LoadBalancer"

After a bit, kubectl get services leepaio-blog gives me an external IP. I go to that IP (on https) in my browser. It works! Awesome.

Now I can just scale that to, say, 4 instances using:

$ kubectl scale deployment leepaio-blog --replicas=4

Very quick/easy. I can see why people enjoy using this.

Closing thoughts

Those of you that use Kubernetes already will notice this isn’t far off the Hello World tutorial on the Kubernetes website. That’s because, honestly, it isn’t. A simple website with no storage requirements is about as simple as Hello World gets.

With a system administrator hat on, I think I would like to see better Puppet integration. There’s some, but there are a few missing pieces. That way users can define their infrastructure in code - Kubernetes is part of that story; but having part of your infrastructure defined in yaml files for Kubernetes and part of it defined in either Puppet or Chef is disjointed, and larger organisations are obsessed with that Single Pane of Glass marketing buzz phrase.

It has been a while since I’ve used Node.js for anything serious. To give you an idea of how long ago we’re talking… I originally hacked together the Green Man Gaming stock control system in Node.js 0.1.x, and to this day it only runs on 0.2.x because of the way 0.4.x changed things way back - and it works, so no one dares upgrade it.

Since then, the Node.js ecosystem has matured and although people make fun of it… Node.js is heavily used on things at massive scale. I’d also like to think I’ve learnt to write better Javascript since working on TweetDeck… but let’s not get ahead of ourselves!

So I wanted to get up to date on Node and figured I’d have a go at parsing a Debian Packages.gz file as a quick way to try out piping.

This is going to be very crude in places, I’m sure… but here was my quick hack around:

// There be lots of dragons...
var zlib = require('zlib');

var R = require('request'),
    _ = require('lodash'),
    through2 = require('through2'),
    es = require('event-stream');

var url = 'http://security.debian.org/dists/jessie/updates/main/binary-amd64/Packages.gz';

var all = [];

R.get(url)
    .pipe(zlib.createGunzip())
    .on('error', function() {
        console.log("it's all broke yo");
    })
    // Split by each 'object' in the packages file
    .pipe(es.split(/\n\n/))
    .pipe(through2.obj(function(chunk, enc, callback) {
        // We'll get an empty chunk at the end
        if (chunk === "") {
            callback();
            return;
        }
        // Create kv Pairs from each line
        var kvPairs = _.map(chunk.split('\n'), function(line) {
            return line.split(': ');
        });
        // Lower case the attributes - cos that's better
        var fixedAttribs = _.map(kvPairs, function(obj) {
            return [obj[0].toLowerCase(), obj[1]]
        });

        // zipObject with an array of pairs is lodash 3.x behaviour (it's fromPairs in lodash 4)
        this.push(_.zipObject(fixedAttribs));
        callback();
    }))
    .on('data', function(data) {
        all.push(data)
    })
    .on('end', function () {
        console.log(JSON.stringify(all));
    });

I know Streams have been in Node.js since way back - but… wow, that code is far simpler to understand than its counterparts in other languages. It also made debugging easier - I was able to debug each part of the pipe.

The only thing that took me a short while was ‘what if the request fails?’. It turned out that I had to attach the ‘error’ handler at the point in the pipe where I added the gunzip stream - handle it there and I’m all good.

Either way - as a post Dota2 TI5 final night bit of experimenting it was certainly worthwhile.

I use boot2docker a lot. By a lot I mean every day. A particular bug in boot2docker on OSX has led to me constantly having to destroy and rebuild my boot2docker VM. So… I’m leaving this here for people to Google/find (including me).

The symptom is:

➜  Code  docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): darwin/amd64
An error occurred trying to connect: Get https://192.168.59.103:2376/v1.19/version: x509: certificate is valid for 127.0.0.1, 10.0.2.15, not 192.168.59.103

Well that’s annoying. Normally I’d do this:

$ boot2docker halt
$ boot2docker destroy
$ boot2docker up

That has a serious downside. Like losing all my images. So hunting around I found boot2docker(v 1.4.1 and 1.5) bad cert issues on OSX 10.9.3 and a guy called garthk had the answer!

$ boot2docker ssh
$ sudo curl -o /var/lib/boot2docker/profile https://gist.githubusercontent.com/garthk/d5a17007c277aa5c76de/raw/3d09c77aae38b4f2809d504784965f5a16f2de4c/profile
$ sudo halt
$ # Now load up VirtualBox and manually power off the VM
$ boot2docker up

Job done!

➜  Code  docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): darwin/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64

This is going to be a bit of a rant - but a rant prompted by something that came up recently, where someone was considering MongoDB.

I was just reading MongoDB Set to Become the ‘New Default’ Database

Just… wow. Quite a bold statement there. To save people giving details on the form (another personal bugbear of mine… so I filled it with junk) - here’s the link to the relevant piece.

HIGH PERFORMANCE BENCHMARKING: MongoDB and NoSQL Systems

First things first let’s pick apart the minor error in the press release that eWeek clearly didn’t check up on.

All tests were performed with 400M records distributed across three servers, which represents a data set larger than RAM.

Ok…

Our setup consisted of one database server and one client server to ensure the YCSB client was not competing with the database for resources. Both servers were identical.

And…

Load 20M records using the “load” phase of YCSB

So that’d be mistake one… it wasn’t three servers at all. That is a gross error, as the read statistics for Cassandra would be way off as a result. In fact they say as much in the Conclusions.

We focused on single server performance in these tests. Multi-server deployments address high availability and scale out for all three databases. We believe that this introduces a different set of considerations, and that the trade offs may be quite different.

My point is that it looks like the creators of MongoDB have commissioned and paid for this report. If they haven’t, then really the press release and the news around it is tripe; and if they have… where’s the declaration of bias?

It’s worth adding that the three databases tested are completely different! Cassandra, MongoDB and CouchBase each have very different use cases. It’s not overly fair to pit them against each other. If you were to pit MongoDB and CouchDB against each other, that would be fairer. CouchBase is really CouchDB but prettier and with a very very clever caching front end on it.

I have deployed a large Cassandra and very large CouchDB set up. I wouldn’t use either one for the other’s workload.

Rant over…

Docker is a hot topic at the moment in the DevOps world. I use it almost every day and want to look at how automation can be achieved in terms of security and monitoring.

Containers in computing aren’t new. In fact FreeBSD had containers before Google was using them in Linux; although it calls them jails.

Docker is great in that it’s brought containers to the masses - once the preserve of people with the patience to set up LXC on Linux or the painful jails on FreeBSD (side note: it’s very painful, I might talk about that another time).

We can talk to Docker via its RESTful API and libraries exist for almost every language. The two obvious ones are Go and Python - I say obvious, but it’s more that I just prefer these two languages. I’m sure the Ruby one is awesome too.

The downside of Docker that’s coming up more and more is managing the security of containers. People often just use official images without a second thought and these end up in production. There are posts containing loads of FUD on the topic already - but in general, how do you ensure you keep your containers’ operating system packages up to date?

Sounds like a task for a script. I broke it down into the following tasks:

  • Connect to Docker (boot2docker in my case)
  • Get a list of installed packages in debian:jessie image
  • Get a list of packages from security.debian.org
  • Compare the two

I should add that I used Python 3.4 for this - some of the syntax will look a little odd from a Python 2.x point of view, so it needed saying!

Let’s get connecting out of the way:

from docker.client import Client
from docker.utils import kwargs_from_env

# So we can use boot2docker
kwargs = kwargs_from_env()
kwargs['tls'].assert_hostname = False
kwargs['tls'].verify = False  # Workaround https://github.com/docker/docker-py/issues/465

# Set up the client
client = Client(**kwargs)

It took me a little while to figure out the issue where OpenSSL 1.0.2a causes problems with quite a few libraries when talking to APIs. To get around it for now I disable the verify part of requests - it’ll complain a lot about it.

Now we’re connected we can make a container and get some stuff out:

# Create my Debian Jessie container
container = client.create_container(
    image='debian:jessie',
    stdin_open=True,
    tty=True,
    command="/usr/bin/dpkg-query -Wf '${Package},${Version}\n'",
)

# Launch it with the custom command
client.start(container)

# Wait for the command to finish, then grab the dpkg-query output
client.wait(container)
output = client.logs(container).decode("utf-8")

packages = {}

lines = output.split('\n')
lines.pop(0)
for line in lines:
    # Last line is a blank
    if not line:
        continue

    k, v = line.split(',')
    packages[k] = v

We now have a stack of packages in a dictionary keyed by package name. To do this we make good use of dpkg-query to get a CSV-like list of package,version.

What we want next is a similar dict for up to date packages. Now, I know a lot of people who might read this would launch into apt-get update and then query the global list of packages. Would you do that in production? Really? You just want a list of stuff… Let’s just get it from security.debian.org directly.

import gzip
import re

import requests

r = requests.get('http://security.debian.org/dists/jessie/updates/main/binary-amd64/Packages.gz', stream=True)
gz = gzip.GzipFile(fileobj=r.raw)

A small point here… We make use of the gzip library directly to ungzip the file downloaded via Requests. To do this we use ‘r.raw’ like a file object, which GzipFile can read without any issue.

Now the format of this file is a bit weird. It’s a list of key value pairs for each package with a blank line between packages. The two keys we’re interested in for each package are Package (the name) and Version.
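For illustration, each stanza looks roughly like this (the field values here are illustrative; only Package and Version matter to us):

Package: sed
Version: 4.2.2-4+b1
Installed-Size: 799
Architecture: amd64

Package: ...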

kvregex = re.compile(r'(\w+): (.*)')
security_updates = {}

current_package = None
current_version = None

# Build up a security updates dict
for line in gz.readlines():
    m = kvregex.match(line.decode("utf-8"))
    if not m:
        security_updates[current_package] = current_version
        continue

    g = m.groups()
    if g[0] == 'Package':
        current_package = g[1]
    elif g[0] == 'Version':
        current_version = g[1]

r.close()

Perfect! We now have a dict with all the security updates in Jessie keyed by the package name again.

With these two dicts we can intersect them and only get elements that are in both. If the version doesn’t match, spit it out. I had to fake an update to test this properly, as when I ran it there were no out-of-date packages.

def common_entries(*dcts):
    for i in set(dcts[0]).intersection(*dcts[1:]):
        yield (i,) + tuple(d[i] for d in dcts)

# Fake a security update for Sed... (cos Jessie 8.1 is quite up to date)
security_updates['sed'] = '4.2.2-4+b2'

for entry in common_entries(packages, security_updates):
    if entry[1] != entry[2]:
        print('%s is %s and a security update exists for %s' % entry)

And, there we have it (spot the whining from requests…):

$ python foobar.py
.../venv/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
.../venv/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
.../venv/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
sed is 4.2.2-4+b1 and a security update exists for 4.2.2-4+b2

Awesome! There we have it - a quick way to grab and compare packages against containers.

It’s about time I updated this site. I go through stints of bothering with it, which I find is very common with a lot of people who still blog.

However, as I’m using Twitter less and less (can’t put my finger on why) and I like to keep my Facebook more private than most… it’s about time I bothered once more.

So, new theme. Went off, got the Octostrap theme. It’s awesome and well worth it.

I did look at Octopress 3 - but I don’t like the way it works. The approach of using rake still works for me and it seems like separating things for the sake of doing so… bit like something Hubot has done over the past year too.

As for the ‘new start’ - I’m going to try and blog more. Adding to that I do have a Tumblr I post random things to as well which may be more up to date.

In fact - ways to find stuff I’m doing are:

And because it does happen…

I probably spend way too much time configuring my VIM setup. It tends to change depending on what I’m working on. So, at the moment the following things matter to me most:

There would be Scala, but I use the excellent IntelliJ IDEA product for that. Nothing can beat it, so there’s no point trying to get VIM to do it.

It matters to me that my editor works cross platform too. Not fussed so much about VIM on Windows (although it’s nice when that works too) but more between OSX and Linux as they are the main two Operating Systems I use.

So I felt I’d do a post about how I manage my VIM config as it may/may not be useful for others.

Setup

Let’s start nice and empty:

mv ~/.vim ~/.vim.old
mv ~/.vimrc ~/.vimrcold
mkdir ~/.vim
touch ~/.vim/myvimrc
ln -s ~/.vim/myvimrc ~/.vimrc

Why do this? Well, simply put - this way your .vim folder can be easily stored in Git or another VCS you fancy. Job done!
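A minimal sketch of that with Git (the commit message is just an example):

cd ~/.vim
git init
git add myvimrc
git commit -m "Initial vim config"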

Right, so what next? vundle all the things.

mkdir ~/.vim/bundle
git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim

Now you need a small bit at the top of your .vimrc file.

vi ~/.vimrc
set nocompatible              " be iMproved, required
filetype off                  " required

" set the runtime path to include Vundle and initialize
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()

" let Vundle manage Vundle, required
Plugin 'gmarik/Vundle.vim'

" Your stuff is going to go here...

" All of your Plugins must be added before the following line
call vundle#end()            " required
filetype plugin indent on    " required

Now we have the basis of a working VIM to build on. Let’s set up some cool stuff now…

Some obvious bootstrap things

By default, VIM likes to behave a little bit old fashioned. We want some niceties from the off - so let’s do that:

set expandtab     " Soft tabs all the things
set tabstop=2     " 2 spaces is used almost everywhere now
set shiftwidth=2  " When using >> then use 2 spaces
set autoindent    " Well, obviously
set smartindent   " As opposed to dumb indent

set noautowrite
set number
set autoread      " Read changes from underlying file if it changes
set showmode      " Showing current mode is helpful++
set showcmd
set nocompatible  " Actually make this vim
set ttyfast       " We don't use 33.6 modems these days
set ruler

set incsearch     " Use incremental search like OMG yes
set ignorecase    " Ignore case when searching
set hlsearch      " Highlight searching
set showmatch     " Show me where things match
set diffopt=filler,iwhite "Nice diff options
set showbreak=↪   " Cooler linebreak (the ↪ marker is just one choice)
set noswapfile    " It's 2014, GO AWAY FFS

set esckeys       " Allow escape key in insert mode
set cursorline    " Highlight the line we're on
set encoding=utf8 " Really, people still use ASCII

You’ll notice that 2 spaces is the default but, obviously, Python is a good example of a language that uses 4.

au FileType python setlocal tabstop=8 expandtab shiftwidth=4 softtabstop=4

This way, you’ll see, we get to customise each language. It’s nice. ‘au’ is short for autocmd - as in, automatically run this when the FileType is python.

Syntastic

This is the Batman utility belt. It’s also easy to set up and serves as a good example of how Vundle works.

Plugin 'scrooloose/syntastic'

Job done. Make sure this goes between the Vundle begin and end calls.

Now save that and we can install the plugins from the command line:

vim -c "execute \"PluginInstall\" | q | q"

This will load up vim, install all the things and then exit when done.