Now that I have the blog up on Google Container Engine I realise it’s very basic. That’s fine - but we followed the tutorial-style way of getting it online using nginx (with the TLS offloaded there), so we should really migrate it to a proper Load Balancer where Google can offload the TLS and we can take full advantage of their global load-balancing platform.

I wanted to do this without stopping anything - just like I would with something in production. So here goes…

Secrets

First and foremost we have to upload our Certificate/Key combo to Kubernetes via the create secret command.

$ kubectl create secret generic tls-cloudflare --from-file=tls.crt --from-file=tls.key

Really important to note: the files must be called tls.crt and tls.key! I found this out the hard way - it is documented, but not very clearly.
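If your CA handed you the certificate and key under different names, a quick rename before creating the secret does the trick (the original file names below are made up for illustration):

```shell
# Hypothetical names from the CA; the targets must be exactly
# tls.crt and tls.key, because the file names become the keys in the secret
cp origin-cert.pem tls.crt
cp origin-key.pem tls.key
kubectl create secret generic tls-cloudflare --from-file=tls.crt --from-file=tls.key
```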

Open up port 80 on the nodes

To do this I needed to edit the service. This was the bit that initially confused me, as the ‘service’ also describes the LoadBalancer. It turns out this is fine, and we’ll come back later and make it not be an external LoadBalancer. For now:

ports:
  - name: blog-tls
    nodePort: 31879
    port: 443
    protocol: TCP
    targetPort: 443
  - name: blog-plain
    nodePort: 31878
    port: 80
    protocol: TCP
    targetPort: 80

To set up multiple ports you have to name them. We have port 443 because we initially set up the blog with TLS terminated on the containers. That isn’t very efficient, but it needs to stay in place while we work.

We don’t change the type at this point (i.e. it stays as a LoadBalancer) so that we keep things up and running on the current IP.

Once we’ve done this we need to create a firewall rule to allow Ingress services, as they are known, to reach the nodes. This is done using the gcloud tool:

$ gcloud compute firewall-rules create allow-130-211-0-0-22-31878 --source-ranges 130.211.0.0/22 --allow tcp:31878

The 130.211.0.0/22 range is the range Google Cloud assigns to its Load Balancers; it’s the source range all load-balancer traffic will arrive from.
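If you want to double-check that the rule landed, gcloud can describe it back to you (the name matches the rule created above):

```shell
# Show the firewall rule we just created, including its source ranges
gcloud compute firewall-rules describe allow-130-211-0-0-22-31878
```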

Once we have this we can set up our Ingress service! We have to use a .yaml file for this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: leepaio-blog
spec:
  tls:
  - secretName: tls-cloudflare
  backend:
    serviceName: leepaio-blog
    servicePort: 80

You’ll notice it’s not very complicated. We merely reference the secret we already created as well as the service we want to reach. To apply this we use the apply command:

$ kubectl apply -f ingress.yaml

Easy! If you have everything right, a kubectl describe on the ingress will show something like…

Name:                   leepaio-blog
Namespace:              default
Address:                x.x.x.x
Default backend:        leepaio-blog:80 (10.0.0.7:80,10.0.0.8:80,10.0.1.5:80 + 1 more...)
TLS:
  tls-cloudflare terminates
Rules:
  Host  Path    Backends
  ----  ----    --------
Annotations:
  backends:                     {"k8s-be-31878":"HEALTHY"}
  forwarding-rule:              k8s-fw-default-leepaio-blog
  https-forwarding-rule:        k8s-fws-default-leepaio-blog
  https-target-proxy:           k8s-tps-default-leepaio-blog
  static-ip:                    k8s-fw-default-leepaio-blog
  target-proxy:                 k8s-tp-default-leepaio-blog
  url-map:                      k8s-um-default-leepaio-blog

Now… Things to note. If you don’t see the https-forwarding-rule line, then your certificate failed to load! It will still list the TLS section regardless, which is a bit irritating.

So now I can verify that I can reach my blog via the IP I’ve got from the ingress service and that works just fine.
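A quick check from outside the cluster might look like this (x.x.x.x stands in for the Address shown by the ingress, -k is needed because the Cloudflare origin certificate isn’t in the system trust store, and the Host header is whatever hostname your blog serves):

```shell
# x.x.x.x is a placeholder for the ingress Address field
curl -kI https://x.x.x.x -H "Host: leepa.io"
```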

So, a quick update to Cloudflare and we’re off.

Small Nginx Tweak

I use the Rules Engine in Cloudflare to redirect from non-TLS to TLS. But if you can’t:

# Redirect http to https if the forwarded proto header says http
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}

You’ll be pleased to hear that X-Forwarded-Proto is set by Google and you can use it!

Remove the old IP

Obviously I still need the service, but I now get to remove the public IP - that’s as easy as changing the Type of the Service.

ports:
  - name: blog-tls
    nodePort: 31879
    port: 443
    protocol: TCP
    targetPort: 443
  - name: blog-plain
    nodePort: 31878
    port: 80
    protocol: TCP
    targetPort: 80
type: NodePort # <-- this was LoadBalancer

If you want you can also remove the blog-tls entries but I didn’t get round to that yet.

Once again this is applied using the kubectl apply command. Once this is done, a quick describe on the service will show that there are no public IPs up anymore.
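A minimal check, assuming the service is named leepaio-blog as above:

```shell
kubectl describe service leepaio-blog
# the "LoadBalancer Ingress" line disappears once the type is NodePort
```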

Conclusions

That was fun! It was interesting doing this all from Windows (still not supported, technically) and getting it all working wasn’t that hard. There are a few issues with the kubectl ‘edit’ commands, but nothing that couldn’t be worked around.

The thing I come up against a lot is that while everything is really well documented, the guides seem all over the place. The documentation for the yaml files is ‘written by an engineer for an engineer’, and hopefully good breakdowns will start to appear as more people use it!