
Upgraded Docker platform, this time with more bells!


The DNS has been switched once more, and while the records propagate globally, traffic will be routed to a pair of droplets rather than this single instance.

Hang tight, and I'll share all the gory details...

During my initial efforts, wanting to adhere to the KISS principle (Keep It Simple, Stupid), I decided to set up MySQL, via Ansible, directly on the bare metal. This meant there would be a single container per app, with the database living on the host itself.

This is something I knew I'd have to deal with eventually, though I'd hoped to avoid destroying and re-creating droplets. Then again, that isn't such a bad thing: much like git branches, spawning a new droplet is cheap. The trouble is, I can't take down the droplet with my data (and assets too!) sitting on it.

It was therefore rather obvious that it was time to isolate the DB hosts from the apps themselves.

Development: Separation

I started by getting rid of all socket-based connectivity to MySQL, which turned into a neat learning exercise. I hit a brick wall, with my migration rake task perpetually complaining that it couldn't find a socket.

When passing DATABASE_URL, one must take care not to pass localhost but rather 127.0.0.1, the loopback address. For whatever reason, localhost seems to trick ActiveRecord (or more precisely, the mysql2 gem?) into expecting a socket-based connection rather than connecting over TCP. Getting to this point involved checking that env vars were correctly set by the time my container ran its ENTRYPOINT script, so as you can imagine, at least an hour or two of yak-shaving.
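
To illustrate the difference (the credentials and database name here are placeholders, not my actual values):

# Nudges the mysql2 adapter toward a Unix socket lookup:
DATABASE_URL=mysql2://appuser:secret@localhost:3306/myapp_production

# Connects over TCP as intended:
DATABASE_URL=mysql2://appuser:secret@127.0.0.1:3306/myapp_production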

Since I already had the existing 'production' instance running, I forked off (in git) and began setting up a play for the DB container. I ended up with a nicer naming scheme as well: db-droplet-sgp-1, since it's hosted in Singapore.

And to test this out, I altered my Vagrant config to spawn two instances: the app server I had been hacking on for a while in development, and a second instance for fresh bootstrapping of the MySQL DB container.
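
Roughly, the two-machine Vagrant setup looks like this; the box name and private IPs are illustrative rather than my actual values:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # The app server I've been hacking on in development
  config.vm.define "app" do |app|
    app.vm.network "private_network", ip: "192.168.33.10"
  end

  # A fresh VM for bootstrapping the MySQL DB container
  config.vm.define "db" do |db|
    db.vm.network "private_network", ip: "192.168.33.11"
  end
end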

From the outset, I had crafted my Ansible setup with a metal.yml playbook, which applies to all hosts and ensures all the 'basic' hardening and configs are in place. As part of the metal build, I also tied in a base-docker image, one that I use as a basis for all the app containers. What's nice about this is that all nodes will have this base image, even if I don't really end up using it.
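
In spirit, the playbook is something like the sketch below; the role names are placeholders, not my actual roles:

---
- hosts: all
  sudo: yes
  roles:
    - hardening    # SSH lockdown, firewall, and other 'basic' configs
    - docker       # the Docker engine itself
    - base-image   # bakes the shared base-docker image onto every node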

Sure, it's a whole lot of work simply to bypass using a Docker registry, but it's going to be worth it. I may simplify some aspects later, once I've set up my own private registry as well.

With the MySQL DB VM bootstrapped, I set about getting the container running, which was pretty quick, and then adapted the Docker systemd service I'd been using for my app container. A couple of quick tests later, I was happy that the MySQL container was running correctly, with sufficient volumes exposed to the host.
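
The unit essentially wraps a docker run along these lines; the container name, host path, and password are placeholders:

# Publish MySQL's port and keep the data directory on the host
docker run -d --name mysql-production \
  -p 3306:3306 \
  -v /srv/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.6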

Development: Getting the app to connect to the DB host

Prior to separation, when moving away from socket-based connectivity, I had trouble getting my app to connect to the MySQL container. I had the app attempt to access the loopback address via DATABASE_URL, failing to realise that the loopback address inside a Docker container is, quite simply, the container itself! I had ignored the docker0 inet interface, and should have used its internal IP, 172.17.42.1, which is simply Docker parlance for 'connect to the host'.
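
On the host itself, that address is easy to confirm:

# Show the docker0 bridge; its inet address (172.17.42.1 by default on
# this Docker version) is what containers use to reach the host
ip -4 addr show docker0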

I then ran into two hurdles. The first (i) related to MySQL 5.7.7 and the Ansible mysql_user module, which, as of Ansible version 1.9.1, does not account for changes made to the mysql.user table, where the password column has since been removed (see [1]). I therefore rolled back to 5.6.24, and can upgrade in the future.

The second (ii) hurdle related to the MySQL image pulled off the Docker Hub: it failed to set WITH GRANT OPTION on the root user by default, even though it clearly should do so. I repeatedly deleted /var/lib/mysql before re-spawning the container, to trigger the init script afresh, and yet the grant option was still missing on root. I ended up manipulating the grant table directly, which is pretty much a hack:

# Make sure the MySQL service (the systemd unit wrapping the container)
# is up before touching grants
- name: Start mysql service
  service: name={{ mysql_service_name }} enabled=yes state=started

- name: Grant all to root MySQL user
  mysql_user: name={{ mysql_root_username }} password={{ mysql_root_password }} priv=*.*:ALL host='%' state=present login_host=172.17.42.1 login_port={{ mysql_port }} login_user={{ mysql_root_username }} login_password={{ mysql_root_password }}

# mysql_user alone wasn't setting GRANT OPTION, so poke the grant table
# directly and flush afterwards
- name: Force GRANT OPTION for root MySQL user
  shell: mysql -u{{ mysql_root_username }} -p{{ mysql_root_password }} -h172.17.42.1 -P{{ mysql_port }} -e "UPDATE mysql.user SET Grant_priv = 'Y' WHERE user = 'root' AND host = '%';"

- name: FLUSH PRIVILEGES for MySQL
  shell: mysql -u{{ mysql_root_username }} -p{{ mysql_root_password }} -h172.17.42.1 -P{{ mysql_port }} -e "FLUSH PRIVILEGES;"
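
As a sanity check, run by hand rather than as part of the play, the grants can be listed; the output should end in WITH GRANT OPTION (the port here assumes MySQL's default, 3306):

mysql -uroot -p -h172.17.42.1 -P3306 -e "SHOW GRANTS FOR 'root'@'%';"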

I actually had less trouble getting this to work in development; a couple of these issues only surfaced when deploying to freshly spun-up droplets.

If you are reading this, then DNS has propagated and you are now being served this text, fetched across DigitalOcean's private network (in sgp1). The nice thing is that there are no bandwidth costs for private-network use.

Nginx Reverse Proxy vs. AWS Route 53

At the moment, I'm relying on old-school DNS, which pretty much sucks. My next step is to deploy a simple nginx reverse proxy, a cheaper option than implementing Route 53. I'm aiming to keep my overhead costs to a minimum, and $5/mo for a reverse proxy isn't too bad.
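
The config I have in mind is essentially this; the upstream IPs and server_name are placeholders, not my actual values:

upstream app_droplets {
    server 10.130.0.2;   # app droplet 1, private address (placeholder)
    server 10.130.0.3;   # app droplet 2, private address (placeholder)
}

server {
    listen 80;
    server_name example.com;   # placeholder domain

    location / {
        proxy_pass http://app_droplets;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}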

Background jobs with Resque via Redis

Resque has also been incorporated; in this example it's running on the same host and linked with the app container, as the docker ps output below shows. Of course, this need not always be the case, and it can be moved to a separate node if needed.

CONTAINER ID        IMAGE                             COMMAND                CREATED             STATUS              PORTS                    NAMES
d1787ff45a77        app/mwdesilva-production:latest   "/bin/sh -c /home/mw   43 seconds ago      Up 42 seconds       0.0.0.0:80->80/tcp       app-mwdesilva-production
958bb9bc9377        redis:3.0                         "/entrypoint.sh redi   2 minutes ago       Up 2 minutes        0.0.0.0:6379->6379/tcp   redis-mwdesilva-production
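
For completeness, the linkage boils down to something like the following; the exact flags live in my systemd units, so treat this as a sketch:

# Redis first, publishing the default port
docker run -d --name redis-mwdesilva-production -p 6379:6379 redis:3.0

# The app container gets a 'redis' alias via --link, which injects
# REDIS_PORT_* env vars and an /etc/hosts entry for Resque to use
docker run -d --name app-mwdesilva-production \
  --link redis-mwdesilva-production:redis \
  -p 80:80 \
  app/mwdesilva-production:latest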

References

  1. MySQL changes since 5.6 that bit me