Simple Scalable Infrastructure with DigitalOcean, Terraform, and Docker

While my primary system is a Mac, I like to keep my development workflows as Linux-dependent as possible. Therefore, rather than relying on tools such as boot2docker, I set up a custom image of Ubuntu 14.04 with various tools installed, along with Docker 1.3, using Packer.

My build template can be found on GitHub. My workflow from this point onwards is simply to start my customised version of Ubuntu via vagrant up and ssh vagrant@10.33.33.33.
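
For context, the whole loop amounts to the following sketch (the template and box file names here are placeholders, not the actual names from my repo):

packer build template.json            # bake the custom Ubuntu 14.04 + Docker image
vagrant box add dev-box output.box    # register the resulting box with Vagrant
vagrant up                            # boot the development host
ssh vagrant@10.33.33.33               # log in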

Docker Build

Last weekend, I focused on creating a Docker build for Rails, Nginx, and Unicorn with a MySQL data store. My aim at the time was to build a proof of concept, so I broke the work down into several sessions of hacking.

The first successful iteration comprised a couple of scripts containing hard-coded paths to locations on my system. Running these scripts would launch the containers within my Linux development host. I was also able to verify this by cloning the repos to my laptop and running the build there.
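
A minimal sketch of what those early scripts boiled down to (the host path is illustrative; the image names come from the final build shown later in this post):

#!/bin/bash
# Launch the MySQL data store, then the app container linked against it.
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name app --link mysql:mysql -p 8080:80 \
  -v /home/mike/code/inertialbox:/home/app \
  inertialbox/rails-nginx-unicorn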

On Saturday, the 1st of November 2014, I set out to decouple the build so that it could be easily deployed on a remote instance. Ultimately, I was able to reduce the deployment aspect to two separate scripts: (i) the app deploy and (ii) the load-balancer deploy,

docker-rails-deploy -> % REBUILD=true DEPLOY=true ./deploy.sh
# REBUILD: rebuild base images
# DEPLOY: run `git pull` within the cloned app repo.

docker-rails-deploy -> % REBUILD=true ./deploy-load-balancer.sh
# REBUILD: rebuild base images

Much of my effort went into making these scripts as simple as possible to use: they automatically stop and remove containers before restarting them, and they also clean up stale images left behind by 'rebuilds'.
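
The stop/remove/cleanup portion reduces to something like this (a sketch; the actual scripts in the repo differ):

# Stop and remove the existing containers before relaunching them.
docker stop app app-failover && docker rm app app-failover

# Clean up stale (dangling) images left behind by a rebuild.
docker rmi $(docker images --filter "dangling=true" -q)

# Relaunch, linking each app container against the MySQL container.
docker run -d --name app --link mysql:mysql -p 8080:80 inertialbox/inertialbox-app
docker run -d --name app-failover --link mysql:mysql -p 8081:80 inertialbox/inertialbox-app-failover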

This build is available via GitHub.

Orchestrated Provisioning with Terraform

Next, I needed to address the orchestration aspect, and decided to get started with Terraform. Having found 'How To Use Terraform with DigitalOcean', getting up to speed was a breeze, leading to a separate repo, 'terraform-orchestrate'. Each 'plan' consists of,

# provider.tf
variable "do_token" {}
variable "pub_key" {}
variable "pvt_key" {}
variable "ssh_fingerprint" {}

provider "digitalocean" {
  token = "${var.do_token}"
}

The variables above are populated from environment variables, passed in when running Terraform (as shown in the terraform plan invocation further below).
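
In practice, that means exporting the values in your shell beforehand (both values here are placeholders):

export DO_PAT="<your DigitalOcean personal access token>"
export SSH_FINGERPRINT="<fingerprint of the SSH key uploaded to DigitalOcean>"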

# terraform.tf
resource "digitalocean_droplet" "inertialbox-balancer-1" {
  image = "ubuntu-14-04-x64"
  name = "inertialbox-balancer-1"
  region = "sgp1"
  size = "2gb"
  private_networking = true
  ssh_keys = [
    "${var.ssh_fingerprint}"
  ]

  connection {
    user = "root"
    type = "ssh"
    key_file = "${var.pvt_key}"
    timeout = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      # ...host setup.
    ]
  }
}
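
The host-setup commands are elided above; as a rough illustration (an assumption, not the actual provisioning from my repo), the inline steps could be as simple as bootstrapping git and Docker on the fresh droplet:

apt-get update
apt-get install -y git docker.io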

Typically, one would combine such a setup with the likes of Ansible to aid in host management. This becomes a more pressing concern when you're working with a large team and want to, say, centrally manage SSH access: it's far easier to have a playbook set up the various public keys than to do this by hand.

However, I want to keep things simple by keeping the abstractions to a minimum. It's a good idea to check the terraform plan before applying it, remembering that it grabs the DigitalOcean API key (DO_PAT) and SSH_FINGERPRINT from the environment,

terraform plan \
  -var "do_token=${DO_PAT}" \
  -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  -var "pvt_key=$HOME/.ssh/id_rsa" \
  -var "ssh_fingerprint=${SSH_FINGERPRINT}"

Once applied, Terraform will spin up a new droplet as specified and perform everything provided via remote-exec. Once this completes successfully, you will be left with a *.tfstate file, a representation of the state of your infrastructure, which Terraform uses as context the next time you make changes.

{
    "version": 1,
    "serial": 1,
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "digitalocean_droplet.inertialbox-balancer-1": {
                    "type": "digitalocean_droplet",
                    "primary": {
                        "id": "30***",
                        "attributes": {
                            "id": "30***",
                            "image": "ubuntu-14-04-x64",
                            "ipv4_address": "128.*",
                            "ipv4_address_private": "10.*",
                            "locked": "false",
                            "name": "inertialbox-balancer-1",
                            "private_networking": "true",
                            "region": "sgp1",
                            "size": "",
                            "ssh_keys.#": "1",
                            "ssh_keys.0": "***",
                            "status": "active"
                        }
                    }
                }
            }
        }
    ]
}

It took less than 30 minutes to set up multiple folders, one for each 'droplet', allowing me to spin these up independently.
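
The resulting layout looks roughly like this (the folder names are illustrative, derived from the droplet names):

terraform-orchestrate/
  inertialbox-balancer-1/
  inertialbox-app-1/
  inertialbox-app-2/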

Live: Provisioning and Deployment

Here's a quick video of two of these orchestration plans running: on the left, the load-balancer is being deployed; on the right, the first app instance.

Once they'd deployed, I needed to place some private SSH keys and 'trigger' the Docker deploy script of my choice. There's a certain level of manual intervention at this point, but it does offer flexibility: I can rebuild the load-balancer at any time, updating the IP addresses of the failover instances as I wish.
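
Those manual steps amount to something like the following (the key path is an assumption; the deploy path is from my repo):

scp ~/.ssh/deploy_key root@<droplet-ip>:/root/.ssh/id_rsa   # key used to pull the app repo
ssh root@<droplet-ip>
cd /opt/docker-rails-deploy && REBUILD=true DEPLOY=true ./deploy.sh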

Having run the deploy script, here's the status of the various containers on the 'app-1' instance,

CONTAINER ID        IMAGE                                         COMMAND                CREATED             STATUS              PORTS                    NAMES
ae0c53e3f38f        inertialbox/inertialbox-app-failover:latest   "/bin/sh -c 'foreman   42 hours ago        Up 42 hours         0.0.0.0:8081->80/tcp     app-failover
2827c7ec7dc3        inertialbox/inertialbox-app:latest            "/bin/sh -c 'foreman   42 hours ago        Up 42 hours         0.0.0.0:8080->80/tcp     app
75244b185887        mysql:5.7                                     "/entrypoint.sh mysq   2 days ago          Up 2 days           0.0.0.0:3306->3306/tcp   mysql

With the load balancer deploy also having completed, here's the money shot,

inertialbox.com has now been deployed using this stack; the videos you saw above were from the live deploy. Currently, it comprises three droplets: two in Singapore and a failover app instance at DigitalOcean's NYC3 centre.

Singapore hosts both the main load-balancer and the primary app instance. The saving here is that the load-balancer proxies to the app instance over the private network (bandwidth is free!), so I'll only incur bandwidth charges when the failover is 'active'.

In terms of cost: three 2GB instances at $0.03/hr works out to $0.09/hr, i.e. $2.16/day, or roughly $60 per month at DigitalOcean's $20/month price for a 2GB droplet.

Conclusion

So, let's review the trade-offs of this setup,

Pros

  • The identical stack can be deployed to Amazon AWS or Google Cloud.
  • Provisioned instances can be placed in any of the chosen provider's regions.
  • The deploy build can be tailored to the application's and development team's needs, e.g. including Redis or switching to PostgreSQL.

Cons

  • Host management via Ansible/Chef has not been demonstrated here; however, it could be added quite easily. The workflow would be: (i) provision via Terraform, (ii) run an Ansible playbook, (iii) run the Docker deploy script (see the sketch after this list).
  • Each provisioned node runs in an isolated fashion: this is not a cluster, and the nodes cannot receive health checks etc.
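
That three-step workflow would look something like this (the inventory and playbook names are assumptions):

# (i) provision the droplet, with the same -var flags as the plan above
terraform apply \
  -var "do_token=${DO_PAT}" \
  -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  -var "pvt_key=$HOME/.ssh/id_rsa" \
  -var "ssh_fingerprint=${SSH_FINGERPRINT}"

# (ii) host management, e.g. distributing the team's public keys
ansible-playbook -i production site.yml

# (iii) deploy the containers
ssh root@<droplet-ip> 'cd /opt/docker-rails-deploy && REBUILD=true DEPLOY=true ./deploy.sh'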

What's next?

I will be looking into CoreOS to help improve this platform, and I hope to bring you another post shortly.

Until then, I suggest you take the plunge and give DigitalOcean a try — let me know how it went!

Update — Saturday, 15th November 2014

Having added a portfolio showcase section to the Inertialbox site, here's how simple my deploy process was: I simply SSHed into the app-1 instance on DigitalOcean,

root@inertialbox-app-1:/opt/docker-rails-deploy# REBUILD=true DEPLOY=true ./deploy.sh
remote: Counting objects: 62, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 62 (delta 27), reused 0 (delta 0)
Unpacking objects: 100% (62/62), done.
From bitbucket.org:bsodmike/inertialbox.com
   15c79a1..662c4b0  master     -> origin/master
Updating 15c79a1..662c4b0
Fast-forward
 app/assets/images/covers/creative_ui_red_bg.jpg       | Bin 0 -> 648806 bytes
 app/assets/images/portfolio/andythornton.jpg          | Bin 0 -> 164142 bytes
 app/assets/images/portfolio/autoglym-professional.jpg | Bin 0 -> 84749 bytes
 app/assets/images/portfolio/autoglym.jpg              | Bin 0 -> 141766 bytes
 app/assets/images/portfolio/poshpaws.jpg              | Bin 0 -> 86999 bytes
 app/assets/stylesheets/theme/inertialbox/base.scss    |  68 ++++++++++++++++++++++++++++++++++-
 app/views/public/index.html.erb                       |  81 +++++++++++++++++++++++++++++++++++++-----
 7 files changed, 140 insertions(+), 9 deletions(-)
 create mode 100644 app/assets/images/covers/creative_ui_red_bg.jpg
 create mode 100644 app/assets/images/portfolio/andythornton.jpg
 create mode 100644 app/assets/images/portfolio/autoglym-professional.jpg
 create mode 100644 app/assets/images/portfolio/autoglym.jpg
 create mode 100644 app/assets/images/portfolio/poshpaws.jpg
Already up-to-date.

===> Fetched latest app changes from git repo 'git@bitbucket.org:bsodmike/inertialbox.com.git'


===> Commencing app rebuild...

Sending build context to Docker daemon 3.665 MB
Sending build context to Docker daemon
Step 0 : FROM inertialbox/rails-nginx-unicorn
# Executing 7 build triggers
Trigger 0, ADD Gemfile /home/app/Gemfile
Step 0 : ADD Gemfile /home/app/Gemfile
 ---> Using cache
Trigger 1, ADD Gemfile.lock /home/app/Gemfile.lock
Step 0 : ADD Gemfile.lock /home/app/Gemfile.lock
 ---> Using cache
Trigger 2, RUN bundle install --without development test
Step 0 : RUN bundle install --without development test
 ---> Using cache
Trigger 3, ADD . /home/app
Step 0 : ADD . /home/app
Trigger 4, RUN mkdir -p /home/app/public/assets
Step 0 : RUN mkdir -p /home/app/public/assets
 ---> Running in af14fa331cdc
Trigger 5, RUN bundle exec rake assets:precompile
Step 0 : RUN bundle exec rake assets:precompile
 ---> Running in b372ce75c3ab
I, [2014-11-14T20:45:58.005322 #7]  INFO -- : Writing /home/app/public/assets/covers/creative_ui_red_bg-eaecfc6d818562481f3bc1ff1e49d45e.jpg
I, [2014-11-14T20:45:58.008760 #7]  INFO -- : Writing /home/app/public/assets/featured/team-25236e42f8d19262da42fdc3ad4d83f3.jpg
I, [2014-11-14T20:45:58.010732 #7]  INFO -- : Writing /home/app/public/assets/icons/attendance-85fbc6a0c73ace66489a80fa13651ec8.svg
I, [2014-11-14T20:45:58.012551 #7]  INFO -- : Writing /home/app/public/assets/icons/close-c872e7a0fb259ada5f40acca6fc6cf55.svg
I, [2014-11-14T20:45:58.014375 #7]  INFO -- : Writing /home/app/public/assets/icons/menu-970567837171e83018aa13fd7263a978.svg
I, [2014-11-14T20:45:58.016568 #7]  INFO -- : Writing /home/app/public/assets/portfolio/andythornton-60fc200f7d9c87ccc85df338a98be54a.jpg
I, [2014-11-14T20:45:58.018745 #7]  INFO -- : Writing /home/app/public/assets/portfolio/autoglym-professional-a701939238476e4ddeba66972708d331.jpg
I, [2014-11-14T20:45:58.021148 #7]  INFO -- : Writing /home/app/public/assets/portfolio/autoglym-7814f13d658d0ff2073eb986fd33cb42.jpg
I, [2014-11-14T20:45:58.023372 #7]  INFO -- : Writing /home/app/public/assets/portfolio/poshpaws-cf3a5ead0667c022f094a2bf8d049bc2.jpg
I, [2014-11-14T20:46:02.768546 #7]  INFO -- : Writing /home/app/public/assets/application-7adc94a23f3a1c9c75fa631c94c426be.js
I, [2014-11-14T20:46:08.198599 #7]  INFO -- : Writing /home/app/public/assets/application-8142b3acee80b1bb7b6620db1ec69d0f.css
Trigger 6, RUN chown -R www-data:www-data /home/app/public/assets
Step 0 : RUN chown -R www-data:www-data /home/app/public/assets
 ---> Running in adf34b9b3f93
 ---> 1b395eb940db
Removing intermediate container 765369a53c75
Removing intermediate container af14fa331cdc
Removing intermediate container b372ce75c3ab
Removing intermediate container adf34b9b3f93
Step 1 : MAINTAINER Michael de Silva <michael@inertialbox.com>
 ---> Running in 301005abd3fc
 ---> 817ec48833ff
Removing intermediate container 301005abd3fc
Step 2 : EXPOSE 80
 ---> Running in b316c50dbec4
 ---> 78e233ca0df7
Removing intermediate container b316c50dbec4
Successfully built 78e233ca0df7
Sending build context to Docker daemon  1.33 MB
Sending build context to Docker daemon
Step 0 : FROM inertialbox/rails-nginx-unicorn-failover
# Executing 7 build triggers
Trigger 0, ADD Gemfile /home/app/Gemfile
Step 0 : ADD Gemfile /home/app/Gemfile
 ---> Using cache
Trigger 1, ADD Gemfile.lock /home/app/Gemfile.lock
Step 0 : ADD Gemfile.lock /home/app/Gemfile.lock
 ---> Using cache
Trigger 2, RUN bundle install --without development test
Step 0 : RUN bundle install --without development test
 ---> Using cache
Trigger 3, ADD . /home/app
Step 0 : ADD . /home/app
Trigger 4, RUN mkdir -p /home/app/public/assets
Step 0 : RUN mkdir -p /home/app/public/assets
 ---> Running in 4a077701e79d
Trigger 5, RUN bundle exec rake assets:precompile
Step 0 : RUN bundle exec rake assets:precompile
 ---> Running in 935b0e069c4a
I, [2014-11-14T20:46:13.284143 #8]  INFO -- : Writing /home/app/public/assets/featured/team-25236e42f8d19262da42fdc3ad4d83f3.jpg
I, [2014-11-14T20:46:13.286598 #8]  INFO -- : Writing /home/app/public/assets/icons/attendance-85fbc6a0c73ace66489a80fa13651ec8.svg
I, [2014-11-14T20:46:13.288566 #8]  INFO -- : Writing /home/app/public/assets/icons/close-c872e7a0fb259ada5f40acca6fc6cf55.svg
I, [2014-11-14T20:46:13.290420 #8]  INFO -- : Writing /home/app/public/assets/icons/menu-970567837171e83018aa13fd7263a978.svg
I, [2014-11-14T20:46:18.790460 #8]  INFO -- : Writing /home/app/public/assets/application-7adc94a23f3a1c9c75fa631c94c426be.js
I, [2014-11-14T20:46:24.291391 #8]  INFO -- : Writing /home/app/public/assets/application-0a7b0f5a3ef9eb7a3f77238d8eec4989.css
Trigger 6, RUN chown -R www-data:www-data /home/app/public/assets
Step 0 : RUN chown -R www-data:www-data /home/app/public/assets
 ---> Running in 725e420a2db0
 ---> 772d5127cf1f
Removing intermediate container 1e3eef603bcf
Removing intermediate container 4a077701e79d
Removing intermediate container 935b0e069c4a
Removing intermediate container 725e420a2db0
Step 1 : MAINTAINER Michael de Silva <michael@inertialbox.com>
 ---> Running in 8b2e40d4b62d
 ---> 0cba1afceeac
Removing intermediate container 8b2e40d4b62d
Step 2 : EXPOSE 80
 ---> Running in ef4fc325b690
 ---> a390144b05f8
Removing intermediate container ef4fc325b690
Successfully built a390144b05f8

===> Completed app rebuild.


===> Removing stale images.

Error response from daemon: Conflict, cannot delete c96d068b1974 because the running container 49e139669d43 is using it, stop it and use -f to force
Error response from daemon: Conflict, cannot delete 290d4400eaf4 because the running container 87eec696d69a is using it, stop it and use -f to force
2014/11/14 15:46:25 Error: failed to remove one or more images

===> Stopping and removing app container.


===> Stopping and removing app-failover container.


===> Linking and running app instance...

ef236c590d69bd999e1360a2802e630996dc90179270e718a17843b94ef19e21

===> Linking and running app failover instance...

0ff13a1fa670e836dc3fd0d7b8563a4252f13701e31455950323e577887725d2

===> Done.

All I had to do was run a single command, REBUILD=true DEPLOY=true ./deploy.sh, inside /opt/docker-rails-deploy.

Rinse and repeat for app-2. Once I script this as well, it'll truly be a single-click deploy.