I am at a point now where the only things I need installed on my host machine (OS X) to do development are:

  • git
  • boot2docker
  • A code editor

Editing source code and version control are handled on the host machine. The code is mounted inside a guest machine, which is a Docker container; any dependencies (programming languages, debuggers, Linux packages) are installed there, and the code is executed there. A terminal window for each machine is used to manage this.

The same Makefile, stored on the host machine and mounted inside the Docker container, is used on both as a store for alias commands.

As an example of this, I have recently been working on a repository of JavaScript programming exercises (https://github.com/kolodny/exercises) that is managed by npm. Here is the Makefile for this project:

## docker commands to be run in host machine
dockerRun:
	docker run -it --name exercises -v ${PWD}/:/usr/src/exercises -w /usr/src/exercises iojs /bin/bash

dockerStart:
	docker start -ai exercises

## to be run inside container
CURRENT_EXERCISE = memoize

run:
	iojs $(CURRENT_EXERCISE)/index.js

watch:
	watch -n 0.5 make run

test:
	cd $(CURRENT_EXERCISE); npm test

install:
	npm install

.PHONY: test

(If you have never used Docker before, think of images as the original ‘master version’ of something, and containers as the cheaply made copies created from the image that you can easily throw away and recreate super fast.)
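
To make the distinction concrete, both can be listed from the host machine (illustrative; output omitted):

docker images   # the read-only templates you download or build
docker ps -a    # the instances created from them; -a includes stopped containers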

make dockerRun will create a container on the host machine to run the code if one does not already exist. Broken down:

  • docker run creates a new container. The format is docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
  • -it allocates a pseudo-TTY and keeps stdin open even if not attached; in practice, once initialisation is done you are left at a command-line prompt inside the running container.
  • --name exercises the name given to the container, used by the dockerStart Makefile command.
  • -v ${PWD}/:/usr/src/exercises mounts the current directory on the host machine at /usr/src/exercises inside the container.
  • -w /usr/src/exercises the directory the command-line prompt sits at after the setup is finished. If you omit this it defaults to /.
  • iojs the name of the Docker image the container is created from. iojs is the official io.js image on https://registry.hub.docker.com. If your host machine does not already have a local copy of the image, Docker downloads it.
  • /bin/bash the command to run, in this case bash. Without it, this container would run the image's default command, the iojs interactive REPL.

Because only iojs is needed for this project, you can create the container with this one-liner using the official iojs image. If multiple things were needed (say Ruby, iojs, and a Linux package), you would create a Dockerfile starting from a base image like iojs, add the additional requirements using Dockerfile commands, have Docker build an image from that file, and then create a container from it.
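
Here is a rough sketch of that multi-dependency case (illustrative only: the package names and the my-exercises tag are my own, and apt-get works because the official iojs image is Debian-based):

# Dockerfile
FROM iojs
# add a Linux package and a second language runtime on top of the base image
RUN apt-get update && apt-get install -y graphviz ruby

docker build -t my-exercises . then builds the image, and docker run -it my-exercises /bin/bash creates a container from it.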

make dockerStart resumes the container with the same options originally set by the run command, for example after you stop the container or sleep your host machine.
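
A typical stop/resume cycle looks like this (illustrative; exiting the bash prompt also stops the container, because bash is its main process):

docker stop exercises
make dockerStart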

The next commands are run from the bash prompt inside the container, e.g. make test. This project has a directory for each exercise, so a CURRENT_EXERCISE variable is used so the commands can easily be pointed at a different exercise.
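
Because CURRENT_EXERCISE is an ordinary make variable, it can also be overridden for a single invocation without editing the Makefile (flatten is a hypothetical exercise directory here):

make test CURRENT_EXERCISE=flatten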

.PHONY resolves the issue of wanting a make target that has the same name as a file or directory.
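
For example, if an exercise directory were named test, make would otherwise see an existing target with nothing to update (illustrative session):

mkdir test
make test
# without .PHONY: make: `test' is up to date.
# with test declared in .PHONY: the npm test recipe runs as expected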

Journey to this setup

More than three years ago I would install everything on the host machine itself, and if I was lucky a project would have a mechanism to manage versioning, like rvm for the required Ruby version. Overall, bad times: it was difficult to experiment without blowing things up globally and then spending time fixing them.

I then switched to Vagrant with an Ubuntu image. This was a big improvement, as project environments were now sandboxed. The environments were set up using Puppet modules from Puppet Forge.

Then six months ago I moved from Vagrant to Docker.

After spending a month learning Docker, I feel I can move so much quicker:

Containers start up instantly and I trash and recreate them all the time.

Images build fast as well. When you create an image from a Dockerfile, every command on each line creates a new intermediate image internally. So if you keep getting one of the lines wrong, or want to make an amendment, Docker can restart right from that line instantly. Immutable object system for the win.
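
You can see the cache at work when rebuilding an image; the output below is abridged and illustrative, reusing the hypothetical my-exercises image sketched earlier:

docker build -t my-exercises .
# ...
# Step 2 : RUN apt-get update && apt-get install -y graphviz ruby
#  ---> Using cache
# ...
# only the steps at or below a changed Dockerfile line are re-executed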

You can use images from the Docker Hub, build on top of them, and create a custom image. The example in this blog post used the iojs image. Want a container for Redis? Use the official Redis image, with everything set up for you. Want to start with something really tiny? The Alpine Linux image is 5MB.
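
For instance (illustrative one-liners; the container name is my own):

docker run -d --name some-redis redis   # a ready-to-use Redis server
docker run -it --rm alpine /bin/sh      # a throwaway shell in the tiny Alpine image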

If you need to do some setup beyond the commands available in a Dockerfile, the tendency is to use a shell script and run it from the Dockerfile. No Puppet or other provisioners*. After a couple of years with Puppet, I am personally glad to be back to shell scripts for my purposes: a faster feedback loop, fewer cryptic errors, no need to install Ruby inside the container, a greater number of people with experience, and decades of resources.
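
The pattern is simply to copy the script into the image and run it (setup.sh is a hypothetical name):

COPY setup.sh /tmp/setup.sh
RUN sh /tmp/setup.sh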

Docker encourages splitting your system into components, each in its own container, linked together with one of the several mechanisms available. Compared to the convenience of having everything in a single virtual machine this can increase some costs, for example around logging and monitoring, but I think it is a better approach.
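
As a sketch, the --link mechanism connects the Redis container from earlier to an app container (names are illustrative):

docker run -d --name some-redis redis
docker run -it --link some-redis:redis iojs /bin/bash
# inside the app container, the hostname "redis" resolves to the Redis container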

*You can of course provision Vagrant environments with shell scripts. The difference is that I have found almost no mention of Puppet or Chef in Docker resources, whilst with Vagrant it seemed the norm was to provision with one of them.