Today, we’ll go on yet another trip through container land. I’ve found myself itching to throw anything and everything that I may run on a server into a container. It is so easy to create a Dockerfile, build my image, then deploy it wherever I want with Docker Machine that I tend to do it first thing when I have something new to run. Being able to write it once and have it run anywhere really is powerful stuff.

That said, I was reading a bit the other day about grid computing. You may have heard of some of the interesting grid projects like SETI@Home. The idea is that if you have a machine that often sits idle, you can donate your CPU (and/or GPU) to do some number crunching for the cause. Reading this, I realized that I have some VMs that sit idle for quite a bit of time unless I’m actively prototyping something, so why not give this a shot? I was also surprised to learn that a lot of these projects have standardized on the same software suite, called BOINC, and that you simply attach to the project of your choice when the client launches. Sounds like a nice idea for a container!

##Choose Project##

  • Pick which projects you’re interested in by starting at the project page of BOINC’s website. I picked SETI@Home, Rosetta, and World Community Grid.

  • You’ll need to create accounts on all of the projects that you are interested in. Once completed, take note of the account keys for each. We will need them later.
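If you’d rather not hunt through each project’s settings page, the BOINC command-line tool can also look an account key up for you. This is just a convenience I’m noting, not part of the original walkthrough; it assumes a running BOINC client for boinccmd to talk to (the container we build below will do):

boinccmd --lookup_account <project URL> <email> <password>

The command prints the account key for that project, which is the value we’ll feed in as boinckey later.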

##Create A Docker Image##

  • Create a new Dockerfile in a directory.

  • Initialize our Dockerfile by adding FROM and MAINTAINER sections. I picked an Ubuntu 14.04 base image to build from.

FROM ubuntu:14.04
MAINTAINER Spencer Smith <>
  • Next, we will install the BOINC client. Luckily, it is included in Ubuntu’s repos, so it isn’t very difficult.
##Install BOINC
RUN apt-get update && apt-get install -y boinc-client
  • We will then want to set our working directory to that of the BOINC client’s lib directory. This allows the commands that follow to complete successfully.
##Set working directory
WORKDIR /var/lib/boinc-client
  • We will now set the default command for our image. This command (which is admittedly a bit long) will start BOINC’s client service, sleep briefly, use the ‘boinccmd’ tool to attach to a project, then tail out the stdout/stderr logs for the client. You may notice the ‘${boincurl}’ and ‘${boinckey}’ sections of the command. Those are environment variables that point us to the project we wish to connect to. You will see these in use later when we launch our container. (A sketch of the same steps as a standalone script follows this list.)
##Run BOINC by default. Expects env vars for url and account key
CMD /etc/init.d/boinc-client start; sleep 5; /usr/bin/boinccmd --project_attach ${boincurl} ${boinckey}; tail -f /var/lib/boinc-client/std*.txt
  • That’s it for the Dockerfile. Save it and exit. Here’s the complete file:
FROM ubuntu:14.04
MAINTAINER Spencer Smith <>

##Install BOINC
RUN apt-get update && apt-get install -y boinc-client

##Set working directory
WORKDIR /var/lib/boinc-client

##Run BOINC by default. Expects env vars for url and account key
CMD /etc/init.d/boinc-client start; sleep 5; /usr/bin/boinccmd --project_attach ${boincurl} ${boinckey}; tail -f /var/lib/boinc-client/std*.txt
  • We can now build our image by running docker build -t rsmitty/boinc . in the directory. Feel free to tag it differently, of course.
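If that one-line CMD feels unwieldy, the same steps could live in a small entrypoint script instead. This is only a sketch of an alternative, not part of the image above, and the file name start-boinc.sh is my own:

#!/bin/bash
# start-boinc.sh -- the same logic as the CMD above, just easier to read
/etc/init.d/boinc-client start
sleep 5
/usr/bin/boinccmd --project_attach ${boincurl} ${boinckey}
tail -f /var/lib/boinc-client/std*.txt

You would then ADD start-boinc.sh /start-boinc.sh in the Dockerfile and set CMD /start-boinc.sh in place of the long one-liner.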

##Start Crunching!##

Now that we have an image to use, let’s launch some containers.

  • First, find your proper Docker endpoint with docker-machine ls. I’ll be using my DigitalOcean Docker host for this tutorial.
Spencers-MacBook-Pro:boinc spencer$ docker-machine ls
NAME      ACTIVE   DRIVER         STATE     URL                         SWARM
default            virtualbox     Stopped
do-dev             digitalocean   Running   tcp://REDACTED:2376
  • Set your docker environment variables to the proper values with
eval "$(docker-machine env do-dev)"
  • Launch a container, substituting your own desired values for boinckey and boincurl. You should be able to find these values from the account settings for the sites you registered earlier. Also feel free to name your container as you see fit.
docker run -ti -d --name wcg -e "boincurl=<project URL>" -e "boinckey=1234567890" rsmitty/boinc
  • Once launched, we can peek in on our jobs by either allocating a new TTY to the container with docker exec -ti wcg /bin/bash or by checking the logs with docker logs wcg
Spencers-MacBook-Pro:boinc spencer$ docker logs wcg
 * Starting BOINC core client: boinc                                     [ OK ]
 * Setting up scheduling for BOINC core client and children:


29-Aug-2015 15:48:49 [World Community Grid] Started download of 933fbd61802442e2861afa0b31aedcc6.pdbqt
29-Aug-2015 15:48:51 [World Community Grid] Finished download of 933fbd61802442e2861afa0b31aedcc6.pdbqt
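If the logs aren’t detailed enough, you can also ask the BOINC client directly for its task list. This is just a quick check I find handy rather than part of the original workflow; boinccmd needs to run from the client’s data directory, hence the cd:

docker exec wcg bash -c 'cd /var/lib/boinc-client && boinccmd --get_tasks'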

That’s it! You can check in on your accomplishments for each project in your account settings. You can find my image for this tutorial in the Docker Hub. If you wish to learn more about the BOINC project itself, please visit their website.

As I’ve been learning more about the container ecosystem, I’ve come across the concept of JeOS (just enough operating system). The idea is that you want to get as much performance out of your Docker containers as possible, so you minimize the cruft from your host operating system. There are several different JeOS options, but today we’ll talk about RancherOS. RancherOS is a very small (~20MB) OS that you can use as a Docker host. Everything in RancherOS runs inside of a container, and Docker itself runs as PID 1. RancherOS ships as an ISO, so today I’ll guide you through using the ISO to create a QCOW image for use in OpenStack.

##Setup KVM##

  • First, ensure you have a proper KVM environment set up. This can involve a fair amount of configuration: making sure virtualization is enabled in the BIOS, confirming your CPU supports it, and so on. I followed these directions on a new Ubuntu machine and it worked just fine.

  • You can test that your KVM setup is working properly by issuing virsh list. That should return an empty list:

root@ubuntu:/home/rsmitty# virsh list
 Id    Name                           State
  • Finally, install the virt-install tool with sudo apt-get install virtinst.

##Install Packer##

  • Now, we want to use Packer to build our image so we need to download it and get it installed properly. You can find directions on setting up Packer here.

  • If you need an intro to Packer in general, I’ve written another guide that was published to the Solinea website. You can find that here.

##Create Our Templates##

Now that we are all set up, we need to create two files, a cloud-config.yml file that gets injected into RancherOS and a Packer template called kvm-rancheros.json that we’ll use to build our QCOW.

  • Create a file called cloud-config.yml with the following content. Be sure to modify the ssh public key with your own, so that it gets baked into the image. Unfortunately, there’s no injection of keys during boot in OpenStack for RancherOS, so take care to make sure you get the correct one in there. Here’s what my cloud-config.yml looked like:

#cloud-config
ssh_authorized_keys:
  - ssh-rsa ... spencer@Spencers-MacBook-Pro.local
  • There are lots of options for building images with Packer, so it can be a bit daunting at first. For our purposes, we will use the KVM (qemu) builder directly and pass in the RancherOS ISO. Once booted, Packer will scp our cloud-config.yml file into the temporary instance and then issue the proper command to install RancherOS to disk. After this is complete, Packer will provide output on the location of the QCOW image. This image will simply be called “rancheros” and the path to it will be “$PWD/output_rancheros/rancheros”. Here’s the full template:
      "type": "qemu",
      "iso_url": "",
      "iso_checksum_type": "md5",
      "iso_checksum": "63b54370f8c5f8645d6088be15ab07b0",
      "output_directory": "output_rancheros",
      "ssh_wait_timeout": "30s",
      "shutdown_command": "sudo shutdown -h now",
      "disk_size": 1024,
      "format": "qcow2",
      "headless": true,
      "accelerator": "kvm",
      "ssh_username": "rancher",
      "ssh_password": "rancher",
      "ssh_port": 22,
      "ssh_wait_timeout": "90m",
      "vm_name": "rancheros",
      "net_device": "virtio-net",
      "disk_interface": "virtio",
      "boot_wait": "5s",
      "qemuargs": [
        [ "-m", "1024M" ]
  "provisioners": [
     "type": "file",
     "source": "cloud_config.yml",
     "destination": "/home/rancher/cloud_config.yml"
    "type": "shell",
    "inline": [
      "sleep 5",
      "sudo rancheros-install -f -c cloud_config.yml -d /dev/vda"

Notice the ‘-m’ flag in the qemuargs section. You MUST have at least 1GB of RAM to complete the install successfully.
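Before kicking off the (fairly long) build in the next step, it can be worth a quick syntax check of the template with Packer’s built-in validator:

packer validate kvm-rancheros.json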

##Build and Upload##

  • It’s now time to build our image. Packer will fetch and verify the RancherOS ISO for us and proceed to take care of all of the necessary commands. Issue packer build kvm-rancheros.json.

  • Once our build is complete, we’ll want to upload it to Glance. This image is ~40MB, so it shouldn’t take a terribly long time. You can issue the following command (after sourcing your OpenStack credentials):

glance image-create --name "RancherOS" \
--is-public false --disk-format qcow2 \
--container-format bare --file $PWD/output_rancher/rancheros
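Once the upload finishes, you can confirm the image landed in Glance with a quick listing:

glance image-list | grep RancherOS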

##Launch An Instance & Connect##

  • We can now spin up our RancherOS instance inside of OpenStack. Ensure that you have ports 22, 80, and 2376 allowed in the security group that you choose to use for your instance.

  • Once our instance has been created, we will use Docker Machine’s generic driver to connect to our launched instance. Again, because SSH keys aren’t injected into RancherOS, we can’t use the OpenStack driver for Docker Machine and have it launch the instance for us. Here is the command that I used for connecting Docker Machine; notice that I passed the path to the SSH key I injected earlier.

docker-machine create -d generic --generic-ssh-user rancher \
--generic-ssh-key ~/.ssh/id_rsa --generic-ip-address <floating IP of your instance> \
rancher-dev
  • Set rancher-dev as our Docker endpoint with the following:
eval "$(docker-machine env rancher-dev)"
  • You can ensure everything is connected with a combination of docker-machine ls and docker ps. I like to echo a dashed line to give some separation in my commands.
spencers-mbp:~ spencer$ docker-machine ls && echo "------------" && docker ps
NAME          ACTIVE   DRIVER         STATE     URL                         SWARM
rancher-dev   *        generic        Running   tcp://
vbox-dev               virtualbox     Running   tcp://
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

##Grow The Root Volume & Deploy A Container##

  • Again, keep in mind that RancherOS doesn’t currently do some cloud-init functions like growing the root volume. We’ll need to launch a privileged container to do this. Issue the following:
docker run --privileged -i --rm ubuntu bash << EOF
apt-get update
apt-get install -y cloud-guest-utils parted
growpart /dev/vda 1
resize2fs /dev/vda1
EOF

Credit to Darren Shepherd for this script mentioned here
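To confirm the root filesystem actually grew, you can run a one-off check over SSH via Docker Machine (just a quick verification, not part of the original steps):

docker-machine ssh rancher-dev df -h /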

  • Launch my test-webserver container by issuing the following:
docker run -d -p 80:80 rsmitty/test-webserver /usr/sbin/apache2ctl -D FOREGROUND
  • Check out the result!

This post is simply here to document how to remove untagged and exited images and containers.

##Exited Containers## You can find containers that have previously exited by using filters:

docker ps -f "status=exited"

This will return the full text for each exited container:

Spencers-MacBook-Pro:~ spencer$ docker ps -f "status=exited"
CONTAINER ID        IMAGE                                           COMMAND                CREATED             STATUS                      PORTS               NAMES
72b10aebd2b0        rsmitty/ostack                                                     "/bin/sh -c /bin/bas   9 hours ago         Exited (126) 9 hours ago                        distracted_wilson
aa6c9cfc662f        rsmitty/ostack                                                     "/bin/sh -c /bin/bas   9 hours ago         Exited (0) 9 hours ago                          sick_einstein
1bfce6a324b5        ubuntu:14.04                                                       "/bin/bash"            9 hours ago                                                         jovial_einstein

To remove all of them, you can nest a command similar to the one above inside the docker rm command:

docker rm $(docker ps -qf "status=exited")

Docker responds with a list of IDs that it deleted:

Spencers-MacBook-Pro:~ spencer$ docker rm $(docker ps -qf "status=exited")

##Untagged Images## If you wish to clean up untagged images, you can find them with another filter command, similar to the one above:

docker images -f "dangling=true"

This will return a formatted list of the untagged images:

Spencers-MacBook-Pro:~ spencer$ docker images -f "dangling=true"
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
<none>              <none>              bfa68ad8ff4c        23 hours ago        457.1 MB
<none>              <none>              0043ceae2104        23 hours ago        457.1 MB
<none>              <none>              edff0ab07895        23 hours ago        421.3 MB
<none>              <none>              6ae539b22ab9        23 hours ago        421 MB

And again, nesting a variation of that command to actually do the cleanup:

docker rmi $(docker images -qf "dangling=true")

Now something interesting happens!

Spencers-MacBook-Pro:~ spencer$ docker rmi $(docker images -qf "dangling=true")
Error response from daemon: Conflict, cannot delete bfa68ad8ff4c because the running container 7e5c96166fcb is using it, stop it and use -f to force
Deleted: edff0ab0789548cf33db3589eae5cc93589e7aea379bc3383f58c00b71ebb8cb
Deleted: e619828bd6f049d81a1920b96634534044ab0bf8f1dd4e40d9daf82d9a5c80b6
Deleted: ac0a2e7c0897058649e9e31cd4a319ee08158646990a607a54a0492f27e6e275
Error: failed to remove images: [bfa68ad8ff4c]

If an image is in use by a running container, you must first stop (and remove) that container before the image can be deleted. This is a good thing, though; it can keep you from blowing yourself up :)
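If you find yourself running both cleanups often, the two commands combine nicely into a small shell function. This is just my own convenience wrapper around the exact commands above; name it whatever you like and drop it into your ~/.bashrc:

docker-cleanup() {
  docker rm $(docker ps -qf "status=exited")
  docker rmi $(docker images -qf "dangling=true")
}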

This one’s a weird one. I was trying to figure out some interesting containers to build when I overheard someone at work expressing difficulty installing the OpenStack client CLIs onto his machine. I thought to myself, what if I could install these once and just push them to whatever environment I please? Or even share them with other folks to use? Here’s how you can do it:

##Create A Dockerfile## First I got the basics down for creating a Dockerfile to install the proper CLI packages. It was handy to simply boot a ubuntu:14.04 container and test these steps out manually. That’s an easy one; just do:

docker run -ti ubuntu:14.04 /bin/bash

From that point I did a little trial and error to figure out the basics of installing pip, installing the OpenStack client packages with pip, and throwing in a few extra dependencies that I encountered. Here’s the full Dockerfile; we’ll talk about the script that gets added in next:

FROM ubuntu:14.04
MAINTAINER Spencer Smith <>

##Install pip and necessary dependencies for clients
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python-dev python-pip

##Install ssl patches for python 2.7, then install clients
RUN pip install six --upgrade && \
    pip install python-openstackclient python-cinderclient

##Upload our creds checker and set it as our entrypoint (the name check-creds.sh is assumed)
ADD check-creds.sh /ostack/check-creds.sh
CMD /ostack/check-creds.sh

##Deal With Credentials## After creating the Dockerfile, I wanted to make sure that I could create a container general enough for others to use if they wanted. As such, I wanted most of the normal OpenStack environment variables to be passed in as command line arguments. To handle this, I wrote a quick bash script (the creds checker above) to add into the container.

The file will just ensure that the OS_AUTH_URL, OS_REGION_NAME, OS_TENANT_NAME, and OS_USERNAME variables are present in the environment, then it will prompt the user for their password so that they don’t have to put it in plaintext inside the docker run command. Finally, once all the info is present, the script will simply launch a bash session for the user.

Here’s the full script:


#!/bin/bash

##Ensure everything but password is passed in as env variable
for var in OS_AUTH_URL OS_REGION_NAME OS_TENANT_NAME OS_USERNAME; do
  if [[ -z ${!var} ]]; then
    echo "${var} is unset. Please pass it in as an env variable input!"
    exit 1
  fi
done

##Prompt for password input
echo "Please enter your OpenStack Password: "
read -s OS_PASSWORD
export OS_PASSWORD

##Start a bash prompt
/bin/bash

##Build The Container## Now, we simply need to build our image. You can give this your own tag if you like. Here’s what my docker build command looked like:

docker build -t rsmitty/ostack .

Ensure you are in the same directory as your Dockerfile.

##Use It!##

We can now launch our container and use it to talk to an OpenStack cloud. Ensure that the proper environment variables are passed in using the -e flag. Here’s what my docker run command looks like:

docker run -ti -e OS_AUTH_URL=https://REDACTED_URL:5000/v2.0/ -e OS_REGION_NAME=RegionOne -e OS_TENANT_NAME=admin -e OS_USERNAME=spencer rsmitty/ostack

Once the run has put you in the bash prompt, you should be able to use your environment!

root@ecedc1a96a49:/# cinder list
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
| 2032053f-9809-4c29-b8f2-731fef7a01db | available |     test     |  2   |    iscsi    |  false   |             |
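To save retyping all of those -e flags, the docker run line can be wrapped in a small shell function that re-uses whatever OpenStack variables are already sourced in your shell. The function name ostack and the --rm flag are my own choices here, not part of the post’s original workflow:

ostack() {
  docker run -ti --rm \
    -e OS_AUTH_URL="$OS_AUTH_URL" \
    -e OS_REGION_NAME="$OS_REGION_NAME" \
    -e OS_TENANT_NAME="$OS_TENANT_NAME" \
    -e OS_USERNAME="$OS_USERNAME" \
    rsmitty/ostack
}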

This post will go into some detail about how to get started with docker-machine. Docker Machine is a really nice tool to aid in deploying docker hosts across any environment. It seems to be quickly becoming the standard for creating dev environments. Today, I’ll go through how to talk to OpenStack with Docker Machine and then deploy a quick container.

##Install Docker Machine##

I installed both Docker and Docker Machine with Homebrew. There are several other installation options that you can find in the “Installation” sections here and here.

For Homebrew, simply issue this in the terminal:

brew install docker-machine

##Prepare OpenStack##

There are a few things that we need to do on the OpenStack side in order to ensure that our machine can be created successfully. First, go ahead and source your keystone credentials. Docker Machine will use these environment variables if they are present. This keeps us from having to pass a bunch of authentication parameters later on.

Next, take a look at your OpenStack environment. We’ll need to gather up some IDs and ensure some things are set up properly. First, take a look at the security group that you plan to use and ensure that SSH access is allowed into it. It’s also important to note here that you’ll want to allow any ports that you plan to map into your containers. For me, I allowed ports 22 and 80 initially. Now, let’s gather some IDs. I needed to find the SSH user for my image type (CentOS 7), the image ID, the flavor I wished to use, the floating IP pool name, and finally the security group that I wanted to use.
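If you’re not sure where to dig those values up, the standard clients from this era can list most of them. This is just a rough sketch of the lookups; exact commands may vary with your client versions:

nova image-list              # image IDs
nova flavor-list             # flavor names
nova floating-ip-pool-list   # floating IP pool names
nova secgroup-list           # security groups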

##Create Our Machine##

We’re finally ready to create our machine. Using the IDs I found above, here is the (extremely verbose) command that I issued:

docker-machine create --driver openstack\
 --openstack-ssh-user centos\
 --openstack-image-id cfb0a24a-16a5-4d19-a15b-ee29c9375d52\
 --openstack-flavor-name m1.small\
 --openstack-floatingip-pool public\
 --openstack-sec-groups default\
 docker-dev

Be patient here. I found that creating the machine took quite a while, as the docker-machine command will SSH into the instance and do some long-running tasks like ‘yum upgrade’.

Once complete, we’ll want to override our built in docker settings to point to our new machine. We can do that by issuing:

eval "$(docker-machine env docker-dev)"

Finally, we’ll want to ensure that our machine is totally up to date by issuing the following:

docker-machine upgrade docker-dev

##Write A Test Container##

Now that we have a working Docker Machine in OpenStack, let’s try deploying something fun to it. First, we’ll create a Dockerfile to simply install Apache and push a little image and a webpage.

In a test directory, I created three files: Dockerfile, index.html, and logo.png. Here are the contents of the first two (logo.png is just whatever image you’d like to serve):

Dockerfile:
FROM ubuntu:14.04
MAINTAINER Spencer Smith <>
RUN apt-get update
RUN apt-get install -y apache2
ADD index.html /var/www/html/index.html
ADD logo.png /var/www/html/logo.png
RUN chmod 777 /var/www/html/logo.png

index.html:
<img src="logo.png" width="300" height="300"/>
<h3>Hello, World!</h3>


Finally, we’ll build our container image. Change into the directory that contains the files we just created and issue docker build. I’m also supplying a tag so that I can easily identify my apache container that I’m building. The docker build command can take a little while to complete, as there’s a lot happening with the update and installation of apache2.

docker build -t rsmitty/apache .

##Test It Out##

Now that our image has been created, it’s time to test it out by launching our new container in our machine. We can do that simply by calling the docker run command. Note that we will launch apache in the foreground so that it continues running and keeps our container up.

docker run -d -p 80:80 rsmitty/apache /usr/sbin/apache2ctl -D FOREGROUND

Point your browser to the IP address of our machine and see the results!
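If you’d rather check from the terminal first, Docker Machine can hand you the machine’s IP, and curl can do the looking (assuming the machine name docker-dev from earlier):

curl "http://$(docker-machine ip docker-dev)/"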