Every now and again I get some pretty interesting questions from clients that stick with me. And rarer than that, I have a bit of free time and get a chance to delve into some of these stranger questions and figure out how you would actually accomplish them. Such is the case with the question “How do we listen to the Kubernetes clusters we’re spinning up and add their resources to an internal registry of systems?”. Aren’t we supposed to not care that much about our pods, and just let Kubernetes work its magic? Yes! But hey, sometimes you have to do weird stuff in the enterprise…

So I took this question as an opportunity to learn a bit more about golang, since my only real experience with it was looking through the Kubernetes and Docker Engine repos from time to time. Luckily, I was able to hack together just enough to act on the creation and deletion of pods in my cluster. I thought this might make for an interesting blog post so other folks can see how it’s done and how one might extend this to do more robust things. You should also expect this to be a bit of a golang intro.

Learning by Example

Being pretty new to golang, I felt like I needed a good example to parse and learn from. I recalled from a conversation with a colleague that this type of event sniffing is pretty much exactly how KubeDNS works. The kube2sky program acts as a bridge between Kubernetes and the SkyDNS containers that run as part of the DNS addon in a deployed cluster. It watches for the creation of new services, endpoints, and pods and then configures SkyDNS accordingly by pushing changes to etcd. This was a wonderful starting point, but it took me quite a while to grok what was happening and, after doing so, I just wanted to boil the program down to the basics and do something a bit simpler.

Hack Away

Let’s get started hacking on our k8s-sniffer program.

  • Create a file called k8s-sniffer.go on your system under $GOPATH/src/k8s-sniffer. I’m going to operate under the assumption that you’ve got go already installed.

  • Let’s add the absolute basics for a standard go program: package, imports, and main function definition

package main

import(
//Import necessary external packages
)

func main(){
//Implement main function
}
  • We’ve got the bare bones; now let’s import the things we’ll actually need from Kubernetes’ go packages. Update your import section to look like:
import (
	"fmt"
	"log"
	"net/http"
	"time"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/client/cache"
	"k8s.io/kubernetes/pkg/client/restclient"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/controller/framework"
	"k8s.io/kubernetes/pkg/fields"
	"k8s.io/kubernetes/pkg/util/wait"
)
  • Notice that the imports at the top look different from the ones at the bottom. That’s because the ones at the top are golang built-ins, while the second group comes from GitHub repositories and go will pull them down for you.

  • Go ahead and pull down these dependencies (it’ll take a while) by running go get -v in the directory containing k8s-sniffer.go
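Put together, the setup so far boils down to something like this (a rough sketch, assuming $GOPATH is already set and exported):

##Create the project directory under $GOPATH (if you haven't already)
mkdir -p "$GOPATH/src/k8s-sniffer"
cd "$GOPATH/src/k8s-sniffer"
##With the import block above saved in k8s-sniffer.go, pull the dependencies
go get -v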

  • Now let’s get started hacking on the main function. After looking through kube2sky, I knew that I needed to do three things in my main: authenticate to the cluster, call a watcher function, and keep my service alive. You can do this by updating main to look like:

func main() {

	//Configure cluster info
	config := &restclient.Config{
		Host:     "https://xxx.yyy.zzz:443",
		Username: "kube",
		Password: "supersecretpw",
		Insecure: true,
	}

	//Create a new client to interact with cluster and freak if it doesn't work
	kubeClient, err := client.New(config)
	if err != nil {
		log.Fatalln("Client not created sucessfully:", err)
	}

	//Create a cache to store Pods
	var podsStore cache.Store

	//Watch for Pods
	podsStore = watchPods(kubeClient, podsStore)

	//Keep alive
	log.Fatal(http.ListenAndServe(":8080", nil))

}
  • Notice above that some of the configs need to be changed to match your own environment.

  • Also notice that many of the functions we’re using in this main function come from other packages we’ve imported.

  • If you were to run this program now, the compiler would complain that you’ve told it to use the watchPods function, but it doesn’t actually exist yet. Create this function above main:

func watchPods(client *client.Client, store cache.Store) cache.Store {

	//Define what we want to look for (Pods)
	watchlist := cache.NewListWatchFromClient(client, "pods", api.NamespaceAll, fields.Everything())

	resyncPeriod := 30 * time.Minute

	//Setup an informer to call functions when the watchlist changes
	eStore, eController := framework.NewInformer(
		watchlist,
		&api.Pod{},
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    podCreated,
			DeleteFunc: podDeleted,
		},
	)

	//Run the controller as a goroutine
	go eController.Run(wait.NeverStop)
	return eStore
}
  • And finally, in this function, you’ll notice that there are two handler functions called when the watchlist is updated. Create podCreated and podDeleted:
func podCreated(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod created: "+pod.ObjectMeta.Name)
}

func podDeleted(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod deleted: "+pod.ObjectMeta.Name)
}
  • The full file now looks like:
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/client/cache"
	"k8s.io/kubernetes/pkg/client/restclient"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/controller/framework"
	"k8s.io/kubernetes/pkg/fields"
	"k8s.io/kubernetes/pkg/util/wait"
)

func podCreated(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod created: "+pod.ObjectMeta.Name)
}

func podDeleted(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod deleted: "+pod.ObjectMeta.Name)
}

func watchPods(client *client.Client, store cache.Store) cache.Store {

	//Define what we want to look for (Pods)
	watchlist := cache.NewListWatchFromClient(client, "pods", api.NamespaceAll, fields.Everything())

	resyncPeriod := 30 * time.Minute

	//Setup an informer to call functions when the watchlist changes
	eStore, eController := framework.NewInformer(
		watchlist,
		&api.Pod{},
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    podCreated,
			DeleteFunc: podDeleted,
		},
	)

	//Run the controller as a goroutine
	go eController.Run(wait.NeverStop)
	return eStore
}

func main() {

	//Configure cluster info
	config := &restclient.Config{
		Host:     "https://xxx.yyy.zzz:443",
		Username: "kube",
		Password: "supersecretpw",
		Insecure: true,
	}

	//Create a new client to interact with cluster and freak if it doesn't work
	kubeClient, err := client.New(config)
	if err != nil {
		log.Fatalln("Client not created sucessfully:", err)
	}

	//Create a cache to store Pods
	var podsStore cache.Store

	//Watch for Pods
	podsStore = watchPods(kubeClient, podsStore)

	//Keep alive
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Fire Away

  • We can finally run our program and see events when Pods are created or destroyed! You’ll see several messages when you first run it, since the existing pods get added to the store.
spencers-mbp:k8s-siffer spencer$ go run k8s-sniffer.go
Pod created: dnsmasq-vx2sw
Pod created: default-http-backend-0zj29
Pod created: nginx-ingress-lb-xgvin
Pod created: kubedash-3370066188-rmy2n
Pod created: dnsmasq-gru7c
Pod created: kubernetes-dashboard-imtnm
Pod created: kube-dns-v11-dhgyx
Pod created: test-rc-h7v6l
Pod created: test-rc-3l1oo
  • Try scaling down an RC to see the delete: kubectl scale rc test-rc --replicas=0
Pod deleted: test-rc-h7v6l
Pod deleted: test-rc-3l1oo

Hope this helps!

Back after a pretty lengthy intermission! Today I want to talk about Kubernetes. I’ve recently had some clients that have been interested in running Docker containers in a production environment and, after some research and requirements gathering, we came to the conclusion that the functionality they wanted was not easily provided by the Docker suite of tools. These are things like guaranteeing a number of replicas running at all times, easily creating endpoints and load balancers for those replicas, and enabling more complex deployment methodologies like blue/green or rolling updates.

As it turns out, all of this stuff is included to some extent or another with Kubernetes, and we were able to recommend that they explore this option to see how it works out for them. Of course, recommending is the easy part, while implementation is decidedly more complex. The desire for the proof of concept was to enable multi-cloud deployments of Kubernetes while also remaining within their pre-chosen set of tools like Amazon AWS, OpenStack, CentOS, Ansible, etc. To accomplish this, we were able to create a Kubernetes deployment using Hashicorp’s Terraform, Ansible, OpenStack, and Amazon. This post will talk a bit about how to roll your own cluster by adapting what I’ve seen.

Why Would I Want to do This?

This is totally a valid question. And the answer here is that you don’t… if you can help it. There are easier and more fully featured ways to deploy Kubernetes if you have free rein over the tools you choose. As a recommendation, I would say that using Google Container Engine is by far the most supported and pain-free way to get started with Kubernetes. Following that, I would recommend using Amazon AWS and CoreOS as your operating system. Again, lots of people using these tools means that bugs and gotchas are well documented and easier to deal with. It should also be noted that there are OpenStack built-ins for creating Kubernetes clusters, such as Magnum. Again, if you’re a one-cloud shop, this is likely easier than rolling your own.

Alas, here we are and we’ll search for a way to get it done!

What Pieces are in Play?

For the purposes of this walkthrough, there will be four pieces that you’ll need to understand:

  • OpenStack - An infrastructure as a service cloud platform. I’ll be using this in lieu of Amazon.
  • Terraform - Terraform allows for automated creation of servers, external IPs, etc. across a multitude of cloud environments. This was a key choice to allow for a seamless transition to creating resources in both Amazon and OpenStack.
  • Ansible - Ansible is a configuration management platform that automates things like package installation and config file setup. We will use a set of Ansible playbooks called KubeSpray Kargo to set up Kubernetes.
  • Kubernetes - And finally we get to K8s! All of the tools above will come together to give us a fully functioning cluster.

Clone KubeSpray’s Kargo

First we’ll want to pull down the Ansible playbooks we want to use.

  • If you’ve never installed Ansible, it’s quite easy on a Mac with brew install ansible. Other instructions can be found here.

  • Ensure git is also installed with brew install git.

  • Create a directory for all of your deployment files and change into that directory. I called mine ‘terra-spray’.

  • Issue git clone git@github.com:kubespray/kargo.git. A new directory called kargo will be created with the playbooks:

Spencers-MBP:terra-spray spencer$ ls -lah
total 104
drwxr-xr-x  13 spencer  staff   442B Apr  6 12:48 .
drwxr-xr-x  12 spencer  staff   408B Apr  5 16:45 ..
drwxr-xr-x  15 spencer  staff   510B Apr  5 16:55 kargo
  • Note that there are a plethora of different options available with Kargo. I highly recommend spending some time reading up on the project and the different playbooks out there in order to deploy the specific cluster type you may need.
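For reference, the setup and clone steps above boil down to something like the following (assuming a Mac with Homebrew; adjust the install step for your platform):

##Install prerequisites and pull down the Kargo playbooks
brew install ansible git
mkdir terra-spray && cd terra-spray
git clone git@github.com:kubespray/kargo.git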

Create Terraform Templates

We want to create two Terraform templates: the first will create our OpenStack infrastructure, while the second will create an Ansible inventory file for Kargo to use. Additionally, we will create a variable file where we can populate our desired OpenStack variables as needed. The Terraform syntax can look a bit daunting at first, but it starts to make sense as we look at it more and see it in action.

  • Create all of the files with touch 00-create-k8s-nodes.tf 01-create-inventory.tf terraform.tfvars. The .tf and .tfvars extensions are Terraform-specific.

  • In the variables file, terraform.tfvars, populate with the following information and update the variables to reflect your OpenStack installation:

node-count="2"
internal-ip-pool="private"
floating-ip-pool="public"
image-name="Ubuntu-14.04.2-LTS"
image-flavor="m1.small"
security-groups="default,k8s-cluster"
key-pair="spencer-key"
  • Now we want to create our Kubernetes master and nodes using the variables described above. Open 00-create-k8s-nodes.tf and add the following:
##Setup needed variables
variable "node-count" {}
variable "internal-ip-pool" {}
variable "floating-ip-pool" {}
variable "image-name" {}
variable "image-flavor" {}
variable "security-groups" {}
variable "key-pair" {}

##Create a single master node and floating IP
resource "openstack_compute_floatingip_v2" "master-ip" {
  pool = "${var.floating-ip-pool}"
}

resource "openstack_compute_instance_v2" "k8s-master" {
  name = "k8s-master"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${openstack_compute_floatingip_v2.master-ip.address}"
}

##Create desired number of k8s nodes and floating IPs
resource "openstack_compute_floatingip_v2" "node-ip" {
  pool = "${var.floating-ip-pool}"
  count = "${var.node-count}"
}

resource "openstack_compute_instance_v2" "k8s-node" {
  count = "${var.node-count}"
  name = "k8s-node-${count.index}"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${element(openstack_compute_floatingip_v2.node-ip.*.address, count.index)}"
}
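Before moving on to the inventory piece, it doesn’t hurt to sanity-check the template. Assuming your OpenStack credentials are sourced into the environment, a quick dry run looks something like:

##Source credentials and preview what Terraform would create
source /path/to/credfile.sh
terraform plan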
  • Now, with what we have here, our infrastructure is provisioned on OpenStack. However, we want to get the information about our infrastructure into the Kargo playbooks to use as its Ansible inventory. Add the following to 01-create-inventory.tf:
resource "null_resource" "ansible-provision" {

  depends_on = ["openstack_compute_instance_v2.k8s-master","openstack_compute_instance_v2.k8s-node"]

  ##Create Masters Inventory
  provisioner "local-exec" {
    command =  "echo \"[kube-master]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" > kargo/inventory/inventory"
  }

  ##Create ETCD Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[etcd]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" >> kargo/inventory/inventory"
  }

  ##Create Nodes Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[kube-node]\" >> kargo/inventory/inventory"
  }
  provisioner "local-exec" {
    command =  "echo \"${join("\n",formatlist("%s ansible_ssh_host=%s", openstack_compute_instance_v2.k8s-node.*.name, openstack_compute_floatingip_v2.node-ip.*.address))}\" >> kargo/inventory/inventory"
  }

  provisioner "local-exec" {
    command =  "echo \"\n[k8s-cluster:children]\nkube-node\nkube-master\" >> kargo/inventory/inventory"
  }
}

This template certainly looks a little confusing, but what is happening is that Terraform is taking the information for the created Kubernetes master and nodes and outputting the hostnames and IP addresses into the Ansible inventory format at a local path of ./kargo/inventory/inventory. A sample output looks like:

[kube-master]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[etcd]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[kube-node]
k8s-node-0 ansible_ssh_host=xxx.xxx.xxx.xxx
k8s-node-1 ansible_ssh_host=xxx.xxx.xxx.xxx

[k8s-cluster:children]
kube-node
kube-master

Setup OpenStack

You may have noticed in the Terraform section that we attached a k8s-cluster security group in our variables file. You will need to set this security group up to allow the necessary ports used by Kubernetes. Follow this list and enter the rules into Horizon.
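If you would rather script it than click through Horizon, a rough sketch with the unified openstack CLI might look like the following. The specific ports here (SSH, the API server, and the NodePort range) and the intra-group rules are my own assumptions rather than the canonical list, so defer to whatever reference you’re following and note that flag syntax varies a bit between client versions:

##Create the security group referenced in terraform.tfvars
openstack security group create k8s-cluster

##Let cluster members talk to each other freely (assumed convenience rule)
openstack security group rule create --protocol tcp --remote-group k8s-cluster k8s-cluster
openstack security group rule create --protocol udp --remote-group k8s-cluster k8s-cluster

##Open SSH, the API ports, and the NodePort range to the outside world
for ports in 22:22 443:443 6443:6443 30000:32767; do
  openstack security group rule create --protocol tcp --dst-port "${ports}" k8s-cluster
done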

Hold On To Your Butts!

Now that Terraform is set up, we should be able to launch our cluster and have it provision using the Kargo playbooks we checked out. But first, one small BASH script to ensure things run in the proper order.

  • Create a file called cluster-up.sh and open it for editing. Paste the following:
#!/bin/bash

##Create infrastructure and inventory file
echo "Creating infrastructure"
terraform apply

##Run Ansible playbooks
echo "Quick sleep while instances spin up"
sleep 120
echo "Ansible provisioning"
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i kargo/inventory/inventory -u ubuntu -b kargo/cluster.yml

You’ll notice I included a two-minute sleep because the nodes created by Terraform weren’t always ready to accept an SSH session by the time Ansible started reaching out to them. Finally, update the -u flag in the ansible-playbook command to a user that has SSH access to the OpenStack instances you created. I used ubuntu because that’s the default SSH user for Ubuntu cloud images.
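If the fixed sleep feels fragile, a rough alternative is to poll for SSH on each node before kicking off Ansible. A minimal sketch, assuming nc is available locally and the inventory file has already been written by Terraform:

##Wait for port 22 on every host in the generated inventory
for ip in $(awk -F= '/ansible_ssh_host/{print $2}' kargo/inventory/inventory); do
  echo "Waiting for SSH on ${ip}..."
  until nc -z -w 5 "${ip}" 22; do sleep 5; done
done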

  • Source your OpenStack credentials file with source /path/to/credfile.sh

  • Launch the cluster with ./cluster-up.sh. The Ansible deployment will take quite a bit of time as the necessary packages are downloaded and set up.

  • Assuming all goes as planned, SSH into your Kubernetes master and issue kubectl get nodes:

ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS    AGE
k8s-node-0   Ready     1m
k8s-node-1   Ready     1m

Today, we’ll go on yet another trip through container land. I’ve found myself itching to throw anything and everything that I may run on a server into a container. It is so easy to create a Dockerfile, build my image, then deploy it wherever I want with Docker Machine that I tend to do it first thing when I have something new to run. Being able to write it once and have it run anywhere really is powerful stuff.

So that said, I was reading a bit the other day about grid computing. You may have heard of some of the interesting grid projects like SETI@Home. The idea here is that if you have a machine that often sits idle, you can donate your CPU (and/or GPU) to do some number crunching for the cause. I was thinking to myself upon reading this that I have some VMs that sit around idle for quite a bit of time unless I’m actively prototyping something, so why not give this a shot? I was also surprised to learn that a lot of these projects have standardized on the same software suite, called BOINC; you just attach to the project of your choice when the application launches. Sounds like a nice idea for a container!

##Choose Project##

  • Pick which projects you’re interested in by starting at the project page of BOINC’s website. I picked SETI@Home, Rosetta, and World Community Grid.

  • You’ll need to create accounts on all of the projects that you are interested in. Once completed, take note of the account keys for each. We will need them later.

##Create A Docker Image##

  • Create a new Dockerfile in a directory.

  • Initialize our Dockerfile by adding a FROM and MAINTAINER section. I picked an Ubuntu 14.04 base image to build off of.

FROM ubuntu:14.04
MAINTAINER Spencer Smith <robertspencersmith@gmail.com>
  • Next, we will install the BOINC client. Luckily, it is included in Ubuntu’s repos, so it isn’t very difficult.
##Install BOINC
RUN apt-get update && apt-get install -y boinc-client
  • We will then want to set our working directory to the BOINC client’s lib directory. This will allow the commands that follow to complete successfully.
##Set working directory
WORKDIR /var/lib/boinc-client
  • We will now set the default command for our image. This command (which is admittedly a bit long) will start BOINC’s client service, sleep briefly, use the ‘boinccmd’ tool to attach to a project, then tail the stdout/stderr logs for the client. You may notice the ‘${boincurl}’ and ‘${boinckey}’ parts of the command. Those are environment variables that point us to the project we wish to connect to. You will see them in use later when we launch our container.
##Run BOINC by default. Expects env vars for url and account key
CMD /etc/init.d/boinc-client start; sleep 5; /usr/bin/boinccmd --project_attach ${boincurl} ${boinckey}; tail -f /var/lib/boinc-client/std*.txt
  • That’s it for the Dockerfile. Save it and exit. Here’s the complete file:
FROM ubuntu:14.04
MAINTAINER Spencer Smith <robertspencersmith@gmail.com>

##Install BOINC
RUN apt-get update && apt-get install -y boinc-client

##Set working directory
WORKDIR /var/lib/boinc-client

##Run BOINC by default. Expects env vars for url and account key
CMD /etc/init.d/boinc-client start; sleep 5; /usr/bin/boinccmd --project_attach ${boincurl} ${boinckey}; tail -f /var/lib/boinc-client/std*.txt
  • We can now build our image by running docker build -t rsmitty/boinc . in the directory. Feel free to tag differently of course.

##Start Crunching!##

Now that we have an image to use, let’s launch some containers.

  • First, find your proper docker endpoint with docker-machine ls. I’ll be using my digital ocean docker host for this tutorial.
Spencers-MacBook-Pro:boinc spencer$ docker-machine ls
NAME      ACTIVE   DRIVER         STATE     URL                         SWARM
default            virtualbox     Stopped
do-dev             digitalocean   Running   tcp://REDACTED:2376
  • Set your docker environment variables to the proper values with
eval "$(docker-machine env do-dev)"
  • Launch a container, substituting your own desired values for boinckey and boincurl. You should be able to find these values from the account settings for the sites you registered earlier. Also feel free to name your container as you see fit.
docker run -ti -d --name wcg -e "boincurl=www.worldcommunitygrid.org" -e "boinckey=1234567890" rsmitty/boinc
  • Once launched, we can peek in on our jobs either by allocating a new TTY to the container with docker exec -ti wcg /bin/bash or by checking the logs with docker logs wcg:
Spencers-MacBook-Pro:boinc spencer$ docker logs wcg
 * Starting BOINC core client: boinc                                     [ OK ]
 * Setting up scheduling for BOINC core client and children:

....

29-Aug-2015 15:48:49 [World Community Grid] Started download of 933fbd61802442e2861afa0b31aedcc6.pdbqt
29-Aug-2015 15:48:51 [World Community Grid] Finished download of 933fbd61802442e2861afa0b31aedcc6.pdbqt

That’s it! You can check in on your accomplishments for each project in your account settings. You can find my image for this tutorial in the Docker Hub. If you wish to learn more about the BOINC project itself, please visit their website.

As I’ve been learning more about the container ecosystem, I’ve come across the concept of JeOS (just enough operating system). The idea here is that you want to gain as much performance out of your Docker containers as possible, so you minimize the cruft from your host operating system. There are several different JeOS options, but today we’ll talk about RancherOS. RancherOS is a very small, ~20MB, OS that you can use as a Docker host. Everything in RancherOS runs inside of a container and Docker itself runs as pid 1. RancherOS ships as an ISO, so today, I’ll guide you through using the ISO to create a QCOW image for use in OpenStack.

##Setup KVM##

  • First, ensure you have a proper KVM environment set up. This can involve quite a bit of configuration: making sure virtualization is enabled in the BIOS, that your CPU supports it, and so on. I followed these directions on a new Ubuntu machine and it worked just fine; there’s also a rough sketch of the basic checks after this list.

  • You can test that your KVM setup is working properly by issuing virsh list. That should return an empty list:

root@ubuntu:/home/rsmitty# virsh list
 Id    Name                           State
----------------------------------------------------
  • Finally, install the virt-install tool with sudo apt-get install virtinst.
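As referenced above, a minimal check-and-install sketch for an Ubuntu host might look like this (the package names are assumptions based on a 14.04/16.04-era release, so adjust for yours):

##Check that the CPU advertises virtualization extensions (non-zero output is good)
egrep -c '(vmx|svm)' /proc/cpuinfo

##Install KVM, libvirt, and the virt-install tool
sudo apt-get update
sudo apt-get install -y qemu-kvm libvirt-bin bridge-utils virtinst

##Confirm libvirt is responding
virsh list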

##Install Packer##

  • Now, we want to use Packer to build our image so we need to download it and get it installed properly. You can find directions on setting up Packer here.

  • If you need an intro to Packer in general, I’ve written another guide that was published to the Solinea website. You can find that here.

##Create Our Templates##

Now that we are all set up, we need to create two files: a cloud-config.yml file that gets injected into RancherOS and a Packer template called kvm-rancheros.json that we’ll use to build our QCOW image.

  • Create a file called cloud-config.yml with the following content. Be sure to replace the ssh public key with your own so that it gets baked into the image. Unfortunately, there’s no injection of keys during boot in OpenStack for RancherOS, so take care to make sure you get the correct one in there. Here’s what my cloud-config.yml looked like:
#cloud-config

ssh_authorized_keys:
  - ssh-rsa ... spencer@Spencers-MacBook-Pro.local
  • There are lots of options for building images with Packer, so it can be a bit daunting at first. For our purposes, we will need to use the KVM builder directly and pass in the RancherOS ISO. Once booted, Packer will scp our cloud-config.yml file into the temporary instance and then issue the proper command to install RancherOS to disk. After this is complete, Packer will provide output on the location of the QCOW image. This image will simply be called “rancheros” and the path to it will be “$PWD/output_rancheros/rancheros”. Here’s the full template:
{
  "builders":
  [
    {
      "type": "qemu",
      "iso_url": "https://releases.rancher.com/os/latest/rancheros.iso",
      "iso_checksum_type": "md5",
      "iso_checksum": "63b54370f8c5f8645d6088be15ab07b0",
      "output_directory": "output_rancheros",
      "ssh_wait_timeout": "30s",
      "shutdown_command": "sudo shutdown -h now",
      "disk_size": 1024,
      "format": "qcow2",
      "headless": true,
      "accelerator": "kvm",
      "ssh_username": "rancher",
      "ssh_password": "rancher",
      "ssh_port": 22,
      "ssh_wait_timeout": "90m",
      "vm_name": "rancheros",
      "net_device": "virtio-net",
      "disk_interface": "virtio",
      "boot_wait": "5s",
      "qemuargs": [
        [ "-m", "1024M" ]
      ]
    }
  ],
  "provisioners": [
  {
     "type": "file",
     "source": "cloud_config.yml",
     "destination": "/home/rancher/cloud_config.yml"
   },
  {
    "type": "shell",
    "inline": [
      "sleep 5",
      "sudo rancheros-install -f -c cloud_config.yml -d /dev/vda"
    ]
  }]
}

Notice the ‘-m’ flag in the qemuargs section. You MUST have at least 1GB of RAM to complete the install successfully.

##Build and Upload##

  • It’s now time to build our image. Packer will fetch and verify the RancherOS ISO for us and proceed to take care of all of the necessary commands. Issue packer build kvm-rancheros.json.

  • Once our build is complete, we’ll want to upload it to Glance. This image is ~40MB, so it shouldn’t take a terribly long time. You can issue the following command (after sourcing your OpenStack credentials):

glance image-create --name "RancherOS" \
--is-public false --disk-format qcow2 \
--container-format bare --file $PWD/output_rancheros/rancheros

##Launch An Instance & Connect##

  • We can now spin up our RancherOS instance inside of OpenStack. Ensure that you have ports 22, 80, and 2376 allowed in the security group that you choose to use for your instance.

  • Once our instance has been created, we will use Docker Machine’s generic driver to connect to our launched instance. Again, because SSH keys aren’t injected into RancherOS, we can’t use the OpenStack driver for Docker Machine and have it launch the instance for us. Here is the command that I used for connecting Docker Machine; notice I passed the path to the SSH key I injected earlier.

docker-machine create -d generic --generic-ssh-user rancher \
--generic-ssh-key ~/.ssh/id_rsa --generic-ip-address 192.168.1.202 \
rancher-dev
  • Set rancher-dev as our Docker endpoint with the following:
eval "$(docker-machine env rancher-dev)"
  • You can ensure everything is connected with a combination of docker-machine ls and docker ps. I like to echo a dashed line to give some separation in my commands.
spencers-mbp:~ spencer$ docker-machine ls && echo "------------" && docker ps
NAME          ACTIVE   DRIVER         STATE     URL                         SWARM
rancher-dev   *        generic        Running   tcp://192.168.1.202:2376
vbox-dev               virtualbox     Running   tcp://192.168.99.100:2376
------------
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

##Grow The Root Volume & Deploy A Container##

  • Again, keep in mind that RancherOS doesn’t currently do some cloud-init functions like growing the root volume. We’ll need to launch a privileged container to do this. Issue the following:
docker run --privileged -i --rm ubuntu bash << EOF
apt-get update
apt-get install -y cloud-guest-utils parted
growpart /dev/vda 1
partprobe
resize2fs /dev/vda1
EOF

Credit to Darren Shepherd for this script mentioned here

  • Launch my test-webserver container by issuing the following:
docker run -d -p 80:80 rsmitty/test-webserver /usr/sbin/apache2ctl -D FOREGROUND
  • Check out the result!
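One quick way to check the result, assuming port 80 is open in your security group and your shell is still pointed at rancher-dev:

##Hit the webserver through the host IP that Docker Machine reports
curl -I "http://$(docker-machine ip rancher-dev)"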

This post is simply here to document how to remove exited containers and untagged images.

##Exited Containers## You can find previously exited containers by using filters:

docker ps -f "status=exited"

This will return the full text for each exited container:

Spencers-MacBook-Pro:~ spencer$ docker ps -f "status=exited"
CONTAINER ID        IMAGE                                           COMMAND                CREATED             STATUS                      PORTS               NAMES
72b10aebd2b0        rsmitty/ostack                                                     "/bin/sh -c /bin/bas   9 hours ago         Exited (126) 9 hours ago                        distracted_wilson
aa6c9cfc662f        rsmitty/ostack                                                     "/bin/sh -c /bin/bas   9 hours ago         Exited (0) 9 hours ago                          sick_einstein
1bfce6a324b5        ubuntu:14.04                                                       "/bin/bash"            9 hours ago                                                         jovial_einstein

To remove all of them, you can nest a command similar to the one above inside the docker rm command:

docker rm $(docker ps -qf "status=exited")

Docker responds with a list of IDs that it deleted:

Spencers-MacBook-Pro:~ spencer$ docker rm $(docker ps -qf "status=exited")
72b10aebd2b0
aa6c9cfc662f
1bfce6a324b5
f7fd1c00837c

##Untagged Images## If you wish to clean up untagged (dangling) images, you can find them with another filter command, similar to the one above:

docker images -f "dangling=true"

This will return a formatted list of the untagged images:

Spencers-MacBook-Pro:~ spencer$ docker images -f "dangling=true"
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
<none>              <none>              bfa68ad8ff4c        23 hours ago        457.1 MB
<none>              <none>              0043ceae2104        23 hours ago        457.1 MB
<none>              <none>              edff0ab07895        23 hours ago        421.3 MB
<none>              <none>              6ae539b22ab9        23 hours ago        421 MB

And again, nesting a variation of that command to actually do the cleanup:

docker rmi $(docker images -qf "dangling=true")

Now something interesting happens!

Spencers-MacBook-Pro:~ spencer$ docker rmi $(docker images -qf "dangling=true")
Error response from daemon: Conflict, cannot delete bfa68ad8ff4c because the running container 7e5c96166fcb is using it, stop it and use -f to force
Deleted: edff0ab0789548cf33db3589eae5cc93589e7aea379bc3383f58c00b71ebb8cb
Deleted: e619828bd6f049d81a1920b96634534044ab0bf8f1dd4e40d9daf82d9a5c80b6
Deleted: ac0a2e7c0897058649e9e31cd4a319ee08158646990a607a54a0492f27e6e275
Error: failed to remove images: [bfa68ad8ff4c]

If an image is in use by a container, you must first stop (and remove) that container before the image can be deleted. This is a good thing, though; it can keep you from blowing yourself up :)
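If you find yourself doing this cleanup often, the two commands combine nicely into a small script. A minimal sketch, using only the filters shown above:

#!/bin/bash
##Remove exited containers, if any exist
exited=$(docker ps -qf "status=exited")
[ -n "$exited" ] && docker rm $exited

##Remove dangling (untagged) images, if any exist
dangling=$(docker images -qf "dangling=true")
[ -n "$dangling" ] && docker rmi $dangling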