This one’s a weird one. I was trying to figure out some interesting containers to build when I overheard someone at work struggling to install the OpenStack client CLIs on his machine. I thought to myself, what if I could install these once and just push them to whatever environment I please? Or even share them with other folks to use? Here’s how you can do it:

##Create A Dockerfile##

First I got the basics down for creating a Dockerfile to install the proper CLI packages. It was handy to simply boot an ubuntu:14.04 container and test these steps out manually first. That’s an easy one, just do:

docker run -ti ubuntu:14.04 /bin/bash

From that point I did a little trial and error to figure out the basics of installing pip, installing the OpenStack client packages with pip, and throwing in a few extra dependencies that I encountered. Here’s the full Dockerfile; we’ll talk about the script that gets added in next:

FROM ubuntu:14.04
MAINTAINER Spencer Smith <robertspencersmith@gmail.com>

##Install pip and necessary dependencies for clients
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python-dev\
    python-pip\
    libffi-dev\
    libssl-dev

##Install ssl patches for python 2.7, then install clients
RUN pip install six --upgrade\
    pyopenssl\
    ndg-httpsclient\
    pyasn1\
    python-ceilometerclient\
    python-cinderclient\
    python-glanceclient\
    python-heatclient\
    python-keystoneclient\
    python-neutronclient\
    python-novaclient\
    python-saharaclient\
    python-swiftclient\
    python-troveclient\
    python-openstackclient

##Upload our creds checker, make it executable, and set it as our default command
ADD creds.sh /ostack/
RUN chmod +x /ostack/creds.sh
CMD /ostack/creds.sh

##Deal With Credentials##

After creating the Dockerfile, I wanted to make sure that I could create a container general enough for others to use if they wanted. As such, I wanted most of the normal OpenStack environment variables to be passed in on the docker run command line. To handle this, I wrote a quick bash script called creds.sh to add into the container.

The creds.sh file will just ensure that the OS_AUTH_URL, OS_REGION_NAME, OS_TENANT_NAME, and OS_USERNAME variables are present in the environment, then it will prompt the user for their password so that they don’t have to put it in plaintext inside the docker run command. Finally, once all the info is present, the script will simply launch a bash session for the user.

Here’s the full creds.sh script:

#!/bin/bash

##Ensure everything but password is passed in as env variable
for var in OS_AUTH_URL OS_REGION_NAME OS_TENANT_NAME OS_USERNAME; do
  if [[ -z ${!var} ]]; then
    echo "${var} is unset. Please pass as an env variable input!"
    exit 1
  fi
done

##Prompt for password input
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT

##Start a bash prompt
/bin/bash
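
Since the guard loop is plain bash, you can sanity-check it outside the container too. Here’s a sketch that wraps the same logic in a function (the function name `require_vars` is mine, not part of creds.sh):

```shell
#!/bin/bash

## Same check as creds.sh, wrapped in a reusable function
require_vars() {
  for var in "$@"; do
    if [[ -z ${!var} ]]; then
      echo "${var} is unset. Please pass as an env variable input!"
      return 1
    fi
  done
}

## Fails: OS_USERNAME is not set yet
unset OS_USERNAME
require_vars OS_USERNAME || echo "check failed as expected"

## Succeeds once the variable is exported
export OS_USERNAME=spencer
require_vars OS_USERNAME && echo "check passed"
```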

##Build The Container##

Now, we simply need to build our container. You can give this your own tag if you like. Here’s what my docker build command looked like:

docker build -t rsmitty/ostack .

Make sure you run this from the directory containing your Dockerfile.

##Use It!##

We can now launch our container and use it to talk to an OpenStack cloud. Ensure that the proper environment variables are passed in using the -e flag. Here’s what my docker run command looks like:

docker run -ti -e OS_AUTH_URL=https://REDACTED_URL:5000/v2.0/ -e OS_REGION_NAME=RegionOne -e OS_TENANT_NAME=admin -e OS_USERNAME=spencer rsmitty/ostack

Once the run has put you in the bash prompt, you should be able to use your environment!

root@ecedc1a96a49:/# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 2032053f-9809-4c29-b8f2-731fef7a01db | available |     test     |  2   |    iscsi    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
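
Since that docker run line is a mouthful, you might wrap it in a small shell function in your profile. This is a sketch of my own, not from the steps above; `ostack` and `OSTACK_DRY_RUN` are names I made up, and the dry-run flag just prints the command instead of running docker:

```shell
#!/bin/bash

## Build the docker run invocation from environment variables.
## With OSTACK_DRY_RUN set, print the command rather than executing it.
ostack() {
  local cmd=(docker run -ti
    -e "OS_AUTH_URL=${OS_AUTH_URL}"
    -e "OS_REGION_NAME=${OS_REGION_NAME}"
    -e "OS_TENANT_NAME=${OS_TENANT_NAME}"
    -e "OS_USERNAME=${OS_USERNAME}"
    rsmitty/ostack)
  if [[ -n ${OSTACK_DRY_RUN:-} ]]; then
    echo "${cmd[@]}"
  else
    "${cmd[@]}"
  fi
}

## Dry-run example with the same values as the run command above
export OS_AUTH_URL=https://example.com:5000/v2.0/
export OS_REGION_NAME=RegionOne OS_TENANT_NAME=admin OS_USERNAME=spencer
OSTACK_DRY_RUN=1 ostack
```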

This post will go into some detail about how to get started with docker-machine. Docker Machine is a really nice tool to aid in deploying docker hosts across any environment. It seems to be quickly becoming the standard for creating dev environments. Today, I’ll go through how to talk to OpenStack with Docker Machine and then deploy a quick container.

##Install Docker Machine##

I installed both Docker and Docker Machine with Homebrew. There are several other installation options that you can find in the “Installation” sections here and here.

For Homebrew, simply issue this in the terminal:

brew install docker-machine

##Prepare OpenStack##

There are a few things that we need to do on the OpenStack side in order to ensure that our machine can be created successfully. First, go ahead and source your keystone credentials. Docker Machine will use these environment variables if they are present. This keeps us from having to pass a bunch of authentication parameters later on.

Next, take a look at your OpenStack environment. We’ll need to gather up some IDs and ensure some things are set up properly. First, take a look at the security group that you plan to use and ensure that SSH access is allowed into it. It’s also important to note here that you’ll want to allow any ports that you plan to map into your containers. For me, I allowed ports 22 and 80 initially. Now, let’s gather some IDs. I needed to find the ssh user for my image type (CentOS 7), the image ID, the flavor I wished to use, the floating-ip pool name, and finally the security group that I wanted to use.

##Create Our Machine##

We’re finally ready to create our machine. Using the IDs I found above, here is the (extremely verbose) command that I issued:

docker-machine create --driver openstack\
 --openstack-ssh-user centos\
 --openstack-image-id cfb0a24a-16a5-4d19-a15b-ee29c9375d52\
 --openstack-flavor-name m1.small\
 --openstack-floatingip-pool public\
 --openstack-sec-groups default\
 docker-dev

Be patient here. I found that creating the machine took quite a while, as the docker-machine command will SSH into the instance and do some long-running tasks like ‘yum upgrade’.

Once complete, we’ll want to override our built-in docker settings to point to our new machine. We can do that by issuing:

eval "$(docker-machine env docker-dev)"

Finally, we’ll want to ensure that our machine is totally up to date by issuing the following:

docker-machine upgrade docker-dev
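
If you open a new terminal later and forget the eval step, docker will silently talk to the wrong daemon. A tiny guard you could drop into a script (my own sketch, not part of docker-machine) checks that the variables the eval exported are actually set:

```shell
#!/bin/bash

## docker-machine env exports DOCKER_HOST (among others); if it is
## empty, the docker CLI falls back to the local default socket.
docker_env_ok() {
  if [[ -z ${DOCKER_HOST:-} ]]; then
    echo "DOCKER_HOST is unset -- run: eval \"\$(docker-machine env docker-dev)\""
    return 1
  fi
  echo "docker CLI pointed at ${DOCKER_HOST}"
}

## Example: complain when the variable is missing
unset DOCKER_HOST
docker_env_ok || echo "(not configured yet)"
```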

##Write A Test Container##

Now that we have a working Docker Machine in OpenStack, let’s try deploying something fun to it. First, we’ll create a Dockerfile to install Apache and serve a little image and a webpage.

In a test directory, I created three files: Dockerfile, index.html, and logo.png. Here’s the contents of each file:

Dockerfile:

FROM ubuntu:14.04
MAINTAINER Spencer Smith <robertspencersmith@gmail.com>
RUN apt-get update
RUN apt-get install -y apache2
ADD index.html /var/www/html/index.html
ADD logo.png /var/www/html/logo.png
RUN chmod 777 /var/www/html/logo.png

index.html:

<html>
<img src="logo.png" width="300" height="300"/>
<h3>Hello, World!</h3>
</html>

logo.png: any small image will do here.

Finally, we’ll build our container image. Change into the directory that contains the files we just created and issue docker build. I’m also supplying a tag so that I can easily identify my apache container that I’m building. The docker build command can take a little while to complete, as there’s a lot happening with the update and installation of apache2.

docker build -t rsmitty/apache .

##Test It Out##

Now that our image has been created, it’s time to test it out by launching our new container in our machine. We can do that simply by calling the docker run command. Note that we will launch apache in the foreground so that it continues running and keeps our container up.

docker run -d -p 80:80 rsmitty/apache /usr/sbin/apache2ctl -D FOREGROUND

Point your browser to the IP address of the machine (docker-machine ip docker-dev will print it) and see the results!
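
For a scriptable check instead of a browser, you can curl the machine and grep for the page content. A sketch; the grep target just matches the index.html above, and the commented curl line shows how it would be used against the live machine:

```shell
#!/bin/bash

## Succeeds if the page on stdin contains our greeting
page_ok() {
  grep -q "Hello, World!"
}

## Against the real machine this would be:
##   curl -s "http://$(docker-machine ip docker-dev)/" | page_ok && echo "site is up"
## Here we just feed it the snippet from index.html:
echo '<h3>Hello, World!</h3>' | page_ok && echo "site is up"
```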

Today, I’m going to detail my steps for installing Docker. Docker is an extension of Linux Containers (LXC) and aims to provide an easier-to-use environment. This will just be a basic install guide and I will write another post soon, once I figure out how to do some more interesting stuff.

Docker and LXC are interesting because you can run several isolated containers directly in userspace on a Linux host. One of the big advantages here is that no hypervisor is required and you don’t need a guest OS like with VMs. This means that containers can be created scarily fast and should be more performant than their VM counterparts. I’ve seen some debate about whether or not containers are as secure as plain VMs, but truthfully haven’t delved too deeply into the details around this. Docker is a project I’ve been following at a high level for a while because of the potential to hook it into OpenStack, but I’m just now getting around to actually putting my hands on it.

##Setup a Host##

Setting up a host for your Docker containers is pretty easy. Docker is able to run in pretty much any environment. I’m going to use a Vagrant CentOS 6.5 box, but you can find other install instructions here.

  • Docker is part of the EPEL repo, so let’s install that with:
sudo yum -y install epel-release
  • Once that’s complete, let’s update all of our packages. I found that I couldn’t start the Docker daemon without updating; there’s a device mapper package that has to be a newer version. After doing this, we can simply install Docker with:
sudo yum -y update
sudo yum -y install docker-io
  • Start the Docker daemon and configure it to run at boot:
sudo service docker start
sudo chkconfig docker on
  • Pull in the CentOS 6 base container. This may take a bit of time depending on your internet connection.
sudo docker pull centos:centos6
  • Now let’s test that it works by asking docker to run a command inside a container. The run command below will create a container, issue the echo command, then shut the container down.
sudo docker run centos:centos6 echo "Hola, Mundo!"

This post will detail how to host git repos on a server that you own. I’ll be covering how to set up your server-side repo and then how to connect from a remote machine via SSH.

##Setup Our Server##

  • First and foremost, we’ll need to install git. This is going to depend on your package manager, but I’m using CentOS right now, so I’ll be issuing
sudo yum install -y git
  • Now we’ll need to add a user to our system for git. Let’s do that and then switch to that user with:
sudo useradd git
sudo su - git
  • Now that we are the git user, we can set up the SSH keys that we want to accept by making the authorized_keys file and putting the public keys of each user we want to have access in this file. After creating this directory and file, we need to set the permissions on them properly or SSH will complain.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
  • Add the desired public SSH keys to authorized_keys. You can add several of these if you want several users to have access to this git repo. Just separate the keys by putting them on a new line. This should look something like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHytEnXVEiGKu6XVDh/evJhM5ANngMeRJiizr6jsiOmWyMtuuqtGi/84EDQ54OOwDlBfdC72YjPaYEafyez8fYls7M2L82P2Ka96hFapUWwF9TzxAw1yEkV81Rv2OZWpAdf451UCZPClludtym0DyGwZdMGfVJx8ZNPJ61lwx5ijwWQvY4dhZF0Hjo431c9d1mgOLxu94WJ15PC6CjAI9zh/zddmJMHgClkqTuGWWf/t3e/SZ8AJ5ABUtcjPutUdJBGvPI814eD3+JgE18D6AiHN/uWm0JLYx5P06htqb2Eb6uAsCJjTIDyl+I0bOYRUp8PlYzJALv+x8RxP1R35Wr rsmitty@github.com
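
If one of your users doesn’t have a keypair yet, they can generate one and send you the .pub half. A sketch (the directory, filename, and comment are just examples; for a real daily-use key you’d want a passphrase instead of -N ""):

```shell
#!/bin/bash

## Make a scratch directory so we don't clobber any real keys
keydir=$(mktemp -d)

## Generate a 4096-bit RSA keypair with an empty passphrase (demo only)
ssh-keygen -t rsa -b 4096 -N "" -C "git-demo-key" -f "$keydir/id_rsa"

## The public half is what goes into authorized_keys
cat "$keydir/id_rsa.pub"
```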

##Create Git Repo##

  • It’s time to finally create our git repo. Let’s create an easy directory called /git/ and a subdirectory under that for our test project. We need to switch back to our normal user (with sudo ability) to create a directory at the root. You can do that simply by issuing ‘exit’.
sudo mkdir -p /git/testproject.git
sudo chown -R git:git /git
  • Now, back as the git user, initialize the git repo by using the ‘git init’ command inside that directory:
cd /git/testproject.git
git init --bare
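
Before handing the repo out, you can confirm it really was initialized as bare. A sketch using a temp directory as a stand-in for /git/testproject.git; rev-parse is standard git:

```shell
#!/bin/bash

## A bare repo has no working tree; git can confirm that directly
repo=$(mktemp -d)        # stand-in for /git/testproject.git
git init --bare "$repo"
git -C "$repo" rev-parse --is-bare-repository   # prints "true"
```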

##Test It Out##

  • Back on your local machine, let’s verify that this is actually working for us. This should be as simple as doing a git clone to the proper path on the remote server:
git clone git@GITSERVERNAME:/git/testproject.git
  • Change into the local testproject directory and create a file for our first commit:
touch README.md
  • Let’s add, commit, and push the file up.
git add README.md
git commit -m "initial commit"
git push origin master

Now we’ve got a fully functional git repo with a master branch. All ready to go!
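
If you already have a local project (rather than cloning fresh), you can point it at the new server instead. A sketch; GITSERVERNAME is the same placeholder used above, and the temp directory stands in for your project:

```shell
#!/bin/bash

## Attach an existing local repo to the server-side bare repo
work=$(mktemp -d)
git -C "$work" init
git -C "$work" remote add origin git@GITSERVERNAME:/git/testproject.git

## Shows origin with its fetch and push URLs
git -C "$work" remote -v
```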

Today’s post will go into some detail on getting started with Rerun. Rerun is a tool that’s meant to bridge the gap between having a bunch of sysadmin scripts and a full-blown configuration management tool. The truth is that a lot of times, groups have a bunch of bash scripts that can perform differently on different machines or exist in several different versions. This makes it hard to ensure that you’re always using the right one with the right flags. Rerun sets out to help wrangle your shell scripts and present them as something super easy to use.

##Install Rerun##

  • Installing Rerun is really just a ‘git clone’ and then adding a bunch of variables to your .bash_profile. I rolled it all into a script so it can just be run (at your own risk). Just issue chmod +x whatever_you_name.sh, followed by ./whatever_you_name.sh.
#!/bin/bash

##Checkout Rerun to home directory
cd $HOME
git clone https://github.com/rerun/rerun.git

##Append rerun particulars to user profile
##Quote the delimiter so $PATH and $HOME expand when the profile is sourced, not now
cat << 'EOF' >> $HOME/.bash_profile
##Begin vars for rerun
export PATH=$PATH:$HOME/rerun
export RERUN_MODULES=$HOME/rerun/modules
[ -r $HOME/rerun/etc/bash_completion.sh ] && source $HOME/rerun/etc/bash_completion.sh
[ -t 0 ] && export RERUN_COLOR=true
##End vars for rerun
EOF
  • Exit the terminal and restart, then issue rerun to see if it’s working. This should give you a list of the modules installed:
Spencers-MBP:~ spencer$ rerun
Available modules:
  stubbs: "Simple rerun module builder" - 1.2.2
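
One gotcha with that profile snippet: whether the heredoc delimiter is quoted decides if variables expand when the file is written or later, when the profile is sourced. An unquoted delimiter would bake your current $PATH into .bash_profile. A quick self-contained demonstration (the temp files are just for illustration):

```shell
#!/bin/bash

GREETING="hello"

## Unquoted delimiter: $GREETING expands NOW, the file gets "hello"
cat << EOF > /tmp/unquoted.txt
value: $GREETING
EOF

## Quoted delimiter: the text is written literally, the file gets "$GREETING"
cat << 'EOF' > /tmp/quoted.txt
value: $GREETING
EOF

cat /tmp/unquoted.txt   # value: hello
cat /tmp/quoted.txt     # value: $GREETING
```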

##Create a Module & Command##

Now let’s run through the Rerun tutorial. A lot of this part of the post will be a rehashing of that page, with some differences here and there to keep myself from just copying/pasting and not actually committing this to memory. We will be creating a waitfor module that simply waits for a variety of different conditions: ping to be available at a given address, a file to exist, etc.

  • Rerun uses a module:command type syntax, where module is kind of the general idea of what you’re trying to do, while command is the specifics. So, let’s use the stubbs module’s add-module command to create the bones for our waitfor module:
rerun stubbs:add-module --module waitfor --description "waits for a condition."
  • Okay, now let’s add a ping command to our waitfor module with
rerun stubbs:add-command --module waitfor --command ping --description "wait for ping response from address"

Note that this command creates both a script and a test file. script is what will actually get run, while the test file is for us to write a test plan.

  • For ping, we’ll want to add a host and an interval option. Host will be required, while we will set the interval option with a default and make overriding that optional.
  • Set the required host option:
rerun stubbs:add-option --option host --description "host to ping" --module waitfor --command ping --required true --export false --default '""'
  • Set the optional interval option:
rerun stubbs:add-option --option interval --description "how long to wait between attempts" --module waitfor --command ping --required false --export false --default 30
  • Let’s make sure our params look right by checking the output with rerun waitfor. Rerun gives a pretty easy to read/understand output when you try to figure out what a module is capable of.
Spencers-MBP:~ spencer$ rerun waitfor
Available commands in module, "waitfor":
ping: "wait for ping response from address"
    --host <"">: "host to ping"
   [ --interval <30>]: "how long to wait between attempts"

##Implement the Command##

So now we’ve got our command created, but it doesn’t actually do anything. Rerun can’t read our mind, so it just lays down some basics and it’s up to us to implement the particulars.

  • Open the file ~/rerun/modules/waitfor/commands/ping/script for editing. Scroll down to the bottom, where you will see:
# Command implementation
# ----------------------

# - - -
# Put the command implementation here.
# - - -
  • Replace the ‘Put the command implementation here’ comment with your code. I had to throw in a -t flag in the ping command to time out quicker on Mac. For our ping check, the code will look like:
## Loop until a single ping packet returns a result string that contains 64.
## 64 is the number of bytes in ping response
until ( ping -c 1 -t 1 $HOST | grep -q ^64 )
do
   ##Sleep by our interval if unsuccessful
   sleep $INTERVAL
   echo Pinging $HOST...
done

##Finally return when ping available
echo "OK: $HOST is pingable."
  • Test it out with a call to localhost. This should always return a positive ping:
Spencers-MBP:~ spencer$ rerun waitfor:ping --host localhost --interval 1
OK: localhost is pingable.
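
One thing to watch: the until loop will spin forever if the host never answers. A variation with a retry cap is easy to bolt on. This is my own tweak, not part of the tutorial; wait_for takes a maximum attempt count and then any command to use as the condition:

```shell
#!/bin/bash

## Retry a condition command until it succeeds or we hit the cap
wait_for() {
  local max_attempts=$1; shift
  local attempt=1
  until "$@"; do
    if (( attempt >= max_attempts )); then
      echo "FAIL: condition never came true after $max_attempts attempts"
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$INTERVAL"
  done
  echo "OK: condition met after $attempt attempt(s)"
}

## With the ping check it would be something like:
##   wait_for 5 ping -c 1 -t 1 localhost
## Demo with trivial conditions so it runs anywhere:
INTERVAL=1
wait_for 2 false   # gives up after 2 tries
wait_for 5 true    # succeeds on the first try
```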

##Write Tests##

Okay, let’s write the tests for our new command. This will help us ensure it’s working the right way.

  • Open ~/rerun/modules/waitfor/tests/ping-1-test.sh for editing. Remove the whole ‘it_fails_without_a_real_test’ block.

  • We’ll create two new functions. One will check that the required host is present. The other will check that localhost responds as expected. These tests are straight from the wiki tutorial with extra comments to explain what’s actually happening.

##Check that the required host is passed in
it_fails_without_required_options() {
    ##Make a temp file to write to
    OUT=$(mktemp /tmp/waitfor:ping-XXXX)
    ##Negate the error of not passing a host with '!'. Write results to outfile.
    ##The '2>' param redirects stderr to the outfile
    ! rerun waitfor:ping 2> $OUT
    ##Check that missing text is in outfile
    grep 'missing required option: --host' $OUT
    ##Delete outfile
    rm $OUT
}
##Check that command works for localhost
it_reaches_localhost() {
    ##Make a temp file to write to
    OUT=$(mktemp /tmp/waitfor:ping-XXXX)
    ##Run with localhost passed as host param
    rerun waitfor:ping --host localhost > $OUT
    ##Ensure proper output is present in outfile
    grep 'OK: localhost is pingable.' $OUT
    ##Delete outfile
    rm $OUT
}
  • Finally, let’s check the output of the stubbs:test command to make sure our tests pass. Issue rerun stubbs:test --module waitfor --plan ping
Spencers-MBP:~ spencer$ rerun stubbs:test --module waitfor --plan ping
=========================================================
 TESTING MODULE: waitfor
=========================================================
ping
  it_fails_without_required_options:               [PASS]
  it_reaches_localhost:                            [PASS]
=========================================================
Tests:    2 | Passed:   2 | Failed:   0

##Extend, Extend, Extend##

Now that we have learned all of the functionality from the official tutorial, it’s time to extend our module to do other things. Consider what the ‘waitfor’ module is for. It is there to wait on things in general, not just ping responses. So let’s extend our module to support another wait use case, waiting for a file to exist.

  • First let’s add the new command to our module. This is as simple as it was earlier, just pass the proper options as needed:
rerun stubbs:add-command --module waitfor --command file --description "Waits for a file to be present on the system"
  • Add options for the filepath we want to check, as well as the interval we want to wait to check:
rerun stubbs:add-option --option filepath --description "full path of file to wait for" --module waitfor --command file --required true --export false --default '""'
rerun stubbs:add-option --option interval --description "how long to wait between attempts" --module waitfor --command file --required false --export false --default 30
  • Time to implement the actual logic behind our file checker. You’ll notice that since this command is similar in function to our ping command, a lot of the same logic that we used previously still applies. Here’s the relevant bash from ‘waitfor/commands/file/script’:
until [ -f "$FILEPATH" ]
do
 ##Sleep by our interval if unsuccessful
 sleep $INTERVAL
 echo "Checking for file at $FILEPATH"
done

##Finally return when file exists
echo "OK: $FILEPATH now exists." 
  • We can now see this in action by issuing our command, waiting for a few cycles to occur, then touching the file that we want to exist in another terminal. For me, the touch command was simply touch /tmp/test.txt.
Spencers-MBP:~ spencer$ rerun waitfor:file --filepath "/tmp/test.txt" --interval 1
Checking for file at /tmp/test.txt
Checking for file at /tmp/test.txt
Checking for file at /tmp/test.txt
Checking for file at /tmp/test.txt
OK: /tmp/test.txt now exists.
  • Finally, we would want to write some tests around this command to ensure it functions as expected when variables are missing, etc. This post is getting pretty lengthy, so I will leave that task up to you.
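
The same pattern can also be exercised without rerun at all: run the loop body in one shell while something else creates the file. A self-contained sketch, using a background subshell in place of the second terminal and a temp path in place of /tmp/test.txt:

```shell
#!/bin/bash

FILEPATH=$(mktemp -u)   # a path that does not exist yet
INTERVAL=1

## Stand-in for the second terminal: touch the file after a moment
( sleep 2; touch "$FILEPATH" ) &

## The same loop the file command's script runs
until [ -f "$FILEPATH" ]
do
  sleep $INTERVAL
  echo "Checking for file at $FILEPATH"
done

##Finally return when file exists
echo "OK: $FILEPATH now exists."
```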

And that’s it! I hope you enjoyed this intro to Rerun. It’s a really fun tool to use once you pick up the basics, and it makes it dead simple for teammates (even those who may not be very adept with bash) to execute scripts in a known, repeatable manner.