Today, I’m going to detail my steps for installing Docker. Docker builds on Linux Containers (LXC) and aims to provide an easier-to-use environment. This will just be a basic install guide; I’ll write another post soon, once I figure out how to do some more interesting stuff.
Docker and LXC are interesting because you can run several isolated containers directly in userspace on a Linux host. One of the big advantages here is that no hypervisor is required and you don’t need a guest OS like with VMs. This means that containers can be created scarily fast and should be more performant than their VM counterparts. I’ve seen some debate about whether or not containers are as secure as plain VMs, but truthfully I haven’t delved too deeply into the details around this. Docker is a project I’ve been following at a high level for a while because of the potential to hook it into OpenStack, but I’m just now getting around to actually putting my hands on it.
##Setup a Host##
Setting up a host for your Docker containers is pretty easy, since Docker runs in pretty much any Linux environment. I’m going to use a Vagrant CentOS 6.5 box, but you can find install instructions for other platforms in the Docker docs.
Docker is part of the EPEL repo, so let’s install that with:
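On CentOS 6 that looks something like this (the exact epel-release package version is an assumption; check the EPEL site for the current release):

```shell
# Install the EPEL repository package on CentOS 6
# (release number 6-8 assumed; verify against the Fedora/EPEL mirrors)
sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
```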
Once that’s complete, let’s update all of our packages. I found that I couldn’t start the Docker daemon without updating; there’s a device-mapper package that has to be a newer version. After doing this, we can simply install Docker with:
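Something along these lines (docker-io is the EPEL package name for Docker on CentOS 6):

```shell
# Update everything (this pulls in the newer device-mapper Docker needs),
# then install Docker from EPEL
sudo yum -y update
sudo yum -y install docker-io
```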
Start the Docker daemon and configure it to run at boot:
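On CentOS 6 that should be the usual service/chkconfig pair:

```shell
# Start the daemon now, and register it to start at boot
sudo service docker start
sudo chkconfig docker on
```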
Pull in the CentOS 6 base container. This may take a bit of time depending on your internet connection.
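A sketch of the pull command:

```shell
# Download the centos base image from the public registry
sudo docker pull centos
```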
Now let’s test that it works by asking docker to run a command inside a container. The run command below will create a container, issue the echo command, then shut the container down.
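For example:

```shell
# Create a container from the centos image, run echo inside it,
# then let the container exit
sudo docker run centos /bin/echo "hello from inside a container"
```

If everything is working, you should see the echoed string printed back almost instantly.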
This post will detail how to host git repos on a server that you own. I’ll be covering how to set up your server-side repo and then how to connect from a remote machine via SSH.
##Setup Our Server##
First and foremost, we’ll need to install git. This is going to depend on your package manager, but I’m using CentOS right now, so I’ll be issuing
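On CentOS:

```shell
# Install git via yum
sudo yum -y install git
```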
Now we’ll need to add a user to our system for git. Let’s do that and then switch to that user with:
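A sketch of those two steps:

```shell
# Create a dedicated git user, then switch to it
sudo adduser git
sudo su - git
```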
Now that we are the git user, we can set up the SSH keys that we want to accept by creating an authorized_keys file and adding the public key of each user who should have access. After creating the .ssh directory and the file, we need to set their permissions properly or SSH will complain.
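Run as the git user, that looks something like:

```shell
# Create .ssh and authorized_keys with the permissions sshd expects:
# 700 on the directory, 600 on the key file
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```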
Add the desired public SSH keys to authorized_keys. You can add several of these if you want multiple users to have access to this git repo; just put each key on its own line. The file should look something like:
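The usernames and truncated keys below are just placeholders:

```
ssh-rsa AAAAB3NzaC1yc2E...truncated...== alice@laptop
ssh-rsa AAAAB3NzaC1yc2E...truncated...== bob@desktop
```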
##Create Git Repo##
It’s time to finally create our git repo. Let’s create a simple directory called /git/ and a subdirectory under that for our test project. We need to switch back to our normal (sudo-capable) user to create a directory at the root. You can do that simply by issuing ‘exit’.
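Something like the following; I’m calling the project directory testproject.git (the .git suffix is just a common convention for server-side repos):

```shell
exit   # drop back to our sudo-capable user

# Create the repo directory and hand ownership to the git user
sudo mkdir -p /git/testproject.git
sudo chown -R git:git /git
```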
Now, back as the git user, initialize the git repo by using the ‘git init’ command inside that directory:
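For a server-side repo you generally want a bare repo (no working tree), so the init looks like:

```shell
# As the git user: initialize a bare repository
cd /git/testproject.git
git init --bare
```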
##Test It Out##
Back on your local machine, let’s verify that this is actually working for us. This should be as simple as doing a git clone to the proper path on the remote server:
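For example (gitserver here is a placeholder for your server’s hostname or IP):

```shell
# Clone over SSH as the git user
git clone git@gitserver:/git/testproject.git
```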
Change into the local testproject directory and create a file for our first commit:
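The file name here is arbitrary:

```shell
cd testproject
echo "hello world" > README
```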
Let’s add, commit, and push the file up.
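The standard workflow:

```shell
git add README
git commit -m "first commit"
git push origin master
```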
Now we’ve got a fully functional git repo with a master branch. All ready to go!
Today’s post will go into some detail on getting started with Rerun.
Rerun is a tool that’s kind of meant to bridge the gap between having a
bunch of sysadmin scripts and a full-blown configuration management tool.
The truth is that a lot of times, groups have a bunch of bash scripts
that can perform differently on different machines or exist in several different
versions. This makes it hard to ensure that you’re always using the right one,
the right flags are being passed, etc., etc. Rerun sets out to help wrangle your
shell scripts and present them as something super easy to use.
Installing Rerun is really just a ‘git clone’ and then adding a bunch of
variables to your .bash_profile. I rolled it all into a script so it can just be
run (at your own risk). Just issue chmod +x whatever_you_name.sh,
followed by ./whatever_you_name.sh.
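My script looked roughly like this (the RERUN_MODULES variable name is taken from the rerun docs; double-check there before relying on it):

```shell
#!/bin/bash
# Rough sketch of the install script: clone rerun and wire up .bash_profile
git clone https://github.com/rerun/rerun.git ~/rerun

cat >> ~/.bash_profile <<'EOF'
export PATH=$PATH:$HOME/rerun
export RERUN_MODULES=$HOME/rerun/modules
EOF
```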
Exit the terminal and restart it, then issue rerun to see if it’s working. This should give you a list of the installed modules.
##Create a Module & Command##
Now let’s run through the Rerun tutorial.
A lot of this part of the post will be a rehashing of that page, with some differences
here and there to keep myself from just copying/pasting and not actually committing this
to memory. We will be creating a waitfor module that simply waits for a variety of
conditions, like a host answering ping or a file coming into existence.
Rerun uses a module:command type syntax, where module is kind of the general idea
of what you’re trying to do, while command is the specifics. So, let’s use the stubbs
module’s add-module command to create the bones for our waitfor module:
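The add-module invocation looks something like:

```shell
# Generate the skeleton for a new module called waitfor
rerun stubbs:add-module --module waitfor --description "waits for a condition to become true"
```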
Okay, now let’s add a ping command to our waitfor module with
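Along these lines:

```shell
# Generate the skeleton for a ping command inside the waitfor module
rerun stubbs:add-command --module waitfor --command ping --description "wait for a ping response from a host"
```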
Note that this command creates both a script and a test.sh file. script is what
will actually get run; the test file is where we write a test plan.
For ping, we’ll want to add a host and an interval option. host will
be required, while interval will get a default value that can be overridden.
Set the required host option:
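Using stubbs’ add-option command, roughly:

```shell
rerun stubbs:add-option --module waitfor --command ping \
    --option host --description "host to ping" --required true --export false
```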
Set the optional interval option:
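Same command, but not required and with a default:

```shell
rerun stubbs:add-option --module waitfor --command ping \
    --option interval --description "seconds to wait between pings" \
    --required false --export false --default 30
```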
Let’s make sure our params look right by checking the output with rerun waitfor.
Rerun gives pretty easy-to-understand output when you’re trying to figure out what
a module is capable of.
##Implement the Command##
So now we’ve got our command created, but it doesn’t actually do anything. Rerun
can’t read our minds, so it just lays down some boilerplate and it’s up to us to implement
the actual logic.
Open the file ~/rerun/modules/waitfor/commands/ping/script for editing.
Scroll down to the bottom, where you will see:
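Paraphrasing from memory of the stubbs template, the placeholder looks something like:

```
# - - -
# Put the command implementation here.
# - - -
```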
Replace the ‘Put the command implementation here’ comment with your code. I had to throw
in a -t flag on the ping command to time out more quickly on Mac.
For our ping check, the code will look like:
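Here’s a sketch; rerun exposes the options as uppercase shell variables (HOST and INTERVAL). Note the -t flag is the per-ping timeout on macOS — on Linux ping uses -W for that, and -t means TTL:

```shell
# Loop until $HOST answers a single ping, sleeping $INTERVAL seconds
# between attempts
until ping -c 1 -t 1 "$HOST" > /dev/null 2>&1
do
    echo "Pinging $HOST..."
    sleep "$INTERVAL"
done
echo "OK: $HOST is responding"
```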
Test it out with a call to localhost. This should always return a positive ping.
rerun waitfor:ping --host localhost --interval 1
Okay, let’s write the tests for our new command. This will help us ensure it’s working
the right way.
Open ~/rerun/modules/waitfor/tests/ping-1-test.sh for editing. Remove the stubbed-out test that’s already in the file.
We’ll create two new functions. One will check that the required host is present.
The other will check that localhost responds as expected. These tests are straight
from the wiki tutorial with extra comments to explain what’s actually happening.
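A sketch of the two tests; rerun’s test runner discovers functions named it_*:

```shell
# Test 1: running without the required --host option should fail,
# so invert the exit status
it_requires_the_host_option() {
    if rerun waitfor:ping --interval 1 2>/dev/null
    then
        exit 1   # the command unexpectedly succeeded
    fi
}

# Test 2: localhost should always answer, so this should return quickly
it_reaches_localhost() {
    rerun waitfor:ping --host localhost --interval 1
}
```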
Finally, let’s check the output of the stubbs:test command to make sure
our tests pass. Issue rerun stubbs:test --module waitfor --plan ping
##Extend, Extend, Extend##
Now that we have learned all of the functionality from the official tutorial, it’s time to extend our module to do other things. Consider what the ‘waitfor’ module is for. It is there to wait on things in general, not just ping responses. So let’s extend our module to support another wait use case, waiting for a file to exist.
First let’s add the new command to our module. This is as simple as it was earlier, just pass the proper options as needed:
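Just like the ping command earlier:

```shell
# Generate the skeleton for a file command inside the waitfor module
rerun stubbs:add-command --module waitfor --command file --description "wait for a file to exist"
```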
Add options for the filepath we want to check, as well as the interval we want to wait to check:
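Two add-option calls, mirroring what we did for ping:

```shell
rerun stubbs:add-option --module waitfor --command file \
    --option filepath --description "path of the file to wait for" \
    --required true --export false

rerun stubbs:add-option --module waitfor --command file \
    --option interval --description "seconds to wait between checks" \
    --required false --export false --default 5
```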
Time to implement the actual logic behind our file checker. You’ll notice that since this command is similar in function to our ping command, a lot of the same logic that we used previously still applies. Here’s the relevant bash from ‘waitfor/commands/file/script’:
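As with ping, rerun hands us the options as uppercase variables (FILEPATH, INTERVAL):

```shell
# Loop until the file at $FILEPATH exists, checking every $INTERVAL seconds
until [ -e "$FILEPATH" ]
do
    echo "Waiting for $FILEPATH..."
    sleep "$INTERVAL"
done
echo "OK: $FILEPATH exists"
```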
We can now see this in action by issuing our command, waiting for a few cycles to occur, then touching the file that we want to exist in another terminal. For me, the touch command was simply touch /tmp/test.txt.
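That is:

```shell
# Terminal 1: start waiting on the file
rerun waitfor:file --filepath /tmp/test.txt --interval 5

# Terminal 2: after a few cycles, create the file
touch /tmp/test.txt
```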
Finally, we would want to write some tests around this command to ensure it functions as expected when variables are missing and so on. This post is getting pretty lengthy, so I will leave that task up to you.
And that’s it! I hope you enjoyed this intro to Rerun. It’s a really fun tool to use once you pick up the basics, and it makes it dead simple for teammates (even those who may not be very adept with bash) to execute scripts in a known, repeatable manner.
Continuing on my thread of exploring new technologies for my new job, today I’ll
be looking at CFEngine and how we can use it for configuration management. I’ve
used other tools like Chef and Ansible in the past, but CFEngine is a new one
for me. I’ll be installing and configuring a server and some nodes in my home lab.
##Setup the Server##
I’m going to use the instructions for CFEngine enterprise for this tutorial. It
appears to be free for the first 25 nodes, so it will be nice to test against the
version that I may actually have to use at work.
Create a server in OpenStack and go ahead and SSH in. I had to use an Ubuntu 12.04
LTS image for this; 14.04 LTS returned an error about not being supported. I imagine
that will be fixed in the future.
Open the /etc/hosts file for editing and add an entry mapping the private IP address to
a hostname; the install script will fail if hostname -f doesn’t return
anything. I added this to my hosts file:
10.0.0.29 cfengine-server.localdomain. You may also have to run
sudo hostname cfengine-server.localdomain.
Grab the quick-install-cfengine-enterprise.sh install script from CFEngine’s site with wget.
Make it executable with chmod +x quick-install-cfengine-enterprise.sh.
Run the script with sudo rights and pass the hub argument to specify that this
will be a central hub server:
sudo ./quick-install-cfengine-enterprise.sh hub
Bootstrap the CFEngine hub with sudo /var/cfengine/bin/cf-agent --bootstrap 10.0.0.29
We should now be able to log in to the server’s web UI by going to the floating
IP address in a browser. The default login information is admin/admin. Make sure
your default security group lets port 80 in.
##Setup the Clients##
Now let’s get some clients set up so that we have some systems to actually manage
with our snazzy new server. This process is almost exactly the same as the above,
with the exception of the argument passed to quick-install-cfengine-enterprise.sh.
I won’t copy/paste everything from above, but just follow the same steps and when
you get there, issue this command instead:
sudo ./quick-install-cfengine-enterprise.sh agent
One last possible caveat here. I created an Ubuntu 12.04 image with the CFEngine
client installed and it caused a kernel panic on boot. I’m not sure what was going on,
but using a 14.04 image worked just fine.
Once you get the client setup completed, you should see your new nodes checked in
within the web UI.
As I was trying to write an ISO to a USB drive, I wanted to see the progress when
using the ‘dd’ command-line tool. I found a quick pointer on StackOverflow to use
the ‘pv’ command, so I adapted it a little for use on a Mac. This will also serve
as a guide on how to write ISOs on a Mac. Here’s how:
Install pv with homebrew: brew install pv
Find your USB drive with diskutil list. It should be pretty easy to spot, as
it will be smaller than the other disks. Tread lightly though; don’t
mess with your hard drive. I’ll use /dev/disk3, as that’s what my command returned.
Unmount it with diskutil unmountDisk /dev/disk3.
Become root with sudo su
Write your iso with this general layout, substituting paths where necessary:
dd if=/path/to/your.iso | pv | dd of=/dev/disk3 bs=1024k.