This post will detail how to host git repos on a server that you own. I’ll be covering how to set up your server-side repo and then how to connect from a remote machine via SSH.

##Setup Our Server##

  • First and foremost, we’ll need to install git. This is going to depend on your package manager, but I’m using CentOS right now, so I’ll be issuing
sudo yum install -y git
  • Now we’ll need to add a user to our system for git. Let’s do that and then switch to that user with:
sudo useradd git
sudo su - git
  • Now that we are the git user, we can set up the SSH keys that we want to accept by creating the authorized_keys file and putting the public key of each user we want to have access in that file. After creating this directory and file, we need to set the permissions on them properly or SSH will complain.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
  • Add the desired public SSH keys to authorized_keys. You can add several of these if you want multiple users to have access to this git repo; just separate the keys by putting each one on a new line. This should look something like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHytEnXVEiGKu6XVDh/evJhM5ANngMeRJiizr6jsiOmWyMtuuqtGi/84EDQ54OOwDlBfdC72YjPaYEafyez8fYls7M2L82P2Ka96hFapUWwF9TzxAw1yEkV81Rv2OZWpAdf451UCZPClludtym0DyGwZdMGfVJx8ZNPJ61lwx5ijwWQvY4dhZF0Hjo431c9d1mgOLxu94WJ15PC6CjAI9zh/zddmJMHgClkqTuGWWf/t3e/SZ8AJ5ABUtcjPutUdJBGvPI814eD3+JgE18D6AiHN/uWm0JLYx5P06htqb2Eb6uAsCJjTIDyl+I0bOYRUp8PlYzJALv+x8RxP1R35Wr rsmitty@github.com
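If you don’t have a key pair on your local machine yet, generating one and printing the public half to paste into authorized_keys is quick (the key type and comment below are just examples):

ssh-keygen -t rsa -b 4096 -C "you@example.com"
# Print the public key so you can copy it into authorized_keys on the server
cat ~/.ssh/id_rsa.pub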

##Create Git Repo##

  • It’s time to finally create our git repo. Let’s create an easy directory called /git/ and a subdirectory under that for our test project. We need to switch back to our normal user (with sudo ability) to create a directory at the root. You can do that simply by issuing ‘exit’.
sudo mkdir -p /git/testproject.git
sudo chown -R git:git /git
  • Now, back as the git user, initialize the git repo by using the ‘git init’ command inside that directory:
cd /git/testproject.git
git init --bare
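As a sanity check, a bare repo has no working tree, just git’s internal layout:

ls /git/testproject.git
# Expect to see HEAD, config, description, branches/, hooks/, info/, objects/, and refs/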

##Test It Out##

  • Back on your local machine, let’s verify that this is actually working for us. This should be as simple as doing a git clone to the proper path on the remote server:
git clone git@GITSERVERNAME:/git/testproject.git
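As an aside, if your server listens on a nonstandard SSH port or you want a dedicated key for it, a Host entry in your local ~/.ssh/config keeps the clone command this short (the port and key path below are placeholders to adjust for your setup):

Host GITSERVERNAME
    User git
    Port 22
    IdentityFile ~/.ssh/id_rsa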
  • Change into the local testproject directory and create a file for our first commit:
touch README.md
  • Let’s add, commit, and push the file up.
git add README.md
git commit -m "initial commit"
git push origin master
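If you already have a local repository that you’d like to push to this server rather than cloning a fresh one, you can just add it as a remote (assuming the repo doesn’t already have a remote named origin):

cd /path/to/existing/repo
git remote add origin git@GITSERVERNAME:/git/testproject.git
git push -u origin master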

Now we’ve got a fully functional git repo with a master branch. All ready to go!

Today’s post will go into some detail on getting started with Rerun. Rerun is a tool meant to bridge the gap between having a bunch of sysadmin scripts and a full-blown configuration management tool. The truth is that a lot of times, groups have a bunch of bash scripts that perform differently on different machines or exist in several different versions. This makes it hard to ensure that you’re always using the right one, that the right flags are being passed, and so on. Rerun sets out to help wrangle your shell scripts and present them as something super easy to use.

##Install Rerun##

  • Installing Rerun is really just a ‘git clone’ and then adding a bunch of variables to your .bash_profile. I rolled it all into a script so it can just be run (at your own risk). Just issue chmod +x whatever_you_name.sh, followed by ./whatever_you_name.sh.
#!/bin/bash

##Checkout Rerun to home directory
cd $HOME
git clone git://github.com/rerun/rerun.git

##Append rerun particulars to user profile
cat << 'EOF' >> $HOME/.bash_profile
##Begin vars for rerun
export PATH=$PATH:$HOME/rerun
export RERUN_MODULES=$HOME/rerun/modules
[ -r $HOME/rerun/etc/bash_completion.sh ] && source $HOME/rerun/etc/bash_completion.sh
[ -t 0 ] && export RERUN_COLOR=true
##End vars for rerun
EOF
  • Exit the terminal and restart, then issue rerun to see if it’s working. This should give you a list of the modules installed:
Spencers-MBP:~ spencer$ rerun
Available modules:
  stubbs: "Simple rerun module builder" - 1.2.2

##Create a Module & Command##

Now let’s run through the Rerun tutorial. A lot of this part of the post will be a rehashing of that page, with some differences here and there to keep myself from just copying/pasting and not actually committing any of it to memory. We will be creating a waitfor module that simply waits for a variety of different conditions: ping to be available at a given address, a file to exist, etc.

  • Rerun uses a module:command type syntax, where module is kind of the general idea of what you’re trying to do, while command is the specifics. So, let’s use the stubbs module’s add-module command to create the bones for our waitfor module:
rerun stubbs:add-module --module waitfor --description "waits for a condition."
  • Okay, now let’s add a ping command to our waitfor module with
rerun stubbs:add-command --module waitfor --command ping --description "wait for ping response from address"

Note that this command creates both a script file and a test file. The script is what will actually get run; the test file is for us to write a test plan.

  • For ping, we’ll want to add a host and an interval option. Host will be required, while interval will have a default value that can optionally be overridden.
  • Set the required host option:
rerun stubbs:add-option --option host --description "host to ping" --module waitfor --command ping --required true --export false --default '""'
  • Set the optional interval option:
rerun stubbs:add-option --option interval --description "how long to wait between attempts" --module waitfor --command ping --required false --export false --default 30
  • Let’s make sure our params look right by checking the output with rerun waitfor. Rerun gives pretty easy-to-read output when you try to figure out what a module is capable of.
Spencers-MBP:~ spencer$ rerun waitfor
Available commands in module, "waitfor":
ping: "wait for ping response from address"
    --host <"">: "host to ping"
   [ --interval <30>]: "how long to wait between attempts"

##Implement the Command##

So now we’ve got our command created, but it doesn’t actually do anything. Rerun can’t read our mind, so it just lays down some basics and it’s up to us to implement the particulars.

  • Open the file ~/rerun/modules/waitfor/commands/ping/script for editing. Scroll down to the bottom, where you will see:
# Command implementation
# ----------------------

# - - -
# Put the command implementation here.
# - - -
  • Replace the ‘Put the command implementation here’ comment with your code. I had to throw in a -t flag in the ping command to time out quicker on my Mac (there’s a note on the Linux equivalent after the snippet). For our ping check, the code will look like:
## Loop until a single ping packet returns a result string that contains 64.
## 64 is the number of bytes in ping response
until ( ping -c 1 -t 1 $HOST | grep -q ^64 )
do
   ##Sleep by our interval if unsuccessful
   sleep $INTERVAL
   echo Pinging $HOST...
done

##Finally return when ping available
echo "OK: $HOST is pingable."
  • Test it out with a call to localhost, which should always return a positive ping: rerun waitfor:ping --host localhost --interval 1
Spencers-MBP:~ spencer$ rerun waitfor:ping --host localhost --interval 1
OK: localhost is pingable.

##Write Tests##

Okay, let’s write the tests for our new command. This will help us ensure it’s working the right way.

  • Open ~/rerun/modules/waitfor/tests/ping-1-test.sh for editing. Remove the whole ‘it_fails_without_a_real_test’ block.

  • We’ll create two new functions. One will check that the required host is present. The other will check that localhost responds as expected. These tests are straight from the wiki tutorial with extra comments to explain what’s actually happening.

##Check that the required host is passed in
it_fails_without_required_options() {
    ##Make a temp file to write to
    OUT=$(mktemp /tmp/waitfor:ping-XXXX)
    ##Negate the error of not passing a host with '!'. Write results to outfile.
    ##The '2>' param redirects stderr to the outfile
    ! rerun waitfor:ping 2> $OUT
    ##Check that missing text is in outfile
    grep 'missing required option: --host' $OUT
    ##Delete outfile
    rm $OUT
}
##Check that command works for localhost
it_reaches_localhost() {
    ##Make a temp file to write to
    OUT=$(mktemp /tmp/waitfor:ping-XXXX)
    ##Run with localhost passed as host param
    rerun waitfor:ping --host localhost > $OUT
    ##Ensure proper output is present in outfile
    grep 'OK: localhost is pingable.' $OUT
    ##Delete outfile
    rm $OUT
}
  • Finally, let’s check the output of the stubbs:test command to make sure our tests pass. Issue rerun stubbs:test --module waitfor --plan ping
Spencers-MBP:~ spencer$ rerun stubbs:test --module waitfor --plan ping
=========================================================
 TESTING MODULE: waitfor
=========================================================
ping
  it_fails_without_required_options:               [PASS]
  it_reaches_localhost:                            [PASS]
=========================================================
Tests:    2 | Passed:   2 | Failed:   0

##Extend, Extend, Extend##

Now that we’ve worked through all of the functionality from the official tutorial, it’s time to extend our module to do other things. Consider what the ‘waitfor’ module is for. It is there to wait on things in general, not just ping responses. So let’s extend our module to support another wait use case: waiting for a file to exist.

  • First let’s add the new command to our module. This is as simple as it was earlier, just pass the proper options as needed:
rerun stubbs:add-command --module waitfor --command file --description "Waits for a file to be present on the system"
  • Add options for the filepath we want to check, as well as the interval we want to wait to check:
rerun stubbs:add-option --option filepath --description "full path of file to wait for" --module waitfor --command file --required true --export false --default '""'
rerun stubbs:add-option --option interval --description "how long to wait between attempts" --module waitfor --command file --required false --export false --default 30
  • Time to implement the actual logic behind our file checker. You’ll notice that since this command is similar in function to our ping command, a lot of the same logic that we used previously still applies. Here’s the relevant bash from ‘waitfor/commands/file/script’:
until [ -f "$FILEPATH" ]
do
 ##Sleep by our interval if unsuccessful
 sleep $INTERVAL
 echo "Checking for file at $FILEPATH"
done

##Finally return when file exists
echo "OK: $FILEPATH now exists." 
  • We can now see this in action by issuing our command, waiting for a few cycles to occur, then touching the file that we want to exist in another terminal. For me, the touch command was simply touch /tmp/test.txt.
Spencers-MBP:~ spencer$ rerun waitfor:file --filepath "/tmp/test.txt" --interval 1
Checking for file at /tmp/test.txt
Checking for file at /tmp/test.txt
Checking for file at /tmp/test.txt
Checking for file at /tmp/test.txt
OK: /tmp/test.txt now exists.
  • Finally, we would want to write some tests around this command to ensure it functions as expected when variables are missing, etc. This post is getting pretty lengthy, so I will mostly leave that task up to you, though there’s a rough sketch below to get you started.
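Here’s that sketch: a rough, untested starting point for ~/rerun/modules/waitfor/tests/file-1-test.sh that mirrors the ping tests above (the exact ‘missing required option’ message is an assumption based on the ping output):

##Check that the required filepath is passed in
it_fails_without_required_options() {
    OUT=$(mktemp /tmp/waitfor:file-XXXX)
    ! rerun waitfor:file 2> $OUT
    grep 'missing required option: --filepath' $OUT
    rm $OUT
}
##Check that the command returns once the file exists
it_finds_an_existing_file() {
    OUT=$(mktemp /tmp/waitfor:file-XXXX)
    ##Create the target file ahead of time so the loop exits immediately
    TARGET=$(mktemp /tmp/waitfor-target-XXXX)
    rerun waitfor:file --filepath $TARGET --interval 1 > $OUT
    grep "OK: $TARGET now exists." $OUT
    rm $OUT $TARGET
}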

And that’s it! I hope you enjoyed this intro to Rerun. It’s a really fun tool to use once you pick up the basics, and it really makes it dead simple to allow other teammates (even those who may not be very adept with bash) to execute scripts in a known, repeatable manner.

Continuing on my thread of exploring new technologies for my new job, today I’ll be looking at CFEngine and how we can use it for configuration management. I’ve used other tools like Chef and Ansible in the past, but CFEngine is a new one for me. I’ll be installing and configuring a server and some nodes in my home Openstack lab.

##Setup the Server##

I’m going to use the instructions for CFEngine Enterprise for this tutorial. It appears to be free for the first 25 nodes, so it will be nice to test against the version that I may actually have to use at work.

  • Create a server in Openstack and go ahead and SSH in. I had to use an Ubuntu 12.04 LTS image for this; 14.04 LTS returned an error about not being supported. I imagine that will be fixed in the future.

  • Open the /etc/hosts file for editing and add an entry for the private IP address to give it a hostname. The script below will fail if hostname -f doesn’t return anything. I added this to my hosts file: 10.0.0.29 cfengine-server.localdomain. You may also have to issue sudo hostname cfengine-server.localdomain.

  • Grab the CFEngine install script with wget http://s3.amazonaws.com/cfengine.packages/quick-install-cfengine-enterprise.sh.

  • Make it executable with chmod +x quick-install-cfengine-enterprise.sh.

  • Run the script with sudo rights and pass the hub argument to specify that this will be a central hub server: sudo ./quick-install-cfengine-enterprise.sh hub

  • Bootstrap the CFEngine hub with sudo /var/cfengine/bin/cf-agent --bootstrap 10.0.0.29

  • We should now be able to log in to the server’s web UI by going to the floating IP address in a browser. The default login information is admin/admin. Make sure your default security group lets port 80 in.

##Setup the Clients##

Now let’s get some clients set up so that we have some systems to actually manage with our snazzy new server. This process is almost exactly the same as the above, with the exception of the argument passed to quick-install-cfengine-enterprise.sh. I won’t copy/paste everything from above; just follow the same steps and, when you get there, issue this command instead: sudo ./quick-install-cfengine-enterprise.sh agent
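One thing worth calling out: as with the hub, each client needs to be bootstrapped before it will report in, and the bootstrap address should be the hub’s IP (not the client’s own):

sudo /var/cfengine/bin/cf-agent --bootstrap 10.0.0.29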

One last possible caveat here. I created an Ubuntu 12.04 image with the CFEngine client installed and it caused a kernel panic on boot. I’m not sure what was going on, but using a 14.04 image worked just fine.

Once you’ve completed the client setup, you should see your new nodes checked in when you look at the web UI.

As I was trying to write an ISO to a USB drive, I wanted to see the progress when using the ‘dd’ command line tool. I found a quick pointer on StackOverflow to use the ‘pv’ command, so I adapted it a little to use on a Mac. This will also serve as a guide on how to write ISOs on a Mac. Here’s how:

  • Install pv with homebrew: brew install pv
  • Find your USB drive with diskutil list. It should be pretty easy to spot the USB drive, as it will be smaller than the other disks. Tread lightly though; don’t mess with your hard drive. I’ll use /dev/disk3, as that’s what my command returned.
  • Unmount it with diskutil unmountDisk /dev/disk3.
  • Become root with sudo su
  • Write your ISO with this general layout, substituting paths where necessary: dd if=/path/to/your.iso | pv | dd of=/dev/disk3 bs=1024k. (See below for getting a percentage readout from pv.)
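By default pv only shows throughput; if you’d like a percentage and ETA as well, you can hand it the ISO’s size up front (this uses the BSD stat flags that ship with macOS):

dd if=/path/to/your.iso | pv -s $(stat -f %z /path/to/your.iso) | dd of=/dev/disk3 bs=1024k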

As part of a new job I’m taking, I wanted to learn more about image building for Openstack and other virtual environments. I’ve done it by hand for the customized OSes at my old job, but I haven’t had the chance to explore any automated solutions. I was pointed to Packer as a tool to build several different images at the same time (and automatically). It sounds like a great project and I’m going to use this post to get up to speed with using the basics. One quick caveat from the outset is that I’m not going to use Amazon at first. I’ll be running against my home Openstack lab since it’s free and a good excuse to get my homelab back in order.

##Install Packer##

I’ve got a shiny new MacBook, and installing Packer was actually really easy. The way I did it depended on homebrew, but you can also install manually from their docs here.

  • In terminal, ensure that you have homebrew set up by issuing brew.
  • Add the necessary tap with brew tap homebrew/binary.
  • Finally, install packer with brew install packer.
  • You can test that it’s installed by simply issuing packer in the terminal.

##Get Openstack Ready##

I run an all-in-one deployment of RDO Openstack at home. Obviously, there are a million different ways to deploy, but here and here are the pieces that I followed. It’s important to note that in my lab, instances come alive on a private network, then get access to my router’s 192.168.1.0/24 block via floating IPs. This will come into play a bit later with the Packer template.

  • Get a known good image into Glance by importing one of the big distros. I used the Ubuntu 14.04 LTS image found here. You can just put that link into Glance’s import dialog. My final dialog looked like this:

  • Take note of the new image’s UUID, as we’ll need that later:

[rsmitty@localhost ~(keystone_admin)]$ nova image-list
+--------------------------------------+-------------------+--------+----------+
| ID                                   | Name              | Status | Server   |
+--------------------------------------+-------------------+--------+----------+
| bf2ad7f1-3823-4ad2-a788-44a25827c93e | cirros            | ACTIVE |          |
| b3a4368b-7368-45e5-bfe4-63f59d732c41 | ubuntu 14.04      | ACTIVE |          |
+--------------------------------------+-------------------+--------+----------+

##Write Packer Template##

Okay, time to get busy. Let’s write a template for Packer to build an image. We’ll need to gather some info first.

  • Get your keystone info by catting out your keystonerc file. For me, this was cat keystonerc_admin. Some info below has been changed to protect the innocent.
[rsmitty@localhost ~(keystone_admin)]$ cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=testpass
export OS_AUTH_URL=http://192.168.1.200:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
  • Create a new JSON file somewhere on your machine. I simply called mine packer_template.

  • There are a lot of options for Openstack in Packer (found here). Some of this will vary with how your particular Openstack deployment is set up, but for me, this template contains all of the necessary basic fields:

{
  "builders": [
    {
      "type": "openstack",
      "username": "admin",
      "password": "testpass",
      "provider": "http://192.168.1.200:5000/v2.0",
      "ssh_username": "ubuntu",
      "project": "admin",
      "region": "RegionOne",
      "image_name": "Packer Test Image",
      "source_image": "b3a4368b-7368-45e5-bfe4-63f59d732c41",
      "flavor": "0d7e469c-e99b-4267-b154-35874b224f54",
      "networks": ["0296eb7d-7f94-4cc1-b42f-f2d680b81359"],
      "use_floating_ip": true
    }
  ]
}

Notes about what’s what:

  1. username & password: Map to OS_USERNAME and OS_PASSWORD from source file
  2. provider: Maps to OS_AUTH_URL
  3. region: Maps to OS_REGION_NAME
  4. source_image: UUID of the Ubuntu image we talked about earlier
  5. flavor: UUID of my m1.tiny flavor. Beware, this changes on any flavor update!
  6. networks: UUID of my private network. Can be an array of several networks. (Both this UUID and the flavor UUID can be looked up from the CLI, as shown below.)
  7. use_floating_ip: As mentioned earlier, floating IP allows Packer to actually SSH to this server across my home network.
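If you need to hunt down those UUIDs, the standard OpenStack CLIs will list them (assuming you’ve sourced your keystonerc file first):

source keystonerc_admin
nova flavor-list     # flavor IDs
nova image-list      # image UUIDs
neutron net-list     # network UUIDs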

##Test Time!##

Let’s see if this thing will actually create an image for us.

  • Save your template if you haven’t already.

  • Validate the template to make sure there aren’t any glaring errors with packer validate NAME_OF_TEMPLATE.json. This should return the text ‘Template validated successfully.’

  • Run the template with packer build NAME_OF_TEMPLATE.json. For me, this gave the following output when everything completely worked:

Spencers-MBP:Desktop spencer$ packer build packer_template
openstack output will be in this color.

==> openstack: Creating temporary keypair for this instance...
==> openstack: Waiting for server (82db25b2-e1a5-4aef-be4a-cfccf744e103) to become ready...
==> openstack: Created temporary floating IP 192.168.1.204...
==> openstack: Added floating IP 192.168.1.204 to instance...
==> openstack: Waiting for SSH to become available...
==> openstack: Connected to SSH!
==> openstack: Creating the image: Packer Test Image
==> openstack: Image: 70a610e9-302a-40f4-a4ca-59b6ad260e63
==> openstack: Waiting for image to become ready...
==> openstack: Terminating the source server...
==> openstack: Deleting temporary keypair...
Build 'openstack' finished.
  • Nice! Seemed to work. Now if we head out to the Glance UI, we can see our shiny new image hanging out!

##Well, Now What?##

So we’ve built an image with Packer, which is great. But the real value here comes with building on multiple platforms at the same time and also doing some provisioning to install the necessities before creating the image.

This tutorial is getting pretty long, so I’m not going to add another provider to build against, but I do want to install something so that the image actually changes. Let’s install Apache as part of the build. Note that in a proper environment, we would probably just install Apache and let our configuration management tool handle deploying our webpage, since that’s the kind of thing we would want to check out from version control at boot time.

Here’s the template:

{
  "builders": [
    {
      "type": "openstack",
      "username": "admin",
      "password": "testpass",
      "provider": "http://192.168.1.200:5000/v2.0",
      "ssh_username": "ubuntu",
      "project": "admin",
      "region": "RegionOne",
      "image_name": "Packer Test Image",
      "source_image": "b3a4368b-7368-45e5-bfe4-63f59d732c41",
      "flavor": "0d7e469c-e99b-4267-b154-35874b224f54",
      "networks": ["0296eb7d-7f94-4cc1-b42f-f2d680b81359"],
      "use_floating_ip": true
    }
  ],
   "provisioners": [{
    "type": "shell",
    "inline": [
      "sleep 30",
      "sudo apt-get update",
      "sudo apt-get install -y apache2"
    ]
  }]
}
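With the provisioner block in place, re-running the build should show the shell provisioner connecting over SSH and executing the apt-get steps before the new image gets captured:

packer build NAME_OF_TEMPLATE.json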