This post will detail how to host git repos on a server that you own. I'll be covering how to set up your server-side repo and then how to connect from a remote machine via SSH.
##Set Up Our Server##
First and foremost, we'll need to install git. This is going to depend on your package manager, but I'm using CentOS right now, so I'll be issuing:
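On other distros, swap in apt, dnf, or whatever your system uses:

```shell
sudo yum install -y git
```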
Now we’ll need to add a user to our system for git. Let’s do that and then switch to that user with:
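Something like the following (the account name git is just convention here):

```shell
sudo useradd git
sudo su - git
```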
Now that we are the git user, we can set up the SSH keys that we want to accept by creating the authorized_keys file and putting the public key of each user we want to have access in that file. After creating the directory and file, we need to set their permissions properly or SSH will complain.
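As the git user, a sketch of that setup (SSH insists on the .ssh directory and authorized_keys not being group- or world-writable, hence the chmods):

```shell
# Create the SSH config directory and key file for the git user,
# then lock the permissions down to what sshd requires.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```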
Add the desired public SSH keys to authorized_keys. You can add several of these if you want several users to have access to this git repo; just put each key on its own line. This should look something like:
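With a couple of users, the file would look roughly like this (these keys are made-up placeholders, not real key material):

```
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...rest-of-key... alice@laptop
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5...rest-of-key... bob@desktop
```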
##Create Git Repo##
It’s time to finally create our git repo. Let’s create an easy directory called /git/ and a subdirectory under that for our test project. We need to switch back to our normal user (with sudo ability) to create a directory at the root. You can do that simply by issuing ‘exit’.
Now, back as the git user, initialize the git repo by using the ‘git init’ command inside that directory:
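Putting both steps together, it looks something like this. Note that I'm using git init --bare, the usual form for a server-side repo that only receives pushes:

```shell
exit                          # drop back to our sudo-capable user
sudo mkdir -p /git/testproject
sudo chown -R git:git /git    # let the git user own the repo tree
sudo su - git
cd /git/testproject
git init --bare
```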
##Test It Out##
Back on your local machine, let’s verify that this is actually working for us. This should be as simple as doing a git clone to the proper path on the remote server:
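Something like this, substituting your server's address (the IP here is a stand-in):

```shell
git clone git@192.168.1.50:/git/testproject
```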
Change into the local testproject directory and create a file for our first commit:
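For example (README.md is just an arbitrary file name for the demo):

```shell
cd testproject
echo "hello from our new git server" > README.md
```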
Let’s add, commit, and push the file up.
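The standard sequence:

```shell
git add .
git commit -m "first commit"
git push origin master
```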
Now we’ve got a fully functional git repo with a master branch. All ready to go!
Today's post will go into some detail on getting started with Rerun.
Rerun is a tool that’s kind of meant to bridge the gap between having a
bunch of sysadmin scripts and a full-blown configuration management tool.
The truth is that a lot of times, groups have a bunch of bash scripts
that can perform differently on different machines or exist in several different
versions. This makes it hard to ensure that you’re always using the right one,
the right flags are being passed, etc., etc. Rerun sets out to help wrangle your
shell scripts and present them as something super easy to use.
Installing Rerun is really just a ‘git clone’ and then adding a bunch of
variables to your .bash_profile. I rolled it all into a script so it can just be
run (at your own risk). Just issue chmod +x whatever_you_name.sh,
followed by ./whatever_you_name.sh.
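For reference, a sketch of what that install script might contain. The GitHub URL and the exact environment variable names are from memory, so double-check them against Rerun's official README:

```shell
#!/usr/bin/env bash
# Clone rerun and wire it into the shell environment.
git clone https://github.com/rerun/rerun.git "$HOME/rerun"

# Persist the rerun settings for future shells.
cat >> "$HOME/.bash_profile" <<'EOF'
export PATH="$PATH:$HOME/rerun"
export RERUN_MODULES="$HOME/rerun/modules"
EOF
```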
Exit the terminal and restart, then issue rerun to see if it’s working.
This should give you a list of the modules installed:
##Create a Module & Command##
Now let’s run through the Rerun tutorial.
A lot of this part of the post will be a rehashing of that page, with some differences
here and there to keep myself from just copying/pasting and not actually committing this
to memory. We will be creating a waitfor module that simply waits for a variety of different
conditions: a host answering ping, a file existing, and so on.
Rerun uses a module:command type syntax, where module is kind of the general idea
of what you’re trying to do, while command is the specifics. So, let’s use the stubbs
module’s add-module command to create the bones for our waitfor module:
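Something along these lines; if any flag is off, stubbs will prompt you for the missing arguments:

```shell
rerun stubbs:add-module --module waitfor --description "waits for a condition to be met"
```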
Okay, now let’s add a ping command to our waitfor module with
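Again via stubbs:

```shell
rerun stubbs:add-command --module waitfor --command ping --description "wait for a host to answer pings"
```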
Note that this command creates both a script and a test.sh file. script is what
will actually get run; the test file is where we write our test plan.
For ping, we’ll want to add a host and an interval option. Host will
be required, while interval will get a default value that can optionally be overridden.
Set the required host option:
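A sketch of the stubbs invocation (flag names per the stubbs module; it prompts for anything missing):

```shell
rerun stubbs:add-option --module waitfor --command ping \
  --option host --description "host to ping" \
  --required true --export false
```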
Set the optional interval option:
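Same command, but not required and with a default. I'll default to 30 seconds here, an arbitrary choice:

```shell
rerun stubbs:add-option --module waitfor --command ping \
  --option interval --description "seconds between ping attempts" \
  --required false --export false --default 30
```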
Let’s make sure our params look right by checking the output with rerun waitfor.
Rerun gives a pretty easy to read/understand output when you try to figure out what
a module is capable of.
##Implement the Command##
So now we’ve got our command created, but it doesn’t actually do anything. Rerun
can't read our mind, so it just lays down some basics and it's up to us to implement the actual logic.
Open the file ~/rerun/modules/waitfor/commands/ping/script for editing.
Scroll down to the bottom, where you will see:
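The tail of the generated script ends with a placeholder roughly like this (wording from memory):

```
# Command implementation
# ----------------------

# Put the command implementation here.
```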
Replace the 'Put the command implementation here' line with your code. I had to throw
in a -t flag on the ping command so it times out more quickly on Mac.
For our ping check, the code will look like:
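A sketch of that logic. In the real script the loop sits inline where the placeholder comment was, and rerun exposes the options as the uppercase variables HOST and INTERVAL; I've wrapped it in a function here so it's easy to exercise standalone:

```shell
# Sketch of the waitfor:ping implementation. In rerun's generated
# script the options arrive as $HOST and $INTERVAL; here they are
# plain function arguments.
wait_for_ping() {
    local host=$1 interval=${2:-1}
    # -c 1 sends a single packet; -t 1 caps the wait at one second on
    # macOS (the equivalent Linux flag is -W 1).
    until ping -c 1 -t 1 "$host" > /dev/null 2>&1; do
        echo "Waiting for ping response from: $host"
        sleep "$interval"
    done
    echo "OK: $host is answering pings."
}
```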
Test it out with a call to localhost. This should always return a positive ping.
rerun waitfor:ping --host localhost --interval 1
Okay, let’s write the tests for our new command. This will help us ensure it’s working
the right way.
Open ~/rerun/modules/waitfor/tests/ping-1-test.sh for editing. Remove the stub test function that stubbs generated.
We’ll create two new functions. One will check that the required host is present.
The other will check that localhost responds as expected. These tests are straight
from the wiki tutorial with extra comments to explain what’s actually happening.
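A sketch of the two functions. Rerun's tests are roundup-style shell functions whose names start with it_; treat the exact bodies here as my own reconstruction rather than verbatim wiki code:

```shell
# A missing required option should make the command exit non-zero,
# so the test fails if the bare invocation succeeds.
it_fails_without_required_host() {
    if rerun waitfor:ping --interval 1 2>/dev/null; then
        exit 1
    fi
}

# Localhost should answer immediately, so this returns on the
# first pass through the loop.
it_reaches_localhost() {
    rerun waitfor:ping --host localhost --interval 1
}
```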
Finally, let's check the output of the stubbs:test command to make sure
our tests pass. Issue rerun stubbs:test --module waitfor --plan ping
##Extend, Extend, Extend##
Now that we have learned all of the functionality from the official tutorial, it’s time to extend our module to do other things. Consider what the ‘waitfor’ module is for. It is there to wait on things in general, not just ping responses. So let’s extend our module to support another wait use case, waiting for a file to exist.
First let’s add the new command to our module. This is as simple as it was earlier, just pass the proper options as needed:
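Same shape as the ping command:

```shell
rerun stubbs:add-command --module waitfor --command file --description "wait for a file to exist"
```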
Add options for the filepath we want to check, as well as the interval we want to wait to check:
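One stubbs call per option, mirroring what we did for ping (again defaulting the interval to 30 seconds):

```shell
rerun stubbs:add-option --module waitfor --command file \
  --option filepath --description "path of the file to wait for" \
  --required true --export false

rerun stubbs:add-option --module waitfor --command file \
  --option interval --description "seconds between checks" \
  --required false --export false --default 30
```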
Time to implement the actual logic behind our file checker. You’ll notice that since this command is similar in function to our ping command, a lot of the same logic that we used previously still applies. Here’s the relevant bash from ‘waitfor/commands/file/script’:
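A sketch of that logic. As with ping, the generated script gets the options as $FILEPATH and $INTERVAL and the loop sits inline; I've used a function form here so it can be exercised standalone:

```shell
# Sketch of the waitfor:file implementation. Rerun exposes the
# options as $FILEPATH and $INTERVAL; here they are arguments.
wait_for_file() {
    local filepath=$1 interval=${2:-1}
    until [ -e "$filepath" ]; do
        echo "Waiting for file: $filepath"
        sleep "$interval"
    done
    echo "OK: $filepath exists."
}
```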
We can now see this in action by issuing our command, waiting for a few cycles to occur, then touching the file that we want to exist in another terminal. For me, the touch command was simply touch /tmp/test.txt.
Finally, we would want to write some tests around this command to ensure it functions as expected when variables are missing, and so on. This post is getting pretty lengthy, so I will leave that task up to you.
And that's it! I hope you enjoyed this intro to Rerun. It's a really fun tool to use once you pick up the basics, and it makes it dead simple for other teammates (even those who may not be very adept with bash) to execute scripts in a known, repeatable manner.
Continuing on my thread of exploring new technologies for my new job, today I’ll
be looking at CFEngine and how we can use it for configuration management. I’ve
used other tools like Chef and Ansible in the past, but CFEngine is a new one
for me. I'll be installing and configuring a server and some nodes in my home lab.
##Set Up the Server##
I’m going to use the instructions for CFEngine enterprise for this tutorial. It
appears to be free for the first 25 nodes, so it will be nice to test against the
version that I may actually have to use at work.
Create a server in Openstack and go ahead and SSH in. I had to use an Ubuntu 12.04
LTS image for this. 14.04 LTS returned an error about not being supported. I imagine
that will be fixed in the future.
Open the /etc/hosts file for editing and add an entry for the private IP address to
give it a hostname. The script below will fail if hostname -f doesn't return
anything. I added this to my hosts file:
10.0.0.29 cfengine-server.localdomain. You may also have to enter
sudo hostname cfengine-server.localdomain.
Grab the CFEngine install script with
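I'm reconstructing the download URL from memory of CFEngine's docs, so verify it against their current quick-install instructions:

```shell
wget https://s3.amazonaws.com/cfengine.package-repos/quickinstall/quick-install-cfengine-enterprise.sh
```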
Make it executable with chmod +x quick-install-cfengine-enterprise.sh.
Run the script with sudo rights and pass the hub argument to specify that this
will be a central hub server:
sudo ./quick-install-cfengine-enterprise.sh hub
Bootstrap the CFEngine hub with sudo /var/cfengine/bin/cf-agent --bootstrap 10.0.0.29
We should now be able to login to the server’s web UI by going to the floating
IP address in a browser. The default login information is admin/admin. Make sure
your default security group lets port 80 in.
##Set Up the Clients##
Now let’s get some clients set up so that we have some systems to actually manage
with our snazzy new server. This process is almost exactly the same as the above,
with the exception of the argument passed to quick-install-cfengine-enterprise.sh.
I won’t copy/paste everything from above, but just follow the same steps and when
you get there, issue this command instead:
sudo ./quick-install-cfengine-enterprise.sh agent
One last possible caveat here. I created an Ubuntu 12.04 image with the CFEngine
client installed and it caused a kernel panic on boot. I’m not sure what was going on,
but using a 14.04 image worked just fine.
Once you get the client setup completed, you should see your new nodes checked in via the web UI.
As I was trying to write an ISO to a USB drive, I wanted to see the progress when
using the ‘dd’ command line tool. I found a quick pointer on StackOverflow to use
the ‘pv’ command, so I adapted a little to use on a Mac. This will also serve
as a guide on how to write ISOs on Mac. Here’s how:
Install pv with homebrew: brew install pv
Find your USB drive with diskutil list. Should be pretty easy to spot the
USB drive as it will be smaller than the other disks. Tread lightly though, don’t
mess with your hard drive. I’ll use /dev/disk3, as that’s what my command returned.
Unmount it with diskutil unmountDisk /dev/disk3.
Become root with sudo su
Write your iso with this general layout, substituting paths where necessary:
dd if=/path/to/your.iso | pv | dd of=/dev/disk3 bs=1024k.
As part of a new job I’m taking, I wanted to learn more about image building
for Openstack and other virtual environments. I’ve done it by hand for the customized
OSes at my old job, but I haven’t had the chance to explore any automated solutions.
I was pointed to Packer as a tool to build several different images at the same time (and automatically). It sounds like a great project and I’m going to use this post to
get up to speed with using the basics. One quick caveat from the outset is that
I’m not going to use Amazon at first. I’ll be running against my home Openstack lab
since it’s free and a good excuse to get my homelab back in order.
I’ve got a shiny new Macbook, and installing Packer was actually really easy.
The way I did it depended on homebrew, but you can also install manually from their docs here.
In terminal, ensure that you have homebrew setup by issuing brew.
Add the necessary tap with brew tap homebrew/binary.
Finally, install packer with brew install packer.
You can test that it's installed by simply issuing packer in the terminal.
##Get Openstack Ready##
I run an all-in-one deployment of RDO Openstack at home. Obviously, there are a
million different ways to deploy, but here
are the pieces that I followed. It’s important to note that in my lab, instances
come alive on a private network, then get access to my router’s 192.168.1.0/24 block
via floating IPs. This will come in to play a bit later with the Packer template.
Get a known good image into Glance by importing one of the big distros. I used
the Ubuntu 14.04 LTS image found here. You can just put that link into Glance’s import dialog. My final dialog looked like this:
Take note of the new image’s UUID, we’ll need that later:
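Assuming you have the glance client installed and your credentials sourced, you can list it from the command line:

```shell
glance image-list
```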
##Write Packer Template##
Okay, time to get busy. Let's write a template for Packer to create an image.
We’ll need to gather some info first.
Get your keystone info by catting out your keystonerc file. For me, this was
cat keystonerc_admin. Some info below has been changed to protect the innocent.
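The file is just a set of exports, roughly like this (every value below is a placeholder):

```shell
# Example keystonerc_admin contents; values are placeholders.
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=supersecret
export OS_AUTH_URL=http://192.168.1.100:5000/v2.0/
export OS_REGION_NAME=RegionOne
```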
Create a new json file somewhere on your machine. I simply called mine packer_template.
There are a lot of options for Openstack in Packer (found here). Some of this will
vary by the way your particular Openstack deployment is set up, but for me, this
template contains all of the necessary basic fields:
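A minimal template along these lines; the credential values are placeholders and the UUID fields need substituting with your own, and the exact field names should be checked against the Packer openstack builder docs for your version:

```json
{
  "builders": [
    {
      "type": "openstack",
      "username": "admin",
      "password": "supersecret",
      "provider": "http://192.168.1.100:5000/v2.0/",
      "region": "RegionOne",
      "ssh_username": "ubuntu",
      "image_name": "packer-ubuntu-14.04",
      "source_image": "SOURCE_IMAGE_UUID",
      "flavor": "FLAVOR_UUID",
      "networks": ["NETWORK_UUID"],
      "use_floating_ip": true
    }
  ]
}
```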
Notes about what’s what:
username & password: Map to OS_USERNAME and OS_PASSWORD from source file
provider: Maps to OS_AUTH_URL
region: Maps to OS_REGION_NAME
source image: UUID of the Ubuntu image we talked about earlier
flavor: UUID of my m1.tiny flavor. Beware, this changes on any flavor update!
networks: UUID of my private network. Can be an array of several networks.
use_floating_ip: As mentioned earlier, floating IP allows Packer to actually
SSH to this server across my home network.
Let’s see if this thing will actually create an image for us.
Save your template if you haven’t already.
Validate the template to make sure there aren’t any glaring errors with
packer validate NAME_OF_TEMPLATE.json. This should return the text
‘Template validated successfully.’
Run the template with packer build NAME_OF_TEMPLATE.json. For me, this
gave the following output when everything completely worked:
Nice! Seemed to work. Now if we head out to the Glance UI, we can see our
shiny new image hanging out!
##Well, Now What?##
So we’ve built an image with Packer, which is great. But the real value here comes
with building on multiple platforms at the same time and also doing some provisioning
to install the necessities before creating the image.
This tutorial is getting pretty long already, so I'm not going to add another provider to build against, but I do want to install something so that the image actually changes. Let's install Apache as part of the
build. Note that in a proper environment, we would probably just install Apache
and we would let our config management tool handle deploying our webpage, since
that's the kind of thing we would want to check out from version control at boot