Hey y’all. Hope everyone is doing well. Today we’ll walk through writing a little bot for Slack using Golang. This is pretty straightforward, so this post will also be short and sweet. That said, a good bot can absolutely be a fun and interesting way to add some extra value to Slack for your org. We have one at Solinea that responds to our requests with pics of bacon. Clearly priceless.

Use What’s Already Out There

I spent some time looking at the different golang options out there for the Slack API and landed on nlopes/slack, which seems to be the one most folks are using for Go. Let’s set up what we need.

  • Create a development directory. I called mine testbot. mkdir testbot; cd testbot;
  • Touch a couple of files that we’ll use for our bot. touch testbot.go Dockerfile
  • Pull down the Slack library we’ll be importing: go get github.com/nlopes/slack
  • Open up testbot.go for editing.

Setup Slack

Before we get any further, we need to get Slack set up properly.

  • Head to https://$YOUR_ORG.slack.com/apps/A0F7YS25R-bots to get to the Bots app page.
  • Hit “Add Configuration”
  • Give your bot a name. Again, I used “@testbot”.

  • Once it’s created, copy the API Token somewhere safe. We’ll need that to connect to Slack.

This should be the minimum that’s necessary on the Slack side. Feel free to populate the other fields like description, icon, etc.

Get Going

The slack library we’re using has some good getting started examples (for all kinds of Slack stuff!), but I just wanted the bare minimum to get a bot to respond.

  • Let’s populate testbot.go with the following:
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/nlopes/slack"
)

func main() {

	//Read the bot token from the environment and open an RTM connection
	token := os.Getenv("SLACK_TOKEN")
	api := slack.New(token)
	rtm := api.NewRTM()
	go rtm.ManageConnection()

Loop:
	for {
		select {
		case msg := <-rtm.IncomingEvents:
			fmt.Print("Event Received: ")
			switch ev := msg.Data.(type) {
			case *slack.ConnectedEvent:
				fmt.Println("Connection counter:", ev.ConnectionCount)

			case *slack.MessageEvent:
				fmt.Printf("Message: %v\n", ev)
				//Build the mention prefix (<@BOTID> ) from our own user info
				info := rtm.GetInfo()
				prefix := fmt.Sprintf("<@%s> ", info.User.ID)

				if ev.User != info.User.ID && strings.HasPrefix(ev.Text, prefix) {
					rtm.SendMessage(rtm.NewOutgoingMessage("What's up buddy!?!?", ev.Channel))
				}

			case *slack.RTMError:
				fmt.Printf("Error: %s\n", ev.Error())

			case *slack.InvalidAuthEvent:
				fmt.Printf("Invalid credentials")
				break Loop

			default:
				//Take no action
			}
		}
	}
}

Let’s walk through some of this. The general flow goes:

  • Retrieve a Slack API token from our environment variables.
  • Connect to Slack using the token and loop endlessly.
  • When we receive an event, take action depending on what type of an event it is.

Now, there are other event types that can show up, but these are the ones that give enough quick feedback to troubleshoot an error.
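As an aside, if you want to watch more of what’s flowing over the RTM connection while you troubleshoot, the nlopes/slack README example handles a few more event types. Here’s a drop-in fragment for the switch above (not a standalone program) with the ones I’d reach for first:

case *slack.HelloEvent:
	//First event sent once the RTM websocket is established
	fmt.Println("Hello received")

case *slack.PresenceChangeEvent:
	//Fires when a user the bot can see changes presence
	fmt.Printf("Presence Change: %v\n", ev)

case *slack.LatencyReport:
	//Periodic round-trip latency for the RTM connection
	fmt.Printf("Current latency: %v\n", ev.Value)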

There are a couple of other important bits when a “MessageEvent” occurs:

  • Get some basic info about our Slack session, just so we can fish our bot’s user name out of it.
  • Set a prefix that a message must start with to warrant a response from us. This will look like @testbot<space> for me (see the tiny example below for what that prefix really looks like on the wire).
  • If the original message wasn’t posted by our bot AND it starts with our @testbot prefix, then we’ll respond to the channel. For now, we’ll only respond with “What’s up buddy!?!?”
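One thing that tripped me up: in the raw event text, Slack delivers mentions as <@USERID>, not as the literal @testbot you typed. Here’s a tiny standalone snippet (U0XXXXXXX is a made-up ID; the real one comes from rtm.GetInfo().User.ID) showing what the prefix check is actually matching:

package main

import (
	"fmt"
	"strings"
)

func main() {
	//Hypothetical bot user ID; the real one comes from rtm.GetInfo().User.ID
	botID := "U0XXXXXXX"
	prefix := fmt.Sprintf("<@%s> ", botID)

	//What "@testbot hey!" actually looks like in ev.Text
	raw := "<@U0XXXXXXX> hey!"

	if strings.HasPrefix(raw, prefix) {
		fmt.Println("For the bot:", strings.TrimPrefix(raw, prefix)) //prints: For the bot: hey!
	}
}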

Bring On The Bots

That’s actually enough to get a bot connected and responding. Let’s check it out and then we’ll make it better.

  • From your terminal, set a SLACK_TOKEN env variable with the value we got earlier from the bot configuration. export SLACK_TOKEN="xxxyyyzzz111222333"
  • Run your bot with go run testbot.go. This should show some terminal output that looks like it’s connecting to Slack and reading some early events.
  • In your slack client, invite testbot to a channel of your choosing. /invite @testbot

  • Now, let’s see if our buddy responds. Type something like @testbot hey!. You should see a reply come back in the channel.

But Wait, There’s More

Sweet! It works! But you’ll probably notice pretty quickly that if the only thing you’re looking for is the prefix, testbot is going to respond to ANYTHING you say to it. That can get a bit annoying. Let’s draft a responder so we can filter things out a bit.

  • Create a function below your main function called “respond”. This code block should look like this:
func respond(rtm *slack.RTM, msg *slack.MessageEvent, prefix string) {
	var response string
	text := msg.Text
	text = strings.TrimPrefix(text, prefix)
	text = strings.TrimSpace(text)
	text = strings.ToLower(text)

	acceptedGreetings := map[string]bool{
		"what's up?": true,
		"hey!":       true,
		"yo":         true,
	}
	acceptedHowAreYou := map[string]bool{
		"how's it going?": true,
		"how are ya?":     true,
		"feeling okay?":   true,
	}

	if acceptedGreetings[text] {
		response = "What's up buddy!?!?!"
		rtm.SendMessage(rtm.NewOutgoingMessage(response, msg.Channel))
	} else if acceptedHowAreYou[text] {
		response = "Good. How are you?"
		rtm.SendMessage(rtm.NewOutgoingMessage(response, msg.Channel))
	}
}
  • Looking through this code block, we’re basically just receiving the message that came through and determining whether it warrants a response.
  • There are two maps that contain some accepted strings. For this example, we’re just accepting some greetings and some “how are you?” type of questions.
  • If one of those strings is matched, a message is sent in response. (A more compact single-map variant is sketched just below.)
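If the list of phrases keeps growing, one way to tighten this up (just a sketch on my part, not something the library provides) is to collapse the two maps into a single lookup from normalized phrase to canned response:

//responseFor returns the canned response for a normalized phrase,
//or the empty string if we don't recognize it.
func responseFor(text string) string {
	responses := map[string]string{
		"what's up?":      "What's up buddy!?!?!",
		"hey!":            "What's up buddy!?!?!",
		"yo":              "What's up buddy!?!?!",
		"how's it going?": "Good. How are you?",
		"how are ya?":     "Good. How are you?",
		"feeling okay?":   "Good. How are you?",
	}
	return responses[text]
}

respond would then trim and lowercase the text as before and only call rtm.SendMessage when responseFor returns something non-empty.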

Now, we want to update our main function to use the respond function instead of posting messages directly. Your whole file should look like this:

package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/nlopes/slack"
)

func main() {

	//Read the bot token from the environment and open an RTM connection (with debug logging on)
	token := os.Getenv("SLACK_TOKEN")
	api := slack.New(token)
	api.SetDebug(true)

	rtm := api.NewRTM()
	go rtm.ManageConnection()

Loop:
	for {
		select {
		case msg := <-rtm.IncomingEvents:
			fmt.Print("Event Received: ")
			switch ev := msg.Data.(type) {
			case *slack.ConnectedEvent:
				fmt.Println("Connection counter:", ev.ConnectionCount)

			case *slack.MessageEvent:
				fmt.Printf("Message: %v\n", ev)
				//Build the mention prefix (<@BOTID> ) from our own user info
				info := rtm.GetInfo()
				prefix := fmt.Sprintf("<@%s> ", info.User.ID)

				if ev.User != info.User.ID && strings.HasPrefix(ev.Text, prefix) {
					respond(rtm, ev, prefix)
				}

			case *slack.RTMError:
				fmt.Printf("Error: %s\n", ev.Error())

			case *slack.InvalidAuthEvent:
				fmt.Printf("Invalid credentials")
				break Loop

			default:
				//Take no action
			}
		}
	}
}

func respond(rtm *slack.RTM, msg *slack.MessageEvent, prefix string) {
	var response string
	text := msg.Text
	text = strings.TrimPrefix(text, prefix)
	text = strings.TrimSpace(text)
	text = strings.ToLower(text)

	acceptedGreetings := map[string]bool{
		"what's up?": true,
		"hey!":       true,
		"yo":         true,
	}
	acceptedHowAreYou := map[string]bool{
		"how's it going?": true,
		"how are ya?":     true,
		"feeling okay?":   true,
	}

	if acceptedGreetings[text] {
		response = "What's up buddy!?!?!"
		rtm.SendMessage(rtm.NewOutgoingMessage(response, msg.Channel))
	} else if acceptedHowAreYou[text] {
		response = "Good. How are you?"
		rtm.SendMessage(rtm.NewOutgoingMessage(response, msg.Channel))
	}
}

Final Test

  • Fire up your bot again with go run testbot.go
  • The bot should already be connected to your previous channel
  • Greet your bot with @testbot hey!
  • Your bot will respond with our greeting response.
  • Test out the second response: @testbot how's it going?

Build and Run

This section will be quick. Let’s build a container image with our go binary in it. We’ll then be able to run it with Docker.

  • Add the following to your Dockerfile:
FROM alpine:3.4

RUN apk add --no-cache ca-certificates

ADD testbot testbot
RUN chmod +x testbot

CMD ["./testbot"]
  • Build the go binary with GOOS=linux GOARCH=amd64 go build in the directory we created. (If you’re building on a Linux host rather than cross-compiling, you may also need CGO_ENABLED=0 so the resulting binary is static and runs on Alpine’s musl libc.)
  • Create the container image: docker build -t testbot .
  • We can now run our container (anywhere!) with docker run -d -e SLACK_TOKEN=xxxyyyzzz111222333 testbot

A co-worker of mine was having some issues with KubeDNS in his GKE environment. He was then asking how to see if records had actually been added to DNS and I kind of shrugged (via Slack). But this got me a bit curious. How in the heck do you look and see? I thought the answer was at least worth writing down and remembering.

It’s Just etcd

The KubeDNS pod consists of four containers: etcd, kube2sky, exechealthz, and skydns. It’s kind of self-explanatory what each does, but etcd is a k/v store that holds the DNS records, kube2sky watches Kubernetes services and pods and updates etcd accordingly, and skydns is, guess what, a DNS server that uses etcd as its backend. So it looks like all roads point to etcd as far as where our records live.

Checking It Out

Here’s how to look at the records in the etcd container:

  • Find the full name of the pod for kube-dns with kubectl get po --all-namespaces. It should look like kube-dns-v11-xxxxx

  • Describe the pod to list the containers with kubectl describe po kube-dns-v11-xxxxx --namespace=kube-system. We already know what’s there, but it’s helpful anyway.

  • We will now exec into the etcd container and use its built-in tools to get the data we want. kubectl exec -ti --namespace=kube-system kube-dns-v11-xxxxx -c etcd -- /bin/sh

  • Once inside the container, let’s list all of the services in the default namespace (I’ve only got one):

# etcdctl ls skydns/local/cluster/svc/default

/skydns/local/cluster/svc/default/kubernetes
  • Now, find the key for that service by calling ls again:
# etcdctl ls skydns/local/cluster/svc/default/kubernetes

/skydns/local/cluster/svc/default/kubernetes/8618524b
  • Finally, we can return the data associated with that key by using the get command!
# etcdctl get skydns/local/cluster/svc/default/kubernetes/8618524b

{"host":"10.55.240.1","priority":10,"weight":10,"ttl":30,"targetstrip":0}

Other Notes

If you also want to test that things are working as expected inside the cluster, follow the great “How Do I Test If It’s Working?” section in the DNS addon repo here.

As a follow-on from yesterday’s post, I want to chat some more about the things you could do with the k8s-sniffer go app we created. Once we were able to detect pods in the cluster, handler functions were called when a new pod was created or an existing pod was removed. These handler functions were just printing out to the terminal in our last example, but when you start thinking about it a bit more, you could really do anything you want with that info. We could post pod info to some global registry of systems, we could act upon the metadata for the pods in some way, or we could do something fun like post it to Slack as a bot. Which option do you think I chose?

Setting Up Slack

In order to properly communicate with Slack, you will need to set up an incoming webhook.

  • Incoming webhooks are an app you add to Slack. You can find the app here.

  • Once this is done, you can configure a new hook. In the “Add Configuration” page, simply select the Slack channel you would like to post to.

  • On the next page, save the Webhook URL that is supplied to you and edit the information about your bot as necessary. I added a Kubernetes logo and changed his name to “k8s-bot”.

Posting To Slack

So with our webhook set up, we are now ready to post to our channel when events occur in the Kubernetes cluster. We will achieve this by adding a new function, “notifySlack”.

  • Add the “notifySlack” method to your k8s-sniffer.go file above the “podCreated” and “podDeleted” functions:
func notifySlack(obj interface{}, action string) {
	pod := obj.(*api.Pod)

	//Incoming Webhook URL
	url := "https://hooks.slack.com/your/webhook/url"

	//Form JSON payload to send to Slack
	json := `{"text": "Pod ` + action + ` in cluster: ` + pod.ObjectMeta.Name + `"}`

	//Post JSON payload to the Webhook URL
	req, err := http.NewRequest("POST", url, bytes.NewBufferString(json))
	if err != nil {
		fmt.Println("Unable to build the request:", err)
		return
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("Unable to reach the server:", err)
		return
	}
	resp.Body.Close()
}
  • Update the url variable with your correct Webhook URL.

  • Notice that the function takes an interface and a string as input. This allows us to pass in the pod object that is caught by the handlers, as well as a string indicating whether that pod was added or deleted.

  • With this method in place, it’s dead simple to update our handler functions to call it instead of outputting to the terminal. Update “podCreated” and “podDeleted” to look like the following:

func podCreated(obj interface{}) {
	notifySlack(obj, "created")
}

func podDeleted(obj interface{}) {
	notifySlack(obj, "deleted")
}
  • The full file will now look like:
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"time"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/client/cache"
	"k8s.io/kubernetes/pkg/client/restclient"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/controller/framework"
	"k8s.io/kubernetes/pkg/fields"
	"k8s.io/kubernetes/pkg/util/wait"
)

func notifySlack(obj interface{}, action string) {
	pod := obj.(*api.Pod)

	//Incoming Webhook URL
	url := "https://hooks.slack.com/your/webhook/url"

	//Form JSON payload to send to Slack
	json := `{"text": "Pod ` + action + ` in cluster: ` + pod.ObjectMeta.Name + `"}`

	//Post JSON payload to the Webhook URL
	req, err := http.NewRequest("POST", url, bytes.NewBufferString(json))
	if err != nil {
		fmt.Println("Unable to build the request:", err)
		return
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("Unable to reach the server:", err)
		return
	}
	resp.Body.Close()
}

func podCreated(obj interface{}) {
	notifySlack(obj, "created")
}

func podDeleted(obj interface{}) {
	notifySlack(obj, "deleted")
}

func watchPods(client *client.Client, store cache.Store) cache.Store {

	//Define what we want to look for (Pods)
	watchlist := cache.NewListWatchFromClient(client, "pods", api.NamespaceAll, fields.Everything())

	resyncPeriod := 30 * time.Minute

	//Setup an informer to call functions when the watchlist changes
	eStore, eController := framework.NewInformer(
		watchlist,
		&api.Pod{},
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    podCreated,
			DeleteFunc: podDeleted,
		},
	)

	//Run the controller as a goroutine
	go eController.Run(wait.NeverStop)
	return eStore
}

func main() {

	//Configure cluster info
	config := &restclient.Config{
		Host:     "https://xxx.yyy.zzz:443",
		Username: "kube",
		Password: "supersecretpw",
		Insecure: true,
	}

	//Create a new client to interact with cluster and freak if it doesn't work
	kubeClient, err := client.New(config)
	if err != nil {
		log.Fatalln("Client not created sucessfully:", err)
	}

	//Create a cache to store Pods
	var podsStore cache.Store

	//Watch for Pods
	podsStore = watchPods(kubeClient, podsStore)

	//Keep alive
	log.Fatal(http.ListenAndServe(":8080", nil))
}
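One small note on notifySlack before moving on: building the JSON payload by string concatenation is fine for plain pod names, but if you want something a bit sturdier you could marshal a struct with encoding/json instead. Here’s a sketch of a helper you could drop into k8s-sniffer.go (the only new import it needs is "encoding/json"; bytes, fmt, and net/http are already there):

//slackPayload mirrors the minimal incoming-webhook body: {"text": "..."}
type slackPayload struct {
	Text string `json:"text"`
}

//postToSlack marshals the message and POSTs it to the incoming webhook
func postToSlack(webhookURL, message string) error {
	body, err := json.Marshal(slackPayload{Text: message})
	if err != nil {
		return err
	}

	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("slack webhook returned %s", resp.Status)
	}
	return nil
}

With that in place, notifySlack would shrink to building the message string and calling postToSlack(url, "Pod "+action+" in cluster: "+pod.ObjectMeta.Name).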

Posted Up

Alright, now when we fire up our go application, we will see posts to our channel in Slack. Remember, the first few will happen quickly, as the store of our pods is populated.

  • Run with go run k8s-sniffer.go

  • Watch the first few posts land in your Slack channel as the pod store is populated.

  • Try scaling down an RC to see the delete: kubectl scale rc test-rc --replicas=0

Hope this helps!

Every now and again I get some pretty interesting questions from clients that stick with me. And, rarer still, I have a bit of free time and get a chance to delve into some of these stranger questions and figure out how you would actually accomplish them. Such is the case with the question “How do we listen to the Kubernetes clusters we’re spinning up and add their resources to an internal registry of systems?”. Aren’t we supposed to not care that much about our pods and just let Kubernetes work its magic? Yes! But hey, sometimes you have to do weird stuff in the enterprise…

So I took this question as an opportunity to learn a bit more about golang, since my only real experience with it was looking through the Kubernetes and Docker Engine repos from time to time. Luckily, I was able to hack together just enough to act on the creation and deletion of pods in my cluster. I thought this might make for an interesting blog post so other folks can see how it’s done and how one might extend it to do more robust things. You should also expect this to double as a bit of a golang intro.

Learning by Example

Being that I was pretty new to golang, I felt like I needed a good example to get started parsing and learning about. I recalled from a conversation with a colleague that this type of event sniffing is pretty much exactly how KubeDNS works. The kube2sky program acts as a bridge between Kubernetes and the SkyDNS containers that run as part of the DNS addon in a deployed cluster. This program looks for the creation of new services, endpoints, and pods and then configures SkyDNS accordingly by pushing changes to etcd. This was a wonderful starting point, but it took me quite a while to grok what was happening and, after doing so, I just wanted to boil this program down to the basics and do something a bit simpler.

Hack Away

Let’s get started hacking on our k8s-sniffer program.

  • Create a file called k8s-sniffer.go on your system under $GOPATH/src/k8s-sniffer. I’m going to operate under the assumption that you’ve got go already installed.

  • Let’s add the absolute basics for a standard go program: package, imports, and main function definition

package main

import(
//Import necessary external packages
)

func main(){
//Implement main function
}
  • We’ve got the bare bones; now let’s look at importing the things we’ll actually need from Kubernetes’ go packages. Update your import section to look like:
import (
	"fmt"
	"log"
	"net/http"
	"time"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/client/cache"
	"k8s.io/kubernetes/pkg/client/restclient"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/controller/framework"
	"k8s.io/kubernetes/pkg/fields"
	"k8s.io/kubernetes/pkg/util/wait"
)
  • Notice that the imports at the top look different from the ones at the bottom. That’s because the top group are golang built-ins, while the second group comes from the Kubernetes repository on GitHub, and go will pull them down for you.

  • Go ahead and pull down these dependencies (it’ll take a while) by running go get -v in the directory containing k8s-sniffer.go

  • Now let’s get started hacking on the main function. After looking through kube2sky, I knew that I needed to do three things in main: authenticate to the cluster, call a watcher function, and keep the service alive. You can do this by updating main to look like:

func main() {

	//Configure cluster info
	config := &restclient.Config{
		Host:     "https://xxx.yyy.zzz:443",
		Username: "kube",
		Password: "supersecretpw",
		Insecure: true,
	}

	//Create a new client to interact with cluster and freak if it doesn't work
	kubeClient, err := client.New(config)
	if err != nil {
		log.Fatalln("Client not created sucessfully:", err)
	}

	//Create a cache to store Pods
	var podsStore cache.Store

	//Watch for Pods
	podsStore = watchPods(kubeClient, podsStore)

	//Keep alive
	log.Fatal(http.ListenAndServe(":8080", nil))

}
  • Notice above that some of the configs need to be changed to match your own environment. (An in-cluster alternative is sketched after the full listing below.)

  • Also notice that many of the functions we’re using in this main function come from other packages we’ve imported.

  • If you were to run this program now, the compiler would complain about the fact that you have told it to use the watchPods function, but it doesn’t actually exist yet. Create this function above main:

func watchPods(client *client.Client, store cache.Store) cache.Store {

	//Define what we want to look for (Pods)
	watchlist := cache.NewListWatchFromClient(client, "pods", api.NamespaceAll, fields.Everything())

	resyncPeriod := 30 * time.Minute

	//Setup an informer to call functions when the watchlist changes
	eStore, eController := framework.NewInformer(
		watchlist,
		&api.Pod{},
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    podCreated,
			DeleteFunc: podDeleted,
		},
	)

	//Run the controller as a goroutine
	go eController.Run(wait.NeverStop)
	return eStore
}
  • And finally, in this function, you’ll notice that there are two handler functions called when the watchlist is updated. Create podCreated and podDeleted:
func podCreated(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod created: "+pod.ObjectMeta.Name)
}

func podDeleted(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod deleted: "+pod.ObjectMeta.Name)
}
  • The full file now looks like:
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/client/cache"
	"k8s.io/kubernetes/pkg/client/restclient"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/controller/framework"
	"k8s.io/kubernetes/pkg/fields"
	"k8s.io/kubernetes/pkg/util/wait"
)

func podCreated(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod created: "+pod.ObjectMeta.Name)
}

func podDeleted(obj interface{}) {
	pod := obj.(*api.Pod)
	fmt.Println("Pod deleted: "+pod.ObjectMeta.Name)
}

func watchPods(client *client.Client, store cache.Store) cache.Store {

	//Define what we want to look for (Pods)
	watchlist := cache.NewListWatchFromClient(client, "pods", api.NamespaceAll, fields.Everything())

	resyncPeriod := 30 * time.Minute

	//Setup an informer to call functions when the watchlist changes
	eStore, eController := framework.NewInformer(
		watchlist,
		&api.Pod{},
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    podCreated,
			DeleteFunc: podDeleted,
		},
	)

	//Run the controller as a goroutine
	go eController.Run(wait.NeverStop)
	return eStore
}

func main() {

	//Configure cluster info
	config := &restclient.Config{
		Host:     "https://xxx.yyy.zzz:443",
		Username: "kube",
		Password: "supersecretpw",
		Insecure: true,
	}

	//Create a new client to interact with cluster and freak if it doesn't work
	kubeClient, err := client.New(config)
	if err != nil {
		log.Fatalln("Client not created sucessfully:", err)
	}

	//Create a cache to store Pods
	var podsStore cache.Store

	//Watch for Pods
	podsStore = watchPods(kubeClient, podsStore)

	//Keep alive
	log.Fatal(http.ListenAndServe(":8080", nil))
}
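One more aside on that hard-coded restclient.Config: if you ever run the sniffer as a pod inside the cluster, the restclient package we’re already importing can read the pod’s service account credentials for you. Here’s a hedged sketch of a drop-in helper that falls back to the static config when not running in-cluster:

//clusterConfig prefers in-cluster service account credentials (when the
//sniffer runs as a pod) and falls back to the hard-coded values from main
func clusterConfig() *restclient.Config {
	if config, err := restclient.InClusterConfig(); err == nil {
		return config
	}
	return &restclient.Config{
		Host:     "https://xxx.yyy.zzz:443",
		Username: "kube",
		Password: "supersecretpw",
		Insecure: true,
	}
}

main would then just do kubeClient, err := client.New(clusterConfig()).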

Fire Away

  • We can finally run our file and see events being created when new Pods are created or destroyed! You’ll see several alerts when you first run since the pods are getting added to the store.
spencers-mbp:k8s-siffer spencer$ go run k8s-sniffer.go
Pod created: dnsmasq-vx2sw
Pod created: default-http-backend-0zj29
Pod created: nginx-ingress-lb-xgvin
Pod created: kubedash-3370066188-rmy2n
Pod created: dnsmasq-gru7c
Pod created: kubernetes-dashboard-imtnm
Pod created: kube-dns-v11-dhgyx
Pod created: test-rc-h7v6l
Pod created: test-rc-3l1oo
  • Try scaling down an RC to see the delete: kubectl scale rc test-rc --replicas=0
Pod deleted: test-rc-h7v6l
Pod deleted: test-rc-3l1oo

Hope this helps!

Back after a pretty lengthy intermission! Today I want to talk about Kubernetes. I’ve recently had some clients that have been interested in running Docker containers in a production environment and, after some research and requirement gathering, we came to the conclusion that the functionality that they wanted was not easily provided with the Docker suite of tools. These are things like guaranteeing a number of replicas running at all times, easily creating endpoints and load balancers for the replicas created, and enabling more complex deployment methodologies like blue/green or rolling updates.

As it turns out, all of this stuff is included to some extent or another with Kubernetes, and we were able to recommend that they explore this option to see how it works out for them. Of course, recommending is the easy part, while implementation is decidedly more complex. The desire for the proof of concept was to enable multi-cloud deployments of Kubernetes while remaining within their pre-chosen set of tools like Amazon AWS, OpenStack, CentOS, Ansible, etc. To accomplish this, we were able to create a Kubernetes deployment using Hashicorp’s Terraform, Ansible, OpenStack, and Amazon. This post will talk a bit about how to roll your own cluster by adapting what I’ve seen.

Why Would I Want to do This?

This is totally a valid question. And the answer here is that you don’t… if you can help it. There are easier and more fully featured ways to deploy Kubernetes if you have free rein over the tools you choose. As a recommendation, I would say that using Google Container Engine is by far the most supported and pain-free way to get started with Kubernetes. Following that, I would recommend using Amazon AWS and CoreOS as your operating system. Again, lots of people using these tools means that bugs and gotchas are well documented and easier to deal with. It should also be noted that there are OpenStack built-ins to create Kubernetes clusters, such as Magnum. Again, if you’re a one-cloud shop, this is likely easier than rolling your own.

Alas, here we are and we’ll search for a way to get it done!

What Pieces are in Play?

For the purposes of this walkthrough, there will be four pieces that you’ll need to understand:

  • OpenStack - An infrastructure as a service cloud platform. I’ll be using this in lieu of Amazon.
  • Terraform - Terraform allows for automated creation of servers, external IPs, etc. across a multitude of cloud environments. This was a key choice to allow for a seamless transition to creating resources in both Amazon and OpenStack.
  • Ansible - Ansible is a configuration management platform that automates things like package installation and config file setup. We will use a set of Ansible playbooks called KubeSpray Kargo to setup Kubernetes.
  • Kubernetes - And finally we get to K8s! All of the tools above will come together to give us a fully functioning cluster.

Clone KubeSpray’s Kargo

First we’ll want to pull down the Ansible playbooks we want to use.

  • If you’ve never installed Ansible, it’s quite easy on a Mac with brew install ansible. Other instructions can be found here.

  • Ensure git is also installed with brew install git.

  • Create a directory for all of your deployment files and change into that directory. I called mine ‘terra-spray’.

  • Issue git clone git@github.com:kubespray/kargo.git. A new directory called kargo will be created with the playbooks:

Spencers-MBP:terra-spray spencer$ ls -lah
total 104
drwxr-xr-x  13 spencer  staff   442B Apr  6 12:48 .
drwxr-xr-x  12 spencer  staff   408B Apr  5 16:45 ..
drwxr-xr-x  15 spencer  staff   510B Apr  5 16:55 kargo
  • Note that there are a plethora of different options available with Kargo. I highly recommend spending some time reading up on the project and the different playbooks out there in order to deploy the specific cluster type you may need.

Create Terraform Templates

We want to create two terraform templates, the first will create our OpenStack infrastructure, while the second will create an Ansible inventory file for kargo to use. Additionally, we will create a variable file where we can populate our desired OpenStack variables as needed. The Terraform syntax can look a bit daunting at first, but it starts to make sense as we look at it more and see it in action.

  • Create all of the files with touch 00-create-k8s-nodes.tf 01-create-inv.tf terraform.tfvars. The .tf and .tfvars extensions are Terraform-specific.

  • In the variables file, terraform.tfvars, populate with the following information and update the variables to reflect your OpenStack installation:

node-count="2"
internal-ip-pool="private"
floating-ip-pool="public"
image-name="Ubuntu-14.04.2-LTS"
image-flavor="m1.small"
security-groups="default,k8s-cluster"
key-pair="spencer-key"
  • Now we want to create our Kubernetes master and nodes using the variables described above. Open 00-create-k8s-nodes.tf and add the following:
##Setup needed variables
variable "node-count" {}
variable "internal-ip-pool" {}
variable "floating-ip-pool" {}
variable "image-name" {}
variable "image-flavor" {}
variable "security-groups" {}
variable "key-pair" {}

##Create a single master node and floating IP
resource "openstack_compute_floatingip_v2" "master-ip" {
  pool = "${var.floating-ip-pool}"
}

resource "openstack_compute_instance_v2" "k8s-master" {
  name = "k8s-master"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${openstack_compute_floatingip_v2.master-ip.address}"
}

##Create desired number of k8s nodes and floating IPs
resource "openstack_compute_floatingip_v2" "node-ip" {
  pool = "${var.floating-ip-pool}"
  count = "${var.node-count}"
}

resource "openstack_compute_instance_v2" "k8s-node" {
  count = "${var.node-count}"
  name = "k8s-node-${count.index}"
  image_name = "${var.image-name}"
  flavor_name = "${var.image-flavor}"
  key_pair = "${var.key-pair}"
  security_groups = ["${split(",", var.security-groups)}"]
  network {
    name = "${var.internal-ip-pool}"
  }
  floating_ip = "${element(openstack_compute_floatingip_v2.node-ip.*.address, count.index)}"
}
  • Now, with what we have here, our infrastructure is provisioned on OpenStack. However, we want to get the information about our infrastructure into the Kargo playbooks to use as its Ansible inventory. Add the following to 01-create-inv.tf:
resource "null_resource" "ansible-provision" {

  depends_on = ["openstack_compute_instance_v2.k8s-master","openstack_compute_instance_v2.k8s-node"]

  ##Create Masters Inventory
  provisioner "local-exec" {
    command =  "echo \"[kube-master]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" > kargo/inventory/inventory"
  }

  ##Create ETCD Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[etcd]\n${openstack_compute_instance_v2.k8s-master.name} ansible_ssh_host=${openstack_compute_floatingip_v2.master-ip.address}\" >> kargo/inventory/inventory"
  }

  ##Create Nodes Inventory
  provisioner "local-exec" {
    command =  "echo \"\n[kube-node]\" >> kargo/inventory/inventory"
  }
  provisioner "local-exec" {
    command =  "echo \"${join("\n",formatlist("%s ansible_ssh_host=%s", openstack_compute_instance_v2.k8s-node.*.name, openstack_compute_floatingip_v2.node-ip.*.address))}\" >> kargo/inventory/inventory"
  }

  provisioner "local-exec" {
    command =  "echo \"\n[k8s-cluster:children]\nkube-node\nkube-master\" >> kargo/inventory/inventory"
  }
}

This template certainly looks a little confusing, but what is happening is that Terraform is taking the information for the created Kubernetes masters and nodes and outputting the hostnames and IP addresses into the Ansible inventory format at a local path of ./kargo/inventory/inventory. A sample output looks like:

[kube-master]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[etcd]
k8s-master ansible_ssh_host=xxx.xxx.xxx.xxx

[kube-node]
k8s-node-0 ansible_ssh_host=xxx.xxx.xxx.xxx
k8s-node-1 ansible_ssh_host=xxx.xxx.xxx.xxx

[k8s-cluster:children]
kube-node
kube-master

Setup OpenStack

You may have noticed in the Terraform section that we attached a k8s-cluster security group in our variables file. You will need to set this security group up to allow for the necessary ports used by Kubernetes. Follow this list and enter them into Horizon.

Hold On To Your Butts!

Now that Terraform is set up, we should be able to launch our cluster and have it provision using the Kargo playbooks we checked out. But first, one small BASH script to ensure things run in the proper order.

  • Create a file called cluster-up.sh and open it for editing. Paste the following:
#!/bin/bash

##Create infrastructure and inventory file
echo "Creating infrastructure"
terraform apply

##Run Ansible playbooks
echo "Quick sleep while instances spin up"
sleep 120
echo "Ansible provisioning"
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i kargo/inventory/inventory -u ubuntu -b kargo/cluster.yml

You’ll notice I included a two-minute sleep to cover the window where the instances created by Terraform aren’t quite ready for an SSH session when Ansible starts reaching out to them. Finally, update the -u flag in the ansible-playbook command to a user that has SSH access to the OpenStack instances you have created. I used ubuntu because that’s the default SSH user for Ubuntu cloud images.

  • Source your OpenStack credentials file with source /path/to/credfile.sh

  • Launch the cluster with ./cluster-up.sh. The Ansible deployment will take quite a bit of time as the necessary packages are downloaded and setup.

  • Assuming all goes as planned, SSH into your Kubernetes master and issue kubectl get nodes:

ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS    AGE
k8s-node-0   Ready     1m
k8s-node-1   Ready     1m