Ansible: From OSX to AWS

My goal in this post is to go from 0 to Ansible installed on my Mac and then be able to provision AWS instances ready to run Docker containers.  The code for this post is public on my github account.

OSX Setup

I am running OS X Yosemite, and I use brew to make things easy.  Install Homebrew first; with it, installing Ansible is a one-liner:

brew install ansible

I have one machine up at AWS right now, so let's test talking to it.  First, we create the hosts file:
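(On a Homebrew install the default inventory lives at /usr/local/etc/ansible/hosts, as we'll see below, so creating it is roughly:)

# Homebrew usually owns /usr/local, so sudo may not be needed
mkdir -p /usr/local/etc/ansible
touch /usr/local/etc/ansible/hosts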

Now we put in our host:

instance1

I can do it this way because I have a file ~/.ssh/config that looks like the following:
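(A sketch of that kind of entry; the hostname and key path below are made up, so substitute your own:)

Host instance1
    # hypothetical public DNS name and keypair
    HostName ec2-12-34-56-78.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/mykeypair.pem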

Now we can test:
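(Something as simple as the ping module, or an ad-hoc command, will do:)

ansible all -m ping
ansible instance1 -a "uptime"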

Where to go next? I downloaded the free PDF by @lhochstein, which has 3 chapters that go over the nuts and bolts of Ansible, so I was ready to bake an image.  But first let's look at how Ansible is installed on OS X:
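(A quick way to see where Homebrew put everything:)

brew info ansible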

The default hosts file is, as we already saw, in /usr/local/etc/ansible/hosts.  We also have a config file we can create in ~/.ansible.cfg.  More on that later.

The other thing we have is the default modules that ship with Ansible.  These are located in /usr/local/Cellar/ansible/1.7.2/share/ansible/ (if you're running the same version I am).

If you look in this directory and its subdirectories, you'll see all the modules that Ansible comes with.  I think all of these modules have documentation in the code, but the easiest way to read it is to run

ansible-doc <module-name>

Since we need to provision instances, we can look at the ec2 module:
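ansible-doc ec2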

This gives us a lot of information on the options you can use to deploy EC2 instances.

An nginx Playbook

Let’s take a step back and do something simple like deploy nginx on our instance using an Ansible Playbook.

I created an Ansible playbook called ~/Code/ansible/nginx.yml.  The contents are the following:
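(What's below is a rough sketch of that kind of playbook, assuming an Ubuntu host and Ansible 1.x syntax; the exact file is in my github repo.)

---
- name: install and configure nginx
  hosts: instance1
  sudo: yes
  tasks:
    - name: install nginx
      apt: name=nginx state=present update_cache=yes

    - name: copy our nginx config into place
      copy: src=files/nginx.conf dest=/etc/nginx/sites-available/default
      notify: restart nginx

    - name: copy the index page
      copy: src=files/index.html dest=/usr/share/nginx/html/index.html

    - name: make sure nginx is running
      service: name=nginx state=started enabled=yes

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted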

I then created the file ~/Code/ansible/files/nginx.conf:
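(A minimal sketch that just serves the static index page, written as a server block since the playbook sketch above drops it into sites-available:)

server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}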

Finally, I created ~/Code/ansible/files/index.html:
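(The page can be anything; for example:)

<html>
  <head><title>ansible nginx demo</title></head>
  <body>
    <h1>Configured by Ansible</h1>
  </body>
</html>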

With this I run the command:

ansible-playbook nginx.yml

If you are lucky, you have cowsay installed; if so, you get a cow telling you what's happening.  If not, you can install it:

brew install cowsay

Now navigate to the IP address of the instance, and magic!  You have a web server configured by Ansible.  You can already see how useful this is!  Now, configuring a web server on an AWS instance is not today's hotness.  The real hotness is creating a Docker container that runs a web server.  So we can just tell Ansible to install Docker.  From there, we would just install our Docker containers and run them.

A Docker Playbook

In one of my previous blog entries, I showed the steps I took to get Docker running on an Ubuntu image.  Let's take those steps and put them in an Ansible playbook:
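(Again a rough sketch rather than the repo's exact file, using Ubuntu's docker.io package, which was one of several ways to install Docker at the time:)

---
- name: set up docker on the instance
  hosts: instance1
  sudo: yes
  tasks:
    - name: update the apt cache
      apt: update_cache=yes cache_valid_time=3600

    - name: install the docker package from the ubuntu archive
      apt: name=docker.io state=present

    - name: make sure the docker daemon is running
      # on some setups the service is named docker.io rather than docker
      service: name=docker.io state=started enabled=yes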

Here we use some of Ansible's built-in modules that deal with package management.  You can see the descriptions and what's available by reading Ansible's documentation.

We run this on our host:

ansible-playbook -vvv docker.yml 

And now we can ssh into the host and launch a container:

sudo docker run --rm -i -t ubuntu /bin/bash

This, to me, is the ultimate way to automate our infrastructure: we use Ansible to create our instances, we use Ansible to set up the environment for Docker, and then we use Ansible to deploy our containers.

All the work for our specific application settings is done with the Dockerfile.

Provisioning AWS Machines

Up until now, all of our work has been against a single host, instance1, which we configured in our hosts file.  Ansible is much more powerful than that.   We're going to modify our ~/.ansible.cfg file to point to a different place for hosts:
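(A sketch of what such an ~/.ansible.cfg looks like; hostfile is the Ansible 1.x option name, and the key path and remote user are placeholders:)

[defaults]
# use the dynamic inventory directory instead of the default hosts file
hostfile = ~/Code/Ansible/inventory
# log in to new instances with the AWS keypair
private_key_file = ~/.ssh/mykeypair.pem
remote_user = ubuntu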

This uses my AWS keypair for logging into the remote servers I’m going to create.  I now need to create the inventory directory:

mkdir ~/Code/Ansible/inventory

Inside this directory I'm going to put a script: ec2.py.  This script comes with Ansible, but the one that came with my distribution didn't work.
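(However you get a working copy — the Ansible project's GitHub repository has the current one — drop it into the inventory directory, make it executable, and sanity-check it:)

chmod +x ~/Code/Ansible/inventory/ec2.py

# it should dump your EC2 inventory as JSON
~/Code/Ansible/inventory/ec2.py --list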


The ec2.py file also expects an accompanying ec2.ini file:
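(The full ec2.ini has many knobs; a trimmed-down sketch with the common settings, with the region chosen as a placeholder:)

[ec2]
# limit the inventory to the regions you actually use
regions = us-west-1
regions_exclude = us-gov-west-1, cn-north-1

# how to address the instances ansible finds
destination_variable = public_dns_name
vpc_destination_variable = ip_address

# cache API results so repeated runs are fast
cache_path = ~/.ansible/tmp
cache_max_age = 300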

You can modify this to suit your environment.  I’m also assuming you have boto installed already and a ~/.boto file.  If not, see how I created mine here.
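(For reference, a minimal ~/.boto just holds the API credentials; the values below are placeholders:)

[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY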

Let’s see if we can now talk to our hosts:

ansible all -a date

Hopefully you got something back that looked like a date and not an error.   The nodes returned from this list will all be in the ec2 group.  I think there is a way to use tags to distinguish them further, but I haven't had a chance to do that yet.

We now need to lay our directory structure out for something a little bigger.  The best practices for this are listed here.  My project is a little simpler, as I only have my EC2 hosts and I'm just playing with them.  This stuff can get serious.  You can explore how I lay out my directories and files by viewing my github repository.

The most interesting file of the new layout is my ~/Code/Ansible/roles/ec2/tasks/main.yml file.  This file looks like the below:
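(Roughly; the variable names here are illustrative, and the real ones are in the repo.  This play runs on localhost with a local connection and talks to the EC2 API via boto:)

---
- name: provision an ec2 instance
  ec2:
    keypair: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    wait: yes
  register: ec2_info

- name: wait for ssh to come up on the new instance
  wait_for: host={{ item.public_dns_name }} port=22 delay=10 timeout=320 state=started
  with_items: "{{ ec2_info.instances }}"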

I use a variable file that defines the {{ something }} variables.  Again, check out my github repo.   This file provisions a machine (similar to the configuration from my Python post) and then waits for SSH to come up.

In my root directory I have a file called site.yml that brings the instances up and then configures them.  Can you see how magic this is?
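(The general shape of site.yml, assuming the roles are named ec2 and docker — the docker role name is my guess — is something like:)

---
# provision the instances from my laptop, then configure them
- name: stand up the ec2 instances
  hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - ec2

- name: configure everything in the dynamic ec2 group
  hosts: ec2
  sudo: yes
  roles:
    - docker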

We run:

ansible-playbook site.yml

This makes Ansible go and deploy one EC2 host.  It waits for the host to become available, then SSHes into the instance and sets up Docker.  Our next step would be to create a few Docker playbooks to run our applications.  Then we can completely create our production environment.

One step closer to automating all the things!

If you found errors, corrections, or just enjoyed the article, I’d love to hear from you: @vallard.