Kubernetes on Metacloud (COPC)

The Kubernetes 1.0 launch happened on July 21st at OSCON here in Portland, OR, and I was super happy to be there in the back of the room picking up loads of free stickers while the big event unfolded.  I spent the day before at a Kubernetes bootcamp, which was really just a lab on using it with GCE (or GKE for containers), and it was pretty cool.  But now I felt I really should do a little more to understand it.

To install Kubernetes on Metacloud (or what Cisco now calls Cisco OpenStack Private Cloud) I’m using CoreOS.  I like CoreOS because it’s lightweight and built for containers.  There are a few guides out there, like a good one on Digital Ocean, but they’re already pretty outdated (and not even a year old!).  Installing Kubernetes on CoreOS on OpenStack is pretty easy now!

I should note that I’m using Cisco OpenStack Private Cloud, but these steps can be used with any OpenStack distribution.  I followed most of the steps from the Kubernetes documentation.  (You’ll notice on their site that there are no instructions for OpenStack.  I opened an issue, which I hope to help with.)

Anyway, the gist is here with all the instructions, but I’m more of the mindset to use Ansible.

Install Kubernetes

First download the cloud-init files that Kelsey Hightower created.  These make installing this super simple.  Get the master and the node.

We then create a master task that looks something like this:
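A sketch of what that task might look like, using the old nova_compute module; the variable names and file layout are my assumptions, so adapt them to your own vars_file:

```yaml
- name: launch the kubernetes master
  nova_compute:
    state: present
    login_username: "{{ lookup('env', 'OS_USERNAME') }}"
    login_password: "{{ lookup('env', 'OS_PASSWORD') }}"
    login_tenant_name: "{{ lookup('env', 'OS_TENANT_NAME') }}"
    auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
    name: kube01
    image_id: "{{ coreos_image_id }}"
    flavor_id: "{{ kube_flavor_id }}"
    key_name: "{{ key_name }}"
    user_data: "{{ lookup('file', 'master.yml') }}"
  register: nova
```

The register at the end saves the instance details so later tasks can reference them.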

Here I’m heavily using environment variables that should be defined elsewhere.  I call them out with a vars_file that has most of these.  The credentials are stored in the ~/.bash_profile and so live externally to the vars_file.  That’s where we keep our username, endpoints, and password.

You’ll have to have a CoreOS image already created in your cloud to use this.  I got mine from here.  Then I used glance and uploaded it.
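The upload looks something like this (the file name matches the CoreOS OpenStack download; the image name is up to you):

```shell
bunzip2 coreos_production_openstack_image.img.bz2
glance image-create --name coreos \
  --container-format bare --disk-format qcow2 \
  --is-public True --file coreos_production_openstack_image.img
```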

The user_data parameter points to the file created by the Kubernetes community, which configures the parameters required for Kubernetes at boot.

The minion node’s configuration is similar:

Note that you have to edit the node.yml file to point to the master (kube01 in our example).

At this point I’ve been a little lazy and didn’t go do the variable substitution.  Someday, I’ll get around to that.  But as a hint, since we registered ‘nova’ in the first task we can get the private IP address with this flag:
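Assuming the task was registered as ‘nova’, the lookup is something like the following (the exact attribute path depends on the module version, so treat this as a sketch):

```yaml
- name: show the master's private IP
  debug:
    msg: "master is at {{ nova.info.addresses.private[0].addr }}"
```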

Just put that after the creation of the master.

The github repo for this is here.

Using Kubectl

Once our cluster is installed we can now run stuff.  I have a mac so I set it up like this following the instructions here:
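On a Mac that boils down to grabbing the darwin build of kubectl; the version number here is an assumption, so substitute the current release:

```shell
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```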

Now we set the proxy up so we can run kubectl on our master:
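An SSH tunnel does the trick; this sketch assumes the master is reachable as kube01 and that the API server listens locally on port 8080:

```shell
ssh -f -nNT -L 8080:127.0.0.1:8080 core@kube01
```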

Make sure that when you check your path, the kubectl from /usr/local/bin shows up rather than one left over from Google’s GCE tooling.

Check that it works by running:
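A couple of quick sanity checks will do:

```shell
kubectl cluster-info
kubectl get nodes
```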

Now we can launch something!  Let’s use the Hello World example on the Kubernetes documentation site.  Create this file and name it hello-world.yaml
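A minimal manifest along these lines should do; the image choice follows the upstream example, but treat the details as a sketch rather than the exact file from the docs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  containers:
    - name: hello-world
      image: nginx
      ports:
        - containerPort: 80
```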

Then create it:
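With kubectl pointed at the cluster:

```shell
kubectl create -f hello-world.yaml
kubectl get pods
```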

There are several other examples as well and I encourage you to go to the official guides!  Let me know if this was helpful to you with a quick hello on Twitter!


Go with NX-API

I’ve been working on a project to collect data from Cisco Nexus switches.  I first tackled making SNMP calls to collect counter statistics but then I thought, why not try it with the NX-API that came with the Nexus 9ks?

I hoped the documentation for the APIs would be better, but the samples on GitHub are enough to get anybody going… as long as you do it in Python.  But these days the cool systems programmers are moving to Go for several reasons:

  1. The concurrency capabilities will take your breath away.  Yes, I go crazy with all the goroutines and channels.  It’s a really cool feature of the language and great for when you want to run a lot of tasks in parallel.  (Like logging into a bunch of switches and capturing data, maybe?)
  2. The binaries can be distributed without any hassle of dependencies.  Why does this matter?  Well, for example, if I want to run the python-novaclient commands on my machine, I have to first install Python and its dependencies, then run pip to install the packages and their dependencies.  I’m always looking for ways to make trying new things easier, and static binaries ease the friction.

After playing around with the switch I finally got something working so I thought I’d share it.  The code I developed is on my sticky pipe project.  For the TL;DR version of this post check out the function getNXAPIData for the working stuff.  The rest of this will walk through making a call to the switch.

1.  Get the parameters.

You’ll have to figure out a way to get the username and password from the user.  In my program I used environment variables, but you may also want to take command-line arguments.  There are lots of examples of that on the internet, so I won’t go into much detail beyond something simple like:

2. Create the message to send to the Nexus

The NX-API isn’t a RESTful API.  Instead, you just enter Nexus commands as you would on the command line, and the NX-API responds with the output in JSON notation.  You can also get XML, but why in the world would you do that to yourself?  XML was cool like 10 years ago, but let’s move on, people!  There’s also a JSON-RPC format, but I don’t see what it gives you over plain JSON other than the ability to order commands with extra flags.  Stick with JSON and your life will not suck.

Here’s how we do that:

This format of a message seems to handle any JSON that we want to throw at the Nexus.  This is really all you need to send your go robots forth to manage your world.

3.  Connect to the Nexus Switch

I start by creating an http.NewRequest.  The parameters are

  • POST – This is the type of HTTP request I’m sending
  • The switch – This needs to be either http or https (I haven’t tried https yet).  Then the switch IP address (or hostname) has to be terminated with the /ins path.  See the example below.
  • The body of the POST request.  This is the bytes.NewBuffer(jsonStr) that we created in the previous step.
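Putting those three parameters together (the switch address here is a made-up example):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newNXAPIRequest builds the POST request.  Note the URL must end in
// /ins; the host is whatever IP or hostname your switch uses.
func newNXAPIRequest(host string, jsonStr []byte) (*http.Request, error) {
	return http.NewRequest("POST", "http://"+host+"/ins", bytes.NewBuffer(jsonStr))
}

func main() {
	req, err := newNXAPIRequest("192.0.2.10", []byte(`{}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```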

After checking for errors, now we need to set some headers, including the credentials.

This header also tells us that we are talking about JSON data.

Finally, we make the request and check for errors, etc:

That last line, with the defer statement, closes the connection after we leave this function.  Running this should actually execute the command you’re looking to run.  Closing is important because if you’re banging on that switch a lot, you don’t want zombie connections clogging your stack.  Let him that readeth understand.

Step 4: Parse the Output

At this point, we should be able to talk to the switch and have all kinds of stuff show up in resp.Body.  You can see the raw output with something like the following:

But most likely you’ll want to get the information from the JSON output.  To do that I created a few structs that this command should match nearly every time.  I put those in a separate package called nxapi and call them from my other functions, as will be shown later.  Those structs are:

These structs map to what the NX-API usually returns in JSON:
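The general shape is something like this (the body keys vary by command, so the inner object here is just illustrative):

```json
{
  "ins_api": {
    "type": "cli_show",
    "version": "1.0",
    "sid": "eoc",
    "outputs": {
      "output": {
        "input": "show version",
        "msg": "Success",
        "code": "200",
        "body": { "host_name": "nx9k-lab" }
      }
    }
  }
}
```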

The outputs may also be an array if multiple commands were entered.  (At least that’s what I saw via the NX-API developer sandbox.  If there is an error, you’ll see “clierror” in place of Body in the output.)

So this should be mostly type safe.  It may be better to omit the Body from the type Output struct.

Returning to our main program, we can get the JSON data and load it into a struct where we can parse through it.

In the above command, I’m looking to parse output from the “show version” command.  When I find that the input was the show version command, then I can use the keys from the body to get information from what was returned to us by the switch.  In this case the output will give us the hostname of the switch.


This brief tutorial left out all the goroutines and other fanciness of Go in order to keep it simple.  Once you have this part, you are ready to write a serious monitoring or configuration tool.  Armed with this, you can now make calls to the NX-API using Go.  Do me a favor and let me know on Twitter if this was useful to you!  Thanks!


Notes from Dockercon 2015


I was fortunate enough to be able to attend my first DockerCon this year in San Francisco.  I wanted to write up a few thoughts I had while attending the conference.

1.  The Marketing

Docker and the other startups do this so well.  Every logo is cute, and the swag made for good things to take home to my kids.  But seriously, the Docker whale made out of Legos, the Docker plush toy distributed at the day 2 keynote, the Lego Docker kits, the stickers, the shirts!  Wow!  I think the work that the team has done is fantastic.  When I compare that to the work we do at Cisco with a product called “Cisco OpenStack Private Cloud,” I just shudder; we could do a lot better.

I want to especially call out the work that Laurel does for Docker.  The design, the comics, everything just worked so well, and I thought it was by far the star of the show.  She was even nice enough to respond to some questions I had on Twitter!

I will say this though.  There were lots of booths stacked with flashy logos, cool T-shirts and stickers, but may have been more frosting than cake.  I decided I needed to invent a product and call it Cisco Spud and make a cute potato logo and see if I could get some interest around here.

2.  Microsoft

Microsoft’s demo was beyond impressive.  If it works like it did in the demo, this is something Microsoft developers can be really excited about.  The demo showed a fully integrated solution: running Visual Studio on a Mac, then submitting through Microsoft’s own continuous integration and deployment, all with containers.  It then went on to show containers running in Azure.  Microsoft’s booth was full of swag showing love for Linux via Docker containers.  Good show by Microsoft!

I’ll add one more thing here: Microsoft said they have been the number one contributor to Docker since last April.  Now, why do you think that is?  Pretty simple: lots of Windows code.  It’s funny how you can spin something that is in your own self-serving interest as something that is good for the community.


3.  What are people paying for?

It was pretty obvious from this conference and from a previous talk given by Adrian Cockcroft of Battery Ventures at DockerCon EU that people are not willing to pay for middleware like Docker.  I would extend that to say people don’t seem to be willing to pay for plumbing.  There were several networking companies I spoke with including Weave Networks where they are basically giving away their open source networking stacks for people to use.  That doesn’t bode well for a company like Cisco that makes its money on plumbing.  So what are people paying for and what can we learn from DockerCon?

  1. Subscriptions to Enterprise versions of free things.  People are paying for subscription services and support, as Red Hat has shown.  Docker introduced its commercial trusted registries for businesses.  This is great for people who need a little hand-holding and want a nicely supported version of the Registry.  It’s not too hard for an organization to spin one of these up themselves (as I showed in a previous blog post), but that is fraught with security problems and cumbersome to secure.  Consumption economics FTW.  The key seems to be to launch a successful open source product and then offer the commercial support package.
  2. Logging & Analytics.  As Splunk and others showed, people are still willing to pay to visualize logs and data and to manage all the overwhelming information.  I thought the slide shown by Ben Golub was insightful for the enterprise.  People are looking to harness big data, logging, and analytics.  I was surprised to see how high Hortonworks ranked.  There were several visualization companies in the booths; I’m sad I didn’t have time to talk to all of them.
  3. Cloud Platforms and usage-based services.  This should be no surprise, but what was surprising was the number of talks I attended where Docker was used on-prem in customers’ own data centers.  I was half expecting this conference to be an AWS love fest, but it wasn’t.  With Azure’s show of containers in the marketplace and AWS’s continued development of ECS, there is a sure place where companies can make money: offering a cloud platform where people can run these darn things!

Maybe there were other things you noticed there that people were willing to pay for? Not T-shirts!  Those were given out as freely as the software!

4.  The future of PaaS

I’ve been a strong proponent of the idea that current PaaS platforms are doomed and already irrelevant.  I think Cloud Foundry and OpenShift may have some relevance today, but I certainly see no need for them (yes, I’m myopic; yes, I lack vision; fine).  Instead, containers themselves provide the platform-as-a-service that’s required.  While several PaaS vendors were on site to show their open source wares, I just don’t get why I need them when I can have a set of containers available for people to use.  This was further cemented by the demonstration of Docker’s Project Orca.


Orca made me quickly forget all of Microsoft’s shiny demos.  This was what I was really expecting to see unveiled at DockerCon: the vCenter of Docker containers.  While the project is still locked down, the demo was great.  It had a lot of the features you’d want: seeing where your containers are, what they’re running, etc.  If there were a mode where users could log in and get a self-service view of the containers they can launch, this would be all you needed from a PaaS.  Maybe that’s what OpenShift and Cloud Foundry do today with a lot of extra bloat, but I am expecting big things from this project, as well as another monetization stream for Docker.  As Scott Johnston said when announcing the commercial offerings, “There’s got to be a commercial offering in here somewhere!”  I suspect this one could eventually lead to even greater revenues.

The Vision

I had a front row seat as you can see :-)

I enjoyed the opening keynote presentation by Solomon Hykes best.  He laid out 3 goals and some subgoals while introducing various things you’ve probably already read about (appc, Docker Networking, etc.)

Goal 1: Program the Internet for the Next 5 years.

  1. Runtime: a container (Docker Engine)
  2. Package and Distribution (Docker Hub)
  3. Service Composition: (Docker Compose previously loved as Fig)
  4. Machine Management (Docker Machine)
  5. Clustering (Docker Swarm)
  6. Networking (Docker Network)
  7. Extensibility (Docker Plugins)

Goal 2: Focus on Infrastructure Plumbing

  1. Break up the monolith that is Docker.  runC was introduced: the container runtime extracted from Docker Engine (only about 5% of the existing code).
  2. The Docker plumbing project will take a long time, but it will be useful.  Build things the Unix way: small, simple, single-purpose tools.
  3. Docker Notary: secure the plumbing, i.e., how can we make downloads safer?  I was sad to see this didn’t use a blockchain method, but maybe that’s because I’m too much of a Bitcoin zealot.

Goal 3: Promote Open Standards

This part was great.  This is where CoreOS and Docker kissed and made up on stage.  I loved the idea of an open container project, and I loved how every titan and their spinoff was a logo on it lending their support.

runC is now the default container runtime, and I’m expecting big things as we move forward with it.


This was a really neat conference to attend.  What I liked best was talking to strangers at some of the tables where I dined.  I wish I could have done more of it.  I’m not in it for the networking of people but, selfishly, for the networking of ideas.  I asked a lot of questions: Are you running a private registry?  How are you securing it?  Are you running in production?  What are you currently working on?  What are you trying to solve?

It’s hard, though, because I’m not a complete extrovert; I’m more of an extroverted introvert.  I’m also sad to say I didn’t do enough of it, and that will be my resolution for the next conference: connect with more strangers!


Getting Started with AWS EMR

AWS Elastic MapReduce (EMR) is basically a front end to an army of large EC2 instances running Hadoop.  The idea is that it gets its data from S3 buckets, runs the jobs, and then stores the results back in S3 buckets.  I skimmed through a book on it but didn’t get much out of it.  You are better off learning cool algorithms and general theory instead of specializing in EMR (plus the book was dated).  Also, the EMR interface is pretty intuitive.

To get data up to be worked on, we first have to upload it to S3.  I used s3cmd, but you could use the web interface as well.  I have a Mac, so I ran the commands below to install and configure the command line tool:
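Via Homebrew it’s just (installing through pip works too):

```shell
brew install s3cmd
s3cmd --configure
```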

The configure command should have tested that you have access.  Once you do, you can create a bucket.  I’m going to make one for storing sacred texts.

Now let’s upload a few files
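The bucket name below matches the path used later in the EMR job; the local file names are placeholders:

```shell
s3cmd mb s3://sacred-texts
s3cmd put *.txt s3://sacred-texts/texts/
```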

Side Note:

How much does it cost to store data?  Not much.  According to the pricing guide, we get charged $0.03 per GB per month.  Since this data so far isn’t even 1GB, we’re not hurting.  But then there are also the request charges: GET requests are $0.004 per 10,000 requests.  Since I won’t be making many of those, we should be OK.  There’s also the data transfer pricing.  Transfer into AWS is free, and transfer out (via the Internet) costs nothing for the first GB/month.

You can see how this can add up.  Suppose I had 300 TB of data.  It costs $0 to put in, but then costs $8,850 (300,000GB * $0.0295/GB) / month to sit there. That adds up to be $106,200/yr.  If you wanted to take that out of AWS then it costs  $15,000 to move it. (300,000GB * $0.050/GB)

Creating EMR cluster and running

Now let’s create an EMR cluster and run it.  EMR is really just a front end for launching jobs on preconfigured EC2 instances.  It’s almost like a PaaS for Hadoop, Spark, etc.  The nice thing is that it comes with useful tools and special language-processing tools like Pig (things that Nathan Marz discourages us from using).

First create a cluster.



We chose the sample application at the top that does word counts for us.  We then modified it to read from our own directory (s3://sacred-texts/texts/).  This will load all of our texts and get the word count for each of the files.

The cluster then provisions, and we wait for the setup to complete.  The setup takes a lot longer than the actual job takes to run!  The job soon finishes.


Once done, we can look at our output.  It’s all in the S3 bucket we told it to use.  Traversing the directory, we have a bunch of output files.


Each one of these is a word count of each of the parts: (some of part-0000 is shown below)

This is similar to what we did in the previous post, but here we used Hadoop, and we ran it over more files than just one piece of text.  We also wrote no code to do this.  However, it’s not giving us the most meaningful information; in fact, this output doesn’t give us the combined counts.  To get those, we can combine the files and then run a simple Unix sort over them:
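Once the part-* files are downloaded locally, the combine-and-sort is a one-liner.  The tiny files below are made-up stand-ins so the example runs anywhere:

```shell
# toy stand-ins for the downloaded part-* output files
printf 'the 4120\nof 3527\n' > part-0000
printf 'and 2900\nunto 1500\n' > part-0001

# combine all parts and sort by count, descending
cat part-* | sort -k2,2 -rn
```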

Now there are so many questions we could start asking once you have data and the computing power to help you ask them.  For example, we could search Twitter for ‘happy because’ and find out what people are happy about, or for ‘bummed’ or ‘sad because’ and find out why people are sad, using simple word counts.

At the end, I deleted all my stuff on S3.

I had to clear the logs as well.  How much did this cost?

Well, I had to do it three times to get it right.  Each time it launched a cluster of 3 m3.xlarge instances.  If we were using them as standard EC2 instances it would be $0.280/hr, but since we used them through EMR, it only cost us $0.07/hr.  So 9 instance-hours * $0.07 = $0.63 to try that out.

You can see how this can be a pretty compelling setup for small data and for experimenting.  This is the main point.  Experimenting is great with EMR but when it comes to any scale of infrastructure, the costs can get high pretty quick.  Especially if you are always churning the data and constantly creating new batches with jobs running all the time as new data comes in.

If you are curious, I put the data out on github.  Also, to note, the total cost for this experiment was about $0.66 ($0.63 for EMR instances + $0.03 for S3 storage). Pretty cheap way to get into the world of big data!

Data Analysis

Data analytics is the hottest thing around.  Knowing how to structure and manipulate data, and then find answers, is the sexiest job on the market today!


Before getting into cool things like apache spark, hadoop, or using EMR, let’s just start out with a basic example:  Word count on a book.

Project Gutenberg has a ton of books out there.  I’m going to choose one and then just do some basic text manipulation.  Not everything needs big data, and there’s a lot you can do from your own laptop.

I’m going to borrow from a great post here and do some manipulation on some text I found.  I’m not going to use any Hadoop.  Just plain Python.

I’ll use the same script, but I stripped out the punctuation with some help from Stack Overflow:
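Here’s a sketch of what that mapper looks like, updated for Python 3; the punctuation stripping uses the usual str.translate trick:

```python
#!/usr/bin/env python
# mapper.py -- emit "<word> 1" for every word on stdin, with
# punctuation stripped
import string
import sys


def map_words(lines):
    for line in lines:
        # drop punctuation, then split on whitespace
        cleaned = line.translate(str.maketrans('', '', string.punctuation))
        for word in cleaned.split():
            yield '%s 1' % word


if __name__ == '__main__':
    for pair in map_words(sys.stdin):
        print(pair)
```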


Here’s the pass:
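The pass itself is a simple pipeline (bom.txt is my stand-in name for the downloaded text):

```shell
cat bom.txt | ./mapper.py > bom-out.txt
```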

We’re simply printing all the text, piping it to our mapper program, and storing the results in the bom-out.txt file.

So now we have bom-out.txt which is just a word with a count next to it:

foo 1

Now we need to sort it.

So now we have bom-sorted-out.txt.  Next up, we do the word count.
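The counting step can be a classic streaming-style reducer; a sketch (it assumes the input is already sorted, which is why we sorted first):

```python
#!/usr/bin/env python
# reducer.py -- sum the count for each word; assumes sorted input so
# that equal words are adjacent
import sys


def reduce_counts(lines):
    current_word, current_count = None, 0
    for line in lines:
        word, count = line.rsplit(None, 1)
        count = int(count)
        if word == current_word:
            current_count += count
        else:
            if current_word is not None:
                yield '%s %d' % (current_word, current_count)
            current_word, current_count = word, count
    if current_word is not None:
        yield '%s %d' % (current_word, current_count)


if __name__ == '__main__':
    for pair in reduce_counts(sys.stdin):
        print(pair)
```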

This gives us the output, but now let’s see which word is used the most.  This is another sort.

This gives some of the usual suspects:

We could probably do better if we made it so case didn’t matter.  We could then also do it all in one pass.  Let’s try it.

Version 2

In the mapper script we change the last line to be:
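In other words, something like this (emit here is a stand-in for the mapper’s final print):

```python
def emit(word):
    # the mapper now lowercases before emitting
    return '%s 1' % word.lower()


print(emit('Moroni'))
```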

This way it spits everything out in lowercase.  Now, to run the whole thing in one line, we do the following:
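Assuming the map and count scripts live in mapper.py and reducer.py (the names are my assumption), the one-liner is:

```shell
cat bom.txt | ./mapper.py | sort | ./reducer.py | sort -k2,2 -rn > bom-final.txt
```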

Now our output looks a little different:

What we’ve shown here is the beginning of what things like Hadoop do for us.  We have unstructured data and we apply two operations.  Map: this is where we emit each word.  Reduce: this is where we count how many times each word appeared.  In this case our data set wasn’t too huge and could be handled on a laptop.

Here’s another book:

There are still a few problems, but this seems to work well.  The next step is to use a natural language processing kit and find similar phrases.  We could then dump this into HDFS and process all kinds of books.  Lots of interesting places to go from here!

Last note: I did upload the data and scripts to GitHub.

Continuous Delivery of a Simple Web Application Tutorial – Part 4

In Part 1 we discussed the architecture of what we’re trying to build.

In Part 2 we created our development server with Ansible.

In Part 3 we finished off Ansible with creating our Load Balancers and WebServers and setup Git to check everything in.

In this last part, we’re going to configure Jenkins to use all of these components and do the orchestration for the application delivery pipeline.

1. Configure Global Security

When this Jenkins container first comes up, we lock it down so people can’t access it.  This is done by clicking Manage Jenkins and then Configure Global Security.  The ‘Enable Security’ box should be checked, and then you can decide how you want the other settings.  Mine looks like the screenshot below.


Notice that you have the option of tying this into an LDAP system.  I’m just using the local database and not allowing new users to sign up.

When you hit apply, you’ll then be able to create an account.  Once created, log back in with this account.

2. Install Plugins

The great thing about Jenkins is all the different plugins it provides for testing and automating tasks.  I added a few plugins for my setup, but you may need more to run your test cases.

I used the following plugins:  Gitlab, OpenStack Cloud Plugin, Docker Build and Publish, Slack, and SSH.







Once added, I applied the changes and restarted the system.

3. Configure Global Gitlab Plugin

From the main window, we go to Manage Jenkins and then Configure System.  We’ll add the Gitlab integration.  This is as simple as filling out the form for our Gitlab instance.  Using the internal hostname is the easiest way; however, if you put your Gitlab instance in another Cisco OpenStack Private Cloud availability zone, you may need to specify the public IP.  My configuration looks as below:


The API Token is found in the profile section under Gitlab.  Look at the account icon and then the ‘Account’ field.  Test the connection to make sure you can connect and that it shows ‘Success’.


4.  Configure global OpenStack Plugin

Connecting to Cisco OpenStack Private Cloud is now just a matter of entering in the right credentials.  We first add a new cloud.


The tricky thing here is to enter the right credentials.  Make sure the Identity you enter is in the form: <project name>:<username>.


Notice in the above example that the user name is jenkins-ci but the project name is LawnGnomed Production.  The credential is just the user’s password.  We used RegionOne for the region, which will probably be the same for most installations.

Once we are attached to the cloud, we need to make sure we have a slave that can be used.  The requirement for this slave is that Java is already installed.  CoreOS doesn’t come with Java, so we have to add it.

5.  The Jenkins Slave Instance

There was a great post I found that describes a method to get Java running on CoreOS as part of a post-install script.  Using this, I cloned the image and then booted subsequent instances from it.  I also made sure the jenkins user’s keys were installed so that the jenkins user could log into the instance without a password.  This is really all that is required for the slaves.  Once the image was built, I pointed the plugin at it in Jenkins:


Note that the place for Jenkins to configure jobs is in /home/core/jenkins.

We also created core credentials so that Jenkins could log into the jenkins slave.


You should now be able to save that and test the cloud instance.  Notice here that you can attach to multiple clouds, including something like AWS, Digital Ocean, or others, simply by using more plugins or configuring other credentials.  There’s a great post by Jessie Frazelle of Docker showing how they use Jenkins to test lots of permutations of code builds.  I used to think about all the permutations we would support in our development environments, and there was no way a human could test them all.  A robot like Jenkins, however, could easily do it and keep track of all those permutations of Linux distros, etc.

6.  Add Slack to Jenkins

Whenever something builds, finishes building, has an error, etc, we want the whole team to be alerted.


Slack gives us this power.  We simply plug in the integration token we got from Slack’s integration menus.

7.  Create the Project!

Ok, now that we have configured the global settings of Jenkins, let’s create a project that takes as input a Git push notification and then builds a docker container.

Create a basic Freestyle project:



Now we need to configure it.  We’ll use a lot of the settings we already used from our global configuration.

7.1 Restrict where builds can be done.

By default, builds will want to run on the master Jenkins server.  By using the label we gave in the global settings, we can make sure that whenever this project builds, it has to run on one of our slaves.



7.2. Slack Notifications

If you want slack to notify you on certain conditions, check all that apply!



7.3 Configure Source Code Management


There are some non-intuitive settings here, but this is what worked for me.  I also had a build trigger settings appear as below:


7.4 Build Environment

The build environment is where we specify what type of slave we want to build on.

Here I make sure that each slave is only used once and that I use the correct slave instance.  This is the cool part because each time a job is submitted, this setting will build a new instance on top of Metacloud.

7.5.  Build actions

The last part is what we want the robot to do.  He’ll provision an instance when code is pushed, but what do you want him to do once it’s pushed?  This is where you’d run all your test cases.  Here, though, I’m just going to build a Docker container and put it into my local Docker registry.

This assumes you have a Dockerfile in the code you are testing.  In the advanced settings you can specify a subdirectory, if any, where it lives.  Mine is in the rails/ directory.


Notice that you would use ‘Add build step’ to do more, such as running tests against the container.  I just left it to build the container.  The fun part was making the container as prebuilt as possible (e.g., putting all the gems in first) and then running the build.  Pulling from a local registry instance makes the build go a lot faster than pulling from Docker Hub every time.

When the build is done, I push to the repository and then run the post-build action ‘Update WebServers’.  This is another Jenkins job that simply SSHes to my web servers and swaps out the running container for a new one.


That was a pretty intense ride and I hope I’ve covered most of it.  I demonstrated this at Cisco Live in several sessions that I’ll post on here as they become available.  There are a few things that I think are important to summarize:

1.  I wrote no code to get this environment up and running.  I did use tools for configuration, but most of my work was just to configure and integrate.

2.  Most problems we face have already been solved or examined by very smart people.  So look for existing work instead of trying to reinvent the wheel.

3.  The stack is completely portable to different cloud providers.  I did this on Cisco OpenStack Private Cloud, but it would work just as easily on another OpenStack system; all the API calls are the same.

There were several steps I may have left out as it did require a lot of configuring.  The Ansible Scripts can all be found on my Github account in the Cisco Live 2015 repo.  Please let me know if there are any questions via the comments or via Twitter.

Thanks for reading!!!

Continuous Delivery of a Simple Web Application Tutorial – Part 3

In Part 1 we discussed the architecture of what we’re trying to build.

In Part 2 we created our development server with Ansible.

In Part 3 we will finish off Ansible with creating our Load Balancers and WebServers.  Then we’ll setup Git and check everything in.

In Part 4 we will finish it all off with Jenkins configuration and watch as our code continuously updates.

At this point, all we’ve done is create the development environment; we haven’t configured any of it.  We’re going to take a break from that for a little bit and set up our web servers and load balancers.  This should be pretty straightforward.

Load Balancers

Cisco OpenStack Private Cloud / Metacloud actually comes with several predefined images.  One of those is the MC-VLB, which is a preconfigured server running HAProxy with the Snapt front end.  You can see all the documentation for managing the HAProxy via the GUI using their documentation.

We’re just going to configure it with Ansible.  We’ve created a file in our ansible directory called load-balancers.yml.  This file contains the following:
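A sketch of that playbook, in the nova_compute style of 2015-era Ansible (the variable names, the lb image/flavor IDs, and the remote user are assumptions from my setup; swap in your own):

```yaml
---
# load-balancers.yml (sketch; variable names are illustrative)
- name: launch the load balancer instance
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - vars/metacloud_vars.yml
  tasks:
    - name: boot the MC-VLB image
      nova_compute:
        state: present
        login_username: "{{ os_username }}"
        login_password: "{{ os_password }}"
        login_tenant_name: "{{ os_tenant }}"
        auth_url: "{{ os_auth_url }}"
        name: lb01
        image_id: "{{ lb_image_id }}"      # the MC-VLB image
        flavor_id: "{{ lb_flavor_id }}"    # the flavor made for this image
        key_name: "{{ key_name }}"
        security_groups: "{{ security_group }}"
        wait: "yes"
      register: lb

    - name: add the new instance to an in-memory group
      add_host: name="{{ lb.info.accessIPv4 }}" groups=load-balancer

- name: configure the load balancer
  hosts: load-balancer
  remote_user: root   # whichever user the MC-VLB image expects
  roles:
    - load-balancer
```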

We are using the encrypted vars/metacloud_vars.yml file to pass in the appropriate values.  The flavor-id corresponds to what we saw in the GUI.  It’s actually a flavor size created specifically for this load balancer image.

Once the VM is up, we give it the load-balancer role.  This maps to the roles/load-balancer/tasks/main.yml task file, which looks as follows:
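A minimal version of that task file (a sketch; the handler-free form below just restarts haproxy unconditionally):

```yaml
---
# roles/load-balancer/tasks/main.yml (sketch)
- name: copy our haproxy configuration into place
  copy: src=haproxy.cfg dest=/etc/haproxy/haproxy.cfg

- name: restart haproxy so the new configuration takes effect
  service: name=haproxy state=restarted
```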

Pretty simple: it just copies our config and restarts the load balancers.  This is one case where we’re not using containers in this setup.  We could have created our own image using nginx or even haproxy, but we thought it was worth taking a look at the instance Metacloud provides.

The key to this is the /etc/haproxy/haproxy.cfg file.  This file is as follows:
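Something along these lines (a sketch; the backend IPs are placeholders for the web servers we create later, and the timeouts are arbitrary):

```
global
    daemon
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    # hard-coded web server addresses
    server web01 10.1.0.11:80 check
    server web02 10.1.0.12:80 check
```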

This configuration should highlight one of the glaring problems with our environment.  We’ve put the web servers (which we haven’t even created yet!) in this static file.  What if we want to add more?  What if we get different IP addresses? While this blog won’t go over the solutions, I’d welcome any comments.

Now running:
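Assuming the playbook is saved as load-balancers.yml in our ansible directory:

```shell
ansible-playbook load-balancers.yml
```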

Our load balancers will come online, ready to serve traffic to our web instances.  Let’s create those now.

Web Servers

Calling these instances ‘web servers’ is probably not correct.  They will, in fact, be running Docker containers that have the appropriate web services on them.  These servers will look just like the development server we created in the previous blog.
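A minimal sketch of such a playbook, assuming the same nova_compute style as the development server and a files/web-config.sh user-data script (all names are illustrative):

```yaml
---
# web-servers.yml (sketch)
- name: launch the web servers
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - vars/metacloud_vars.yml
  tasks:
    - name: boot a web instance on CoreOS
      nova_compute:
        state: present
        login_username: "{{ os_username }}"
        login_password: "{{ os_password }}"
        login_tenant_name: "{{ os_tenant }}"
        auth_url: "{{ os_auth_url }}"
        name: "{{ item }}"
        image_id: "{{ image_id }}"
        flavor_id: "{{ flavor_id }}"
        key_name: "{{ key_name }}"
        security_groups: "{{ security_group }}"
        user_data: "{{ lookup('file', 'files/web-config.sh') }}"
        wait: "yes"
      with_items:
        - web01
        - web02
```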

This script should look very similar to what you saw in deploying the development server.  The server boots up and it runs the web-config.sh script.  This script is exactly the same as the one in part 1 except at the very end of it, it brings up the latest application container:
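The tail end of web-config.sh pulls and starts the newest application container; the image name is the one from our local registry, and the port mapping is an assumption:

```shell
# final lines of web-config.sh: start the latest app container
docker pull ci:5000/vallard/lawngnomed:latest
docker run -d --name lawngnomed -p 80:80 ci:5000/vallard/lawngnomed:latest
```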

This is nice because it’s completely automated.  The server comes up and the latest web service starts.  We could remove the instance and create a new one, and it would get the latest.

As long as there is one server up, our website lawngnomed.com will stay up.  By putting Java on it, Jenkins can use it to run commands, and by putting Ansible on it we can configure it if we need to.

Since you haven’t created the ci:5000/vallard/lawngnomed:latest Docker image, yours probably won’t work.  But you could point it at a Docker Hub image instead to make sure it pulls something and starts running.

Let’s bring up the web servers in their current state:
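Assuming the playbook is saved as web-servers.yml:

```shell
ansible-playbook web-servers.yml
```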

Taking Stock

At this point we have accomplished three things:

  1. Development server with all the services installed
  2. Load balancers up and pointing to the web servers
  3. Web servers ready, but with no images to run yet

Our next step is to start configuring all of those services.  This is where our Ansible work ends: we are using it solely for creating the environment.

Gitlab configuration

Navigating to our public IP address on port 10080 (or to the hostname, if you set up DNS and are using the nginx reverse proxy), we can now see the login screen.  The default root password is 5iveL!fe.  We are using some of the great containers built by Sameer Naik.


We will be forced to create a new password.


Then we need to lock things down.  Since we don’t want just everyone to sign up we can go to the settings page (click the gears in the top right side) and disable some things:


From here we can add users by clicking the ‘Users’ item in the left sidebar.


I created my vallard user and that is where I’ll upload my code.  Log out as root (unless you need to add more users) and log in with your main account.

The first thing you’ll want to do is create a project.  You may want to create two projects.  One for infrastructure as code (the Ansible scripts we’ve done) and another for the actual application.  Clicking on the cat icon in the top left side will take you to the dashboard.  From there you can create the new projects.  Once you create them you are given instructions on how to set up a git environment.  They look like this:
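They are the standard Gitlab new-project instructions; with my vallard user and a project named lawngnomed they come out roughly as follows (the hostname is whatever DNS name or IP your Gitlab answers on):

```shell
git init
git remote add origin git@gitlab.lawngnomed.com:vallard/lawngnomed.git
git add .
git commit -m "initial commit"
git push -u origin master
```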

The first problem you will have if you do this is that you haven’t put your ssh key into Gitlab.  Click on the profile settings icon in the top right (the little person) and click on SSH keys.  Here you can upload your own.

Protip:  run the following command to copy the public key on your mac to the paste buffer:
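Assuming your key lives in the default location:

```shell
pbcopy < ~/.ssh/id_rsa.pub
```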

Entering this in the screen should then allow you to do your first git push.


The Jenkins User

At this point you may want to decide whether or not to create a Jenkins user to do the continuous integration.  We created a Jenkins user and gave that user its own SSH key as well as a login to the Cisco OpenStack dashboard.  Since we created this new user, we also created a keypair for him so that he could get into the instances he created.  Copy the jenkins ssh key-pair to a safe place as we’ll be using it soon.  Add the Jenkins user to your project so that he can check out the code and see it.

End of Part 3

If you got to this part, hopefully you have pushed your Ansible code we created into Gitlab.  You also may have created a Jenkins user that can be used for our continuous integration.  Please let me know if you had any issues, comments, suggestions, or questions along the way.  I want to help.

In the final part (Part 4) we will go over configuring Jenkins and integrating it with Gitlab.  Then we will create some tasks that run automatically to test our systems.


Continuous Delivery of a Simple Web Application Tutorial – Part 2

In Part 1 we gave the general outline of what we are trying to do, the tools we’re using, and the architecture of the application.

In this part (Part 2) we’re going to build the development environment with Ansible.  This includes Jenkins, Gitlab, a private Docker registry, and a proxy server so we can point DNS at the site.

In Part 3 we configure the load balancers and the web site.

In Part 4 we configure Jenkins and put it all together.

The Ansible code can be found on Github in the Cisco Live 2015 repo.

Get the CoreOS Image

We’ll need an image to work with.  While we could do this on the command line, it’s not something we’re going to repeat too often, so I think we’re OK doing this the lame way and using the GUI.

A simple search takes us to the OpenStack page on the CoreOS site.  I just used the current stable version.  It’s pretty simple; you follow their instructions:
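Per the CoreOS OpenStack docs, that boils down to downloading and unpacking the current stable image:

```shell
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
bunzip2 coreos_production_openstack_image.img.bz2
```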

I downloaded this to a Linux server that was on the Internet.  From there, I went into the Cisco OpenStack Private Cloud dashboard and under Images created a new one.


You can also do this through the command line just to make sure you’re still connecting correctly:
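Something like the following (the image name is your choice):

```shell
glance image-create --name CoreOS \
  --container-format bare --disk-format qcow2 \
  --file coreos_production_openstack_image.img \
  --is-public True
```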

Ok, now we have a base image to work with.  Let’s start automating this.

Ansible Base Setup


I’ve set up a directory called ~/Code/lawngnomed/ansible where all my Ansible configuration will live.  I’ve spoken about setting up Ansible before, so in this post we’ll just go over the things that are unique.  The first thing we need to do is set up our development environment.  Here’s the Ansible script for creating the development node, which I gave the hostname ‘ci’:
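A sketch of that playbook (the nova_compute option names are from the Ansible 1.x era we were using and are assumptions; the user_data lookup hands CoreOS our cloud-config.sh on boot):

```yaml
---
# ci.yml (sketch)
- name: launch the ci (development) server
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - vars/metacloud_vars.yml
  tasks:
    - name: boot the ci instance on CoreOS
      nova_compute:
        state: present
        login_username: "{{ os_username }}"
        login_password: "{{ os_password }}"
        login_tenant_name: "{{ os_tenant }}"
        auth_url: "{{ os_auth_url }}"
        name: ci
        image_id: "{{ image_id }}"
        flavor_id: "{{ flavor_id }}"
        key_name: "{{ key_name }}"
        security_groups: "{{ security_group }}"
        user_data: "{{ lookup('file', 'files/cloud-config.sh') }}"
        wait: "yes"
      register: ci

    - name: add ci to the in-memory inventory
      add_host: name="{{ ci.info.accessIPv4 }}" groups=ci

- name: configure the development services
  hosts: ci
  remote_user: core
  roles:
    - registry
    - gitlab
    - jenkins
    - ci-proxy
```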

This playbook does the following:

  1. Creates a new server called ‘ci’
    1. ci will use a security group I already created
    2. ci will use a key-pair I already created.
    3. ci will use the cloud-config.sh script I created as part of the boot up.
  2. Once the node is created, applies the following roles to it: registry, gitlab, jenkins, and ci-proxy

The metacloud_vars.yml file contains most of the environment variables.  Here is the file so you can see it.  Replace this with your own:
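A scrubbed version; every value below is a placeholder, and the credentials come in from the environment (as set up in ~/.bash_profile):

```yaml
---
# vars/metacloud_vars.yml (sketch; replace every value with your own)
os_username: "{{ lookup('env', 'OS_USERNAME') }}"
os_password: "{{ lookup('env', 'OS_PASSWORD') }}"
os_tenant: "{{ lookup('env', 'OS_TENANT_NAME') }}"
os_auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
image_id: "<your CoreOS image UUID>"
flavor_id: "<your flavor ID>"
key_name: "<your keypair name>"
security_group: "default,web"
gitlab_db_password: "<vaulted secret>"
```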

You can see I used a few images as I tried this out and eventually settled on the same CoreOS image that my Jenkins slaves run on.  We’ll get to that soon.

You’ll need to create a security group so that all the services can be accessed.  My security group looked as follows:


The other security group allows ports 80 and 22 so I can ssh in and get to the web browser.

The next important file is the files/cloud-config.sh script.  With this script I needed to accomplish three things:

  1. Get Java on the instance so that Jenkins could communicate with it.
  2. Get Python on the instance so Ansible could run on it.
  3. Make it so docker would be able to communicate with an insecure registry.

CoreOS by itself tries to be as bare as it gets so after trolling the Internet for a few days I finally cobbled this script together that would do the job.
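A sketch of that script.  The JRE and PyPy download URLs are placeholders; CoreOS has no package manager, so both get unpacked under /opt, and the drop-in at the end is the documented CoreOS way to pass --insecure-registry to docker:

```shell
#!/bin/bash
# cloud-config.sh (sketch; versions, paths, and URLs are illustrative)

# 1. Java, so Jenkins can talk to the node
mkdir -p /opt/java /opt/bin
curl -L -o /tmp/jre.tar.gz "<your JRE tarball URL>"
tar xzf /tmp/jre.tar.gz -C /opt/java --strip-components=1
ln -sf /opt/java/bin/java /opt/bin/java

# 2. Python, so Ansible can manage the node (PyPy is a common trick on CoreOS)
mkdir -p /opt/pypy
curl -L "<your PyPy tarball URL>" | tar xjf - -C /opt/pypy --strip-components=1
ln -sf /opt/pypy/bin/pypy /opt/bin/python

# 3. Let docker talk to our insecure local registry
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/50-insecure-registry.conf <<'EOF'
[Service]
Environment='DOCKER_OPTS=--insecure-registry="ci:5000"'
EOF
systemctl daemon-reload
systemctl restart docker
```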


A few directories with files were created:
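The layout ended up roughly like this (only the four role names are certain; the rest is reconstructed):

```
ansible/
├── ci.yml
├── vars/metacloud_vars.yml
├── files/cloud-config.sh
└── roles/
    ├── ci-proxy/
    │   ├── files/default.conf
    │   └── tasks/main.yml
    ├── gitlab/tasks/main.yml
    ├── jenkins/tasks/main.yml
    └── registry/tasks/main.yml
```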

Let’s go through each task:


This role creates a Docker container that acts as the reverse proxy, so that when a request like http://jenkins.lawngnomed.com comes in, the proxy redirects it to the right container.

The task file below copies the nginx configuration file and then mounts it into the container.  Then it runs the container.
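A sketch of that task file, using the old Ansible docker module (the paths are from my setup):

```yaml
---
# roles/ci-proxy/tasks/main.yml (sketch)
- name: copy the nginx reverse-proxy configuration onto the volume
  copy: src=default.conf dest=/vol/nginx/default.conf

- name: run nginx with the configuration mounted into the container
  docker:
    name: nginx
    image: nginx
    state: started
    ports: "80:80"
    volumes: "/vol/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro"
```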

The contents of the nginx default config file that will run in /etc/nginx/conf.d/default.conf is the following:
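Roughly as follows.  This is a sketch: the Jenkins port is an assumption, Gitlab’s 10080 matches the port we publish, and 172.17.42.1 is the docker0 gateway address of that era, which lets the nginx container reach ports published on the host:

```
# /etc/nginx/conf.d/default.conf (sketch)
server {
    listen 80;
    server_name jenkins.lawngnomed.com;
    location / {
        proxy_pass http://172.17.42.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name gitlab.lawngnomed.com;
    location / {
        proxy_pass http://172.17.42.1:10080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```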

There could be some issues with this file, but it seems to work.  There are occasions when jenkins and gitlab redirect to bad urls, but everything works with this configuration.  I’m open to any ideas to changing it.

Once this role is up you can access the URL from the outside.


Gitlab requires a Redis container for its key-value store and a PostgreSQL database.  We use Docker for both of these and link them together.  The Ansible playbook file looks as follows:
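A sketch using Sameer’s images (the tags and environment variables are abbreviated; see his README for the full set):

```yaml
---
# roles/gitlab/tasks/main.yml (sketch)
- name: start redis
  docker:
    name: gitlab-redis
    image: sameersbn/redis:latest
    state: started

- name: start postgresql with its data on the persistent volume
  docker:
    name: gitlab-postgresql
    image: sameersbn/postgresql:latest
    state: started
    volumes: "/vol/gitlab/postgresql:/var/lib/postgresql"
    env:
      DB_NAME: gitlabhq_production
      DB_USER: gitlab
      DB_PASS: "{{ gitlab_db_password }}"

- name: start gitlab linked to both
  docker:
    name: gitlab
    image: sameersbn/gitlab:latest
    state: started
    ports: "10080:80,10022:22"
    links: "gitlab-postgresql:postgresql,gitlab-redis:redisio"
    volumes: "/vol/gitlab/gitlab:/home/git/data"
    env:
      DB_PASS: "{{ gitlab_db_password }}"
```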

Notice that the gitlab_db_password is an environment variable created in the ../var/main.yml file.  I set this up and then encrypted the file using Ansible Vault.  See my post on how that is accomplished, because it’s a pretty cool technique I learned from our Portland Ansible Users Group.


The Jenkins Ansible installation script is pretty straightforward.  The only catch is to make sure the directory owner is jenkins and that you mount the directory.
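A sketch of the task.  The official jenkins image runs as uid 1000, so on CoreOS (which has no jenkins user) the mounted directory gets that numeric owner:

```yaml
---
# roles/jenkins/tasks/main.yml (sketch)
- name: make sure the jenkins home on the volume is owned by the jenkins uid
  file: path=/vol/jenkins state=directory owner=1000 group=1000 recurse=yes

- name: run the jenkins container with its home directory mounted
  docker:
    name: jenkins
    image: jenkins
    state: started
    ports: "8080:8080"
    volumes: "/vol/jenkins:/var/jenkins_home"
```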


No tricks here; we’re just using the latest registry image.  This goes out and pulls it from Docker Hub.
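A sketch of the registry task (this is the old v1 registry, which keeps its data in /tmp/registry inside the container):

```yaml
---
# roles/registry/tasks/main.yml (sketch)
- name: run the local docker registry
  docker:
    name: registry
    image: registry:latest
    state: started
    ports: "5000:5000"
    volumes: "/vol/registry:/tmp/registry"
```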

Loose Ends

There are a few parts that I didn’t automate that should be done.

  1. The instance I created mounts a persistent storage device that I created in Metacloud.  There are two pieces missing:
    1. It doesn’t create the volume in OpenStack if it’s not there yet.
    2. It doesn’t mount the volume onto the development server.
  2. For speed, it’s better to pull the Docker containers from the local registry.  So technically we should tag all the images we’re using and put them in the local registry.  This is a chicken-and-egg problem because you need the registry up before you can download images from it.  So I left it that way.
  3. There are still some things I needed to finish, like putting some keys and other items Jenkins needs into the /vol directory.  It’s not perfect, but it’s pretty good.

Creating the volume and mounting was pretty quick once the image was up. First I created the volume and assigned it using the Horizon Dashboard that Metacloud provides.



This was just a 20GB volume.  Once the instance was up I ran a few commands like:
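Roughly the following (the device name depends on how the volume was attached):

```shell
# format the attached volume once, then mount it where the containers expect it
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /vol
sudo mount /dev/vdb /vol
```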

This way all of our information persists if the containers terminate and if the instances terminate.

Finishing up the Development Server

Once you get to this point, you should be able to bring it all up with:
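Assuming the playbook is saved as ci.yml:

```shell
ansible-playbook ci.yml
```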

That should do it!  Once you are in you may want to tag all of your images so that they load in the local docker registry.  For example, once you log in you could run:
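For example (the image list is illustrative):

```shell
docker pull nginx
docker tag nginx ci:5000/nginx
docker push ci:5000/nginx
# repeat for jenkins, registry, sameersbn/gitlab, and friends
```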

At this point the idea is that you should be able to go to whatever public IP address was assigned to you and access the Gitlab, Jenkins, and registry services.


If you’re there then you can get rolling to the next step:  Ansible scripts to deploy the rest of the environment.

In Part 3 we’ll cover Ansible for bringing up the load balancers and web servers. We’ll also snapshot an image to make it a jenkins slave.

Continuous Delivery of a Simple Web Application Tutorial – Part 1

This will be the first of a series of posts showing how we can do continuous delivery of a simple web application.  This will be written tutorial style to show all the different components used.  We are using Open Source tools on Cisco OpenStack private cloud, but the majority of the instructions here could be used in any cloud.  This first part in the series is going to introduce the architecture and the components.

In Part 2 we configure the build environment with Ansible.

In Part 3 we configure the load balancers and the web site.

In Part 4 we configure Jenkins and put it all together.

Code for this can be found on Github in my Cisco Live 2015 repo.

If you want to see the end result of what we’re building, check out this video


A New Startup!


Our startup is called Lawn Gnomed.  We specialize in lawn gnomes as a service.  Basically, suppose your friend has a birthday.  You engage with our website and on the day of their birthday they wake up and there are 50 lawn gnomes on their front yard with a nice banner that says: “Happy Birthday you old man!”.  We set up the gnomes, we take them down.  Just decorations.

The Requirements & the Stack

We need to react fast to changes in our business model.  Right now we’re targeting birthdays, but what if we want to target pool parties?  What about updates to our site for Fathers’ Day or Mothers’ Day?  Instead of listening to the HiPPO (the Highest Paid Person’s Opinion), we need to be able to react quicker.  Try things out fast; if we’re wrong, change, and do it fast.


1.  We have a private cloud.  We are part of a large corporation already.  This app fits under the category of “System of Innovation” as defined by Gartner.  We’re going to develop this on our Private Cloud.  In this case Cisco OpenStack Private Cloud (formerly Metacloud) fits the bill for this nicely.

2.  Our executives think we should leverage as much internally as possible.  Our goals are to keep things all in our private data center.  Most of these tools could use services and cloud based tools instead, but there are already plenty of tutorials out there for those types of environments.  Instead, we’re going to focus on keeping everything in house.

3.  We want to use containers for everything and keep everything ephemeral.  We should be able to spin this environment up as quickly as possible in other clouds if we decide to change.  So we are avoiding lock-in as much as possible.  This may be a bad idea, as some argue, but this is the choice we are going with.

The Stack

So here is our stack we’re developing with:

  • OpenStack.  In this example we are using Cisco OpenStack Private Cloud (formerly Metacloud) but we may instead decide that we want to do this on a different cloud platform, like Digital Ocean, AWS, or Azure.
  • CoreOS.  CoreOS is a popular lightweight operating system that works great for running containers.
  • Docker.  Our applications will be delivered and bundled in Docker Containers.  They are quick and fast.
  • Gitlab.  We are using the free, open source alternative to Github.  We will be using Gitlab to check in all of our code.
  • Jenkins.  Our Continuous Integration service will be able to listen to Gitlab (or Github if you used that) and not only do our automated test cases when new changes are pushed, but will also be able to update our live servers.
  • Slack.  This is the only service we don’t host in house.  Slack allows our team to be alerted anytime there is a new build or if something fails.
  • Ansible.  We are using Ansible to deploy our entire environment.  Nothing should have to be done manually (where possible) if we can automate all the things.  We’ve mostly followed that in this article, but there are some hands on places that we feel are ok for now, but can automate later.

In this series, we will not be concentrating so much on what the app does nor the database structure, but in an effort to be complete, we will add that for now we are using a simple Ruby on Rails application that uses BootStrap with a MariaDB backend.

The Application Stack

The application will be a set of scalable web services behind a pair of load balancers.  Those in turn will talk to another set of load balancers that will house our database cluster.

The diagram below gives a high level view of our application.

  • Blue circles represent the instances that are running in our cloud.
  • Red circles represent the containers
  • Green circles represent the mounted volumes that are persistent even when containers or instances go away.

We will probably add multiple containers and volumes to each instance, but for simplicity we show it running this way.

We have several choices on Metacloud as to where we put the components.  Cisco OpenStack Private Cloud has the concept of Availability Zones, which are analogous to AWS Regions.  If we were to do A/B testing, we could put several components inside a different availability zone or a different project.  Similarly, we could put the database portion inside its own project, or separate projects, depending on what types of experiments we are looking to run.


Diving in a little deeper we can make each service a project.  In this case the application could be a project and the database could be a separate project within each AZ.


Autoscaling the Stack

Cisco OpenStack Private Cloud does not come with an autoscaling solution.  Since Ceilometer is not part of the solution today, we can’t use it to determine load.  We can, however, use third-party cloud management tools like those from Scalr or RightScale.  These communicate with Cisco OpenStack Private Cloud via the APIs as well as agents installed on the running instances.

There is also the ability to run a poor man’s autoscaling system cobbled together with something like Nagios and scripts that:

  1. Add or remove instances from a load balancer
  2. Monitor the CPU, memory, or other components on a system

Anti-Affinity Services

We would like the instances to run on separate physical hosts to increase stability.  Since the major update in the April release, we have the ability to add anti-affinity rules to accomplish this.
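With the nova CLI, that means creating a server group with the anti-affinity policy and booting each web server into it (the image, flavor, and group UUID are placeholders):

```shell
# create a server group whose policy keeps members on separate hosts
nova server-group-create web-group anti-affinity

# boot each web server into that group (use the UUID printed above)
nova boot --image <coreos-image> --flavor <flavor-id> \
  --hint group=<server-group-uuid> web01
nova boot --image <coreos-image> --flavor <flavor-id> \
  --hint group=<server-group-uuid> web02
```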

This rule will launch web01 and web02 on different physical servers.  We mention this now as we won’t be going over it in the rest of the articles.

Logging and Analytics

Something we’ll be going over in a future post (I hope!) is how to log all the activity that happens in this system.  This would include a logging system like Logstash that would consolidate every click and put it into a place where we can run analytics applications.  From this we can determine what paths our users are taking when they look at our website.  We could also analyze where our users come from (geographically) and what times our web traffic gets hit the hardest.

Cisco OpenStack Private Cloud allows us to carve up our hypervisors into aggregates.  An aggregate is a collection of nodes that may be dedicated to one or more projects.  In this case, it could be Hadoop.


The blue arrow denotes the collection of servers we use for our analytics.

Continuous Delivery Environment

A simple view of our Continuous Delivery environment is shown below

Let’s go over the steps at a high level.

  1. A developer updates code and pushes it to Gitlab.  This Gitlab server is the place where all of our code resides.
  2.  When Gitlab sees that new code has been received, it notifies Jenkins.  Gitlab also notifies Slack (and thus all the slackers) that there was a code commit.
  3. Jenkins takes the code, merges it, and then begins the tests.  Jenkins also notifies all the slackers that this is going to happen.
  4.  As part of the build process, Jenkins creates new instances on Cisco OpenStack Private Cloud / Metacloud.  Here’s what the instances do when they boot up:
    1. Download the code from gitlab that was just checked in.
    2. Perform a ‘Docker build’ to build a new container.
    3. Run test cases on the container.
  5. If the tests are successful, the container is pushed to a local Docker Registry where it is now ready for production.  Slack is notified that new containers are ready for production.
  6.  A second Jenkins job has been configured to automatically go into each of our existing web hosts, download the new containers, put them into production, and remove the old ones.  This only happens if a new build passed.

This whole process in my test environment takes about 5 minutes.  If we were to run further test cases it could take longer, but this illustrates the pipeline well.

The Build Environment


Our build environment is pretty simple.  It consists of a single instance with a mounted volume.  On this instance we are running 4 containers:

  1. NGINX.  This does our reverse proxying so that subdomains can be hit.
  2. Jenkins.  This is the star of our show that runs the builds and puts things into production.
  3. Registry.  This is a local docker registry.  We’re using the older one here.
  4. Gitlab.  This is where we put all our code!

This shows the power of running containers.  Some of these services need their own databases and redis caches.  Putting that all on a single machine and coordinating dependencies is crazy.  By using containers we can pile them up where we need them.

The other thing to note is that all of the instances we create in OpenStack are the same type: CoreOS 633.1.0 right now.

Getting Started

The last piece of this first part is that we’ll need to gain access to our cloud.  Not just GUI access but command line access so that we can interface with the APIs.


Once you login to your project you can go to the Access & Security menu and select API Access.  From there you can download the OpenStack RC file.

Test it out with the nova commands:
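Source the RC file you downloaded, then list what’s there:

```shell
source ./openrc.sh    # enter your password when prompted
nova list
nova image-list
```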

While you may not see all the instances that I’m showing here, you should at least see some output that shows things are successful.

What’s Next

The next sections will be more hands on.  Refer back to this section for any questions as to what the different components do.  The next section will talk about:

Part 2: Setting up the Development machine.  Ansible, CoreOS, Docker, Jenkins, and all the other hotness.

  • Getting Image for CoreOS
  • Ansible Configuration
  • Cloud Init to boot up our instances
  • Deploying load balancers
  • Deploying web servers
  • Deploying Jenkins Slaves.


Debug CoreOS Cloud Init Script

When I log in to an instance after a cloud-init script has failed, the login prompt greets me with the following:
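Something like this in the login banner (the failed unit name varies by platform; the CoreOS version is from our base image):

```
CoreOS stable (633.1.0)
Failed Units: 1
  oem-cloudinit.service
```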

So to figure out where my script went wrong, I simply have to run the command:
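It’s journalctl (or systemctl status) against the unit named in the banner:

```shell
systemctl status oem-cloudinit.service          # quick summary of the failure
journalctl -u oem-cloudinit.service --no-pager  # full log from the boot script
```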

Running that against my failed unit showed the problem right away.  It turns out I put an extra ssh-keyscan in the script and cloud-init thought it was the hostname!

Removing the extra characters worked.