Continuous Delivery of a Simple Web Application Tutorial – Part 3

In Part 1 we discussed the architecture of what we’re trying to build.

In Part 2 we created our development server with Ansible.

In Part 3 we will finish off the Ansible work by creating our load balancers and web servers.  Then we'll set up Git and check everything in.

In Part 4 we will finish it all off with Jenkins configuration and watch as our code continuously updates.

At this point, all we've done is create the development environment, but we haven't configured any of it.  We're going to take a break from that for a little bit and set up our web servers and load balancers.  This should be pretty straightforward.

Load Balancers

Cisco OpenStack Private Cloud / Metacloud actually comes with several predefined images.  One of those is the MC-VLB, a preconfigured server running HAProxy with the Snapt front end.  Their documentation covers managing the HAProxy through the GUI.

We’re just going to configure it with Ansible.  We’ve created a file in our ansible directory called load-balancers.yml.  This file contains the following:
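In outline it's something like this (the instance names, group names, and variable names below are placeholders, not the exact file):

    ---
    - name: launch the load balancer instances
      hosts: localhost
      connection: local
      vars_files:
        - vars/metacloud_vars.yml
      tasks:
        - name: boot a pair of MC-VLB instances
          nova_compute:
            name: "{{ item }}"
            state: present
            image_id: "{{ lb_image }}"        # the MC-VLB image
            flavor_id: "{{ lb_flavor }}"      # the flavor made for this image
            key_name: "{{ key_name }}"
            security_groups: "{{ security_group }}"
            wait: yes
          with_items:
            - lb01
            - lb02

    # (the real playbook also registers the new hosts, e.g. with add_host,
    #  so the next play can reach them)
    - name: configure the load balancers
      hosts: load-balancers
      roles:
        - load-balancer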

We are using the encrypted vars/metacloud_vars.yml file to pass in the appropriate values.  The flavor ID corresponds to what we saw in the GUI.  It's actually a flavor size created specifically for this load balancer image.

Once the VM is up, we give it the load-balancer role.  This maps to the roles/load-balancer/tasks/main.yml file, which looks as follows:
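Something along these lines does the trick (a sketch, not the exact file):

    ---
    # roles/load-balancer/tasks/main.yml
    - name: copy our haproxy configuration to the load balancer
      copy: src=haproxy.cfg dest=/etc/haproxy/haproxy.cfg

    - name: restart haproxy so the new config takes effect
      service: name=haproxy state=restarted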

Pretty simple: it just copies our config and restarts the load balancers.  This is one case where we're not using containers.  We could have created our own image using nginx or even haproxy to do the same job, but we thought it was worth taking a look at the instance Metacloud provides.

The key to this is the /etc/haproxy/haproxy.cfg file.  This file is as follows:
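It's a fairly standard HAProxy setup; roughly (the addresses below are placeholders):

    global
        daemon
        maxconn 4096

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend webservers

    backend webservers
        balance roundrobin
        # the statically listed web servers -- see the problem described below
        server web01 10.x.x.1:80 check
        server web02 10.x.x.2:80 check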

This configuration should highlight one of the glaring problems with our environment.  We’ve put the web servers (which we haven’t even created yet!) in this static file.  What if we want to add more?  What if we get different IP addresses? While this blog won’t go over the solutions, I’d welcome any comments.

Now running:
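(Something like the following, given the file name above; --ask-vault-pass because the vars file is encrypted.)

    ansible-playbook load-balancers.yml --ask-vault-pass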

Our load balancers will come online and be ready to serve traffic to our web instances.  Let's create those now.

Web Servers

Calling these instances 'web servers' is probably not correct.  They will, in fact, be running Docker containers that have the appropriate web services on them.  These servers will look just like the development server we created in the previous post.

This script should look very similar to what you saw when deploying the development server.  The server boots up and runs the script.  The script is exactly the same as the one we used earlier, except that at the very end it brings up the latest application container:
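The tail end of the boot script is just a pull and run against our private registry; roughly (the published port and container name are assumptions):

    # last lines of the boot script: grab and start the newest app container
    docker pull ci:5000/vallard/lawngnomed:latest
    docker run -d --name lawngnomed -p 80:80 ci:5000/vallard/lawngnomed:latest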

This is nice because it's completely automated.  The server comes up and the latest web service starts.  We could remove the instance, create a new one, and it would get the latest.

As long as there is one server up, our website will stay up.  By putting Java on the instance we let Jenkins use it to run commands, and by putting Python on it we can configure it with Ansible if we need to.

Since you haven't created the ci:5000/vallard/lawngnomed:latest Docker image, yours probably won't work yet.  You could point it at a Docker Hub image instead to make sure it pulls something and starts running.

Let’s bring up the web servers in their current state:
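(The playbook name here is a placeholder; use whatever you called yours.)

    ansible-playbook web-servers.yml --ask-vault-pass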

Taking Stock

At this point we have accomplished 3 things:

  1. Development server with all of its services installed
  2. Load balancers up and pointing to the web servers
  3. Web servers ready, but with no images to run yet

Our next step is to start configuring all of those services.  This is where our Ansible work ends: we are using it solely for creating the environment.

Gitlab configuration

Navigating to our public IP address on port 10080 (or to the hostname, if you set up DNS and are using the nginx reverse proxy), we can now see the login screen.  The default root password is 5iveL!fe.  We are using some of the great containers built by Sameer Naik.


We will be forced to create a new password.


Then we need to lock things down.  Since we don't want just anyone to sign up, we can go to the settings page (click the gears in the top right) and disable a few things:


From here we can add users by clicking the ‘Users’ item in the left sidebar.


I created my vallard user and that is where I’ll upload my code.  Log out as root (unless you need to add more users) and log in with your main account.

The first thing you'll want to do is create a project.  You may want to create two: one for infrastructure as code (the Ansible scripts we've written) and another for the actual application.  Clicking on the cat icon in the top left takes you to the dashboard, and from there you can create the new projects.  Once you create them you are given instructions on how to set up a git environment.  They look like this:
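They are the standard Gitlab instructions for pushing an existing folder; roughly (the host and project path are placeholders for your own):

    cd existing_folder
    git init
    git remote add origin git@<your-gitlab-host>:vallard/lawngnomed.git
    git add .
    git commit -m "Initial commit"
    git push -u origin master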

The first problem you will have if you do this is that you haven’t put your ssh key into Gitlab.  Click on the profile settings icon in the top right (the little person) and click on SSH keys.  Here you can upload your own.

Protip: run the following command to copy your public key on your Mac to the paste buffer:
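(Assuming your public key is the default ~/.ssh/id_rsa.pub.)

    pbcopy < ~/.ssh/id_rsa.pub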

Pasting the key into that screen should then allow you to do your first git push.


The Jenkins User

At this point you may want to decide whether or not to create a Jenkins user to do the continuous integration.  We created a Jenkins user and gave that user its own SSH key as well as a login to the Cisco OpenStack dashboard.  Since we created this new user, we also created a keypair so it can get into the instances it creates.  Copy the Jenkins SSH keypair to a safe place, as we'll be using it soon.  Add the Jenkins user to your project so it can check out and see the code.

End of Part 3

If you got to this point, hopefully you have pushed the Ansible code we created into Gitlab.  You may also have created a Jenkins user that can be used for our continuous integration.  Please let me know if you had any issues, comments, suggestions, or questions along the way.  I want to help.

In Part 4, the final installment, we will go over configuring Jenkins and integrating it with Gitlab.  Then we will create some jobs that run automatically to test our systems.


Continuous Delivery of a Simple Web Application Tutorial – Part 2

In Part 1 we gave the general outline of what we are trying to do, the tools we’re using, and the architecture of the application.

In this part (Part 2) we're going to work on building the development environment with Ansible.  This includes Jenkins, Gitlab, a private Docker registry, and a proxy server so we can point DNS at the site.

In Part 3 we configure the load balancers and the web site.

In Part 4 we configure Jenkins and put it all together.

The Ansible code can be found on Github in the Cisco Live 2015 repo.

Get the CoreOS Image

We'll need an image to work with.  While we could do this on the command line, it's not something we're going to repeat too often, so I think we're OK doing this the lame way and using the GUI.

A simple search takes us to the OpenStack page on the CoreOS site.  I just used the current stable version.  It's pretty simple; you follow their instructions:
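At the time that boiled down to downloading and unpacking the OpenStack image they publish (URL as documented by CoreOS then; double-check it against their current docs):

    wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
    bunzip2 coreos_production_openstack_image.img.bz2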

I downloaded this to a Linux server that was on the Internet.  From there, I went into the Cisco OpenStack Private Cloud dashboard and, under Images, created a new one.


You can also do this through the command line just to make sure you’re still connecting correctly:
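With the RC file sourced, something like the following glance command (the v1 CLI of that era) uploads the image:

    glance image-create --name "CoreOS" \
      --container-format bare --disk-format qcow2 \
      --is-public True --file coreos_production_openstack_image.img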

Ok, now we have a base image to work with.  Let’s start automating this.

Ansible Base Setup


I've set up a directory called ~/Code/lawngnomed/ansible where all my Ansible configurations will live.  I've written about setting up Ansible before, so in this post we'll just go over the things that are unique.  The first thing we need to do is set up our development environment.  Here's the Ansible script for creating the development node, which I gave the hostname ci:
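In outline it looks something like this (file names, variable names, and the second-play wiring are placeholders, not the exact script):

    ---
    - name: create the development server
      hosts: localhost
      connection: local
      vars_files:
        - vars/metacloud_vars.yml
      tasks:
        - name: boot the ci instance
          nova_compute:
            name: ci
            state: present
            image_id: "{{ coreos_image }}"
            flavor_id: "{{ flavor }}"
            key_name: "{{ key_name }}"
            security_groups: "{{ security_group }}"
            user_data: "{{ lookup('file', 'files/cloud-config.yaml') }}"
            wait: yes

    # (assumes the new instance has been added to inventory, e.g. via add_host)
    - name: configure the ci services
      hosts: ci
      roles:
        - registry
        - gitlab
        - jenkins
        - ci-proxy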

This playbook does the following:

  1. Creates a new server called 'ci'
    1. ci will use a security group I already created
    2. ci will use a keypair I already created
    3. ci will use the script I created as part of the boot up
  2. Once the node is created, applies the following roles to it: registry, gitlab, jenkins, and ci-proxy

The metacloud_vars.yml file contains most of the environment variables.  Here is the file so you can see it.  Replace this with your own:
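Roughly, it holds values like these (all placeholders; keep it encrypted with ansible-vault since it describes your environment):

    ---
    # vars/metacloud_vars.yml
    coreos_image: "<the CoreOS image ID from the dashboard>"
    # ubuntu_image: "<an earlier image I experimented with>"
    lb_image: "<the MC-VLB image ID>"
    flavor: "<flavor ID for the CoreOS instances>"
    lb_flavor: "<flavor ID created for the load balancer image>"
    key_name: "<your keypair name>"
    security_group: "<your security group>"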

You can see I used a few images as I tried this out and eventually settled on the same CoreOS image that my Jenkins slaves run on.  We'll get to that soon.

You'll need to create a security group so that all the services can be accessed.  My security group looked as follows:


The other security group allows ports 80 and 22 so I can SSH in and reach it from a web browser.

The next important file is the script in the files/ directory.  With this script I needed to accomplish three things:

  1. Get Java on the instance so that Jenkins could communicate with it.
  2. Get Python on the instance so Ansible could run on it.
  3. Make it so docker would be able to communicate with an insecure registry.

CoreOS by itself tries to be as bare as it gets, so after trolling the Internet for a few days I finally cobbled together a script that would do the job.


A few directories with files were created:
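The layout ends up roughly like this (a sketch; the file names are illustrative):

    ansible/
    ├── ci.yml                     # the development-server playbook above
    ├── vars/
    │   └── metacloud_vars.yml     # vault-encrypted environment values
    ├── files/                     # the CoreOS boot/cloud-config script
    └── roles/
        ├── ci-proxy/tasks/main.yml
        ├── gitlab/
        │   ├── tasks/main.yml
        │   └── vars/main.yml      # holds gitlab_db_password (vault-encrypted)
        ├── jenkins/tasks/main.yml
        └── registry/tasks/main.yml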

Let’s go through each task:


ci-proxy

This role creates a Docker container that acts as the reverse proxy, so that when requests for the different subdomains come in, the proxy redirects each request to the right container.

The task file below copies the nginx configuration file and then mounts it into the container.  Then it runs the container.
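A sketch of the task (module parameters are from the Ansible docker module of that era; the paths are placeholders):

    ---
    # roles/ci-proxy/tasks/main.yml
    - name: copy the nginx reverse proxy configuration to the host
      copy: src=default.conf dest=/home/core/nginx/default.conf

    - name: run the nginx container with the config mounted in
      docker:
        name: ci-proxy
        image: nginx
        state: started
        ports:
          - "80:80"
        volumes:
          - /home/core/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro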

The contents of the nginx default config file that will run in /etc/nginx/conf.d/default.conf is the following:
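Roughly (the hostnames and upstream address are examples; use your own DNS names and the development server's address):

    server {
        listen 80;
        server_name git.example.com;

        location / {
            proxy_pass http://<dev-server-address>:10080;   # Gitlab's published port
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    # ...plus similar server blocks for Jenkins (8080) and the registry (5000)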

There could be some issues with this file, but it seems to work.  There are occasions when Jenkins and Gitlab redirect to bad URLs, but everything works with this configuration.  I'm open to any ideas for changing it.

Once this role is up you can access the URL from the outside.


gitlab

Gitlab requires a Redis container for its key-value store and a PostgreSQL database.  We use Docker containers for both of these and link them together.  The Ansible playbook file looks as follows:
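A sketch of the three containers and their links (the env vars and volume paths are illustrative; see the sameersbn image READMEs for the full list):

    ---
    # roles/gitlab/tasks/main.yml
    - name: start the postgresql container
      docker:
        name: gitlab-postgresql
        image: sameersbn/postgresql:latest
        state: started
        env:
          DB_NAME: gitlabhq_production
          DB_USER: gitlab
          DB_PASS: "{{ gitlab_db_password }}"
        volumes:
          - /vol/gitlab/postgresql:/var/lib/postgresql

    - name: start the redis container
      docker:
        name: gitlab-redis
        image: sameersbn/redis:latest
        state: started

    - name: start gitlab, linked to both
      docker:
        name: gitlab
        image: sameersbn/gitlab:latest
        state: started
        ports:
          - "10080:80"
          - "10022:22"
        links:
          - "gitlab-postgresql:postgresql"
          - "gitlab-redis:redisio"
        env:
          DB_PASS: "{{ gitlab_db_password }}"
        volumes:
          - /vol/gitlab/gitlab:/home/git/data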

Notice that the gitlab_db_password variable is defined in the role's ../vars/main.yml file.  I set this up and then encrypted the file using Ansible Vault.  See my post on how that is accomplished; it's a pretty cool technique I learned from our Portland Ansible Users Group.


jenkins

The Jenkins Ansible installation script is pretty straightforward.  The only catch is to make sure the data directory is owned by the jenkins user and that you mount the directory.
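Something like this (the uid 1000 owner matches the jenkins user inside the official image; the port and paths are assumptions):

    ---
    # roles/jenkins/tasks/main.yml
    - name: make sure the jenkins data directory exists and is owned by the jenkins uid
      file: path=/vol/jenkins state=directory owner=1000 mode=0755

    - name: run the jenkins container with its home mounted from the volume
      docker:
        name: jenkins
        image: jenkins:latest
        state: started
        ports:
          - "8080:8080"
        volumes:
          - /vol/jenkins:/var/jenkins_home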


registry

No tricks here; we're just using the latest registry image from the Docker Hub.  This task goes out, pulls the registry image, and runs it.
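A sketch (the storage path is the old v1 registry's default inside the container):

    ---
    # roles/registry/tasks/main.yml
    - name: run the docker registry container
      docker:
        name: registry
        image: registry:latest
        state: started
        ports:
          - "5000:5000"
        volumes:
          - /vol/registry:/tmp/registry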

Loose Ends

There are a few parts that I didn’t automate that should be done.

  1. The instance I created mounts a persistent storage device that I created in Metacloud.  There are two pieces missing:
    1. It doesn't create the volume in OpenStack if it's not there yet.
    2. It doesn't mount the volume onto the development server.
  2. For speed, it's better to pull the Docker containers from the local registry.  So technically we should tag all the images we're using and put them in the local registry.  This is a chicken-and-egg problem because you need the registry up before you can download images from it, so I left it that way.
  3. There are still some things I needed to finish, like putting some keys and other items Jenkins needs into the /vol directory.  It's not perfect, but it's pretty good.

Creating the volume and mounting was pretty quick once the image was up. First I created the volume and assigned it using the Horizon Dashboard that Metacloud provides.



This was just a 20GB volume.  Once the instance was up I ran a few commands like:
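(The device name will depend on your instance; mine showed up as the first extra virtio disk.)

    sudo mkfs.ext4 /dev/vdb      # format the attached volume
    sudo mkdir -p /vol
    sudo mount /dev/vdb /vol     # mount it where the containers expect their data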

This way all of our information persists if the containers terminate and if the instances terminate.

Finishing up the Development Server

Once you get to this point, you should be able to bring it all up with:
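(Using the playbook name from the sketch above; --ask-vault-pass because of the encrypted vars.)

    ansible-playbook ci.yml --ask-vault-pass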

That should do it!  Once you are in you may want to tag all of your images so that they load in the local docker registry.  For example, once you log in you could run:
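For example (repeat for the other images you use, such as jenkins, registry, and the sameersbn Gitlab images):

    docker pull nginx
    docker tag nginx ci:5000/nginx
    docker push ci:5000/nginx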

At this point the idea is that you should be able to go to whatever public IP address was assigned to you and be able to access:


If you’re there then you can get rolling to the next step:  Ansible scripts to deploy the rest of the environment.

In Part 3 we’ll cover Ansible for bringing up the load balancers and web servers. We’ll also snapshot an image to make it a jenkins slave.

Continuous Delivery of a Simple Web Application Tutorial – Part 1

This will be the first of a series of posts showing how we can do continuous delivery of a simple web application.  This will be written tutorial style to show all the different components used.  We are using Open Source tools on Cisco OpenStack private cloud, but the majority of the instructions here could be used in any cloud.  This first part in the series is going to introduce the architecture and the components.

In Part 2 we configure the build environment with Ansible.

In Part 3 we configure the load balancers and the web site.

In Part 4 we configure Jenkins and put it all together.

Code for this can be found on Github in my Cisco Live 2015 repo.

If you want to see the end result of what we’re building, check out this video


A New Startup!


Our startup is called Lawn Gnomed.  We specialize in lawn gnomes as a service.  Basically, suppose your friend has a birthday.  You engage with our website and on the day of their birthday they wake up and there are 50 lawn gnomes on their front yard with a nice banner that says: “Happy Birthday you old man!”.  We set up the gnomes, we take them down.  Just decorations.

The Requirements & the Stack

We need to react fast to changes in our business model.  Right now we're targeting birthdays, but what if we want to target pool parties?  What about updates to our site for Father's Day or Mother's Day?  Instead of listening to the HiPPO (the Highest Paid Person's Opinion), we need to be able to react quicker: try things out fast, and if we're wrong, change fast.


1.  We have a private cloud.  We are part of a large corporation already.  This app fits under the category of “System of Innovation” as defined by Gartner.  We’re going to develop this on our Private Cloud.  In this case Cisco OpenStack Private Cloud (formerly Metacloud) fits the bill for this nicely.

2.  Our executives think we should leverage as much internally as possible.  Our goals are to keep things all in our private data center.  Most of these tools could use services and cloud based tools instead, but there are already plenty of tutorials out there for those types of environments.  Instead, we’re going to focus on keeping everything in house.

3.  We want to use containers for everything and keep everything ephemeral.  We should be able to spin this environment up as quickly as possible in other clouds if we decide to change, so we are avoiding lock-in as much as possible.  This may be a bad idea, as some argue, but this is the choice we are going with.

The Stack

So here is our stack we’re developing with:

  • OpenStack.  In this example we are using Cisco OpenStack Private Cloud (formerly Metacloud) but we may instead decide that we want to do this on a different cloud platform, like Digital Ocean, AWS, or Azure.
  • CoreOS.  CoreOS is a popular lightweight operating system that works great for running containers.
  • Docker.  Our applications will be delivered and bundled in Docker Containers.  They are quick and fast.
  • Gitlab.  We are using the free open source version of what Github offers to most organizations.  We will be using Gitlab to check in all of our code.
  • Jenkins.  Our continuous integration service will listen to Gitlab (or Github if you use that) and not only run our automated test cases when new changes are pushed, but also update our live servers.
  • Slack.  This is the only service we don’t host in house.  Slack allows our team to be alerted anytime there is a new build or if something fails.
  • Ansible.  We are using Ansible to deploy our entire environment.  Nothing should have to be done manually (where possible) if we can automate all the things.  We’ve mostly followed that in this article, but there are some hands on places that we feel are ok for now, but can automate later.

In this series we will not be concentrating so much on what the app does or the database structure, but in an effort to be complete: for now we are using a simple Ruby on Rails application that uses Bootstrap with a MariaDB backend.

The Application Stack

The application will be a set of scalable web services behind a pair of load balancers.  Those in turn will talk to another set of load balancers that will house our database cluster.

The diagram below gives a high level view of our application.

  • Blue circles represent the instances that are running in our cloud.
  • Red circles represent the containers.
  • Green circles represent the mounted volumes that persist even when containers or instances go away.

We will probably add multiple containers and volumes to each instance, but for simplicity we show it running this way.

We have several choices on Metacloud as to where we put the components.  Cisco OpenStack Private Cloud has the concept of Availability Zones, which are analogous to AWS Regions.  If we were to do A/B testing, we could put several components inside a different availability zone or a different project.  Similarly, we could put the database portion inside its own project, or separate projects, depending on what types of experiments we are looking to run.


Diving in a little deeper we can make each service a project.  In this case the application could be a project and the database could be a separate project within each AZ.


Autoscaling the Stack

Cisco OpenStack Private Cloud does not come with an autoscaling solution.  Since Ceilometer is not part of the solution today, we can't use that to determine load.  We can, however, use third-party cloud management tools like those from Scalr or RightScale.  These communicate with Cisco OpenStack Private Cloud via the APIs as well as agents installed on the running instances.

There is also the ability to run a poor man's autoscaling system, cobbled together with something like Nagios and scripts that:

  1. Add or remove instances from a load balancer.
  2. Monitor the CPU, memory, or other components on a system.

Anti-Affinity Services

We would like the instances to run on separate physical hosts to increase stability.  Since the major update in the April release, we have the ability to add anti-affinity rules to accomplish this.
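With the nova CLI that looks something like this (the image, flavor, and group UUID are placeholders):

    # create a server group with the anti-affinity policy
    nova server-group-create web-group anti-affinity

    # boot the web instances into that group
    nova boot --image <coreos-image> --flavor <flavor> --hint group=<server-group-uuid> web01
    nova boot --image <coreos-image> --flavor <flavor> --hint group=<server-group-uuid> web02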

This rule will launch web01 and web02 on different physical servers.  We mention it now because we won't be going over it in the rest of the articles.

Logging and Analytics

Something we'll be going over in a future post (I hope!) is how to log all the activity that happens in this system.  This would include a logging system like Logstash that would consolidate every click and put it into a place where we can run analytics applications.  From this we can determine what paths our users take when they look at our website.  We could also analyze where our users come from (geographically) and what times our web traffic gets hit the hardest.

Cisco OpenStack Private Cloud allows us to carve up our hypervisors into aggregates.  An aggregate is a collection of nodes that may be dedicated to one or more projects.  In this case, it could be Hadoop.

(Screenshot: host aggregates in the dashboard)

The blue arrow denotes the collection of servers we use for our analytics.

Continuous Delivery Environment

A simple view of our Continuous Delivery environment is shown below.

Let's go over the steps at a high level.

  1. A developer updates code and pushes it to Gitlab.  This Gitlab server is the place where all of our code resides.
  2. When Gitlab sees that new code has been received, it notifies Jenkins.  Gitlab also notifies Slack (and thus all the slackers) that there was a code commit.
  3. Jenkins takes the code, merges it, and then begins the tests.  Jenkins also notifies all the slackers that this is going to happen.
  4. As part of the build process, Jenkins creates new instances on Cisco OpenStack Private Cloud / Metacloud.  Here's what the instances do when they boot up:
    1. Download the code from gitlab that was just checked in.
    2. Perform a ‘Docker build’ to build a new container.
    3. Run test cases on the container.
  5. If the tests are successful, the container is pushed to a local Docker Registry where it is now ready for production.  Slack is notified that new containers are ready for production.
  6. A second Jenkins job has been configured to automatically go into each of our existing web hosts, download the new containers, put them into production, and remove the old ones.  This only happens if the new build passed.

This whole process in my test environment takes about 5 minutes.  If we were to run further test cases it could take longer but this illustrates the job pretty quickly.

The Build Environment


Our build environment is pretty simple.  It consists of a single instance with a mounted volume.  On this instance we are running 4 containers:

  1. NGINX.  This does our reverse proxying so that subdomains can be hit.
  2. Jenkins.  This is the star of our show that runs the builds and puts things into production.
  3. Registry.  This is a local docker registry.  We’re using the older one here.
  4. Gitlab.  This is where we put all our code!

This shows the power of running containers.  Some of these services need their own databases and redis caches.  Putting that all on a single machine and coordinating dependencies is crazy.  By using containers we can pile them up where we need them.

The other thing to note is that all of the instances we create in OpenStack are the same type.  CoreOS 633.1.0 right now.

Getting Started

The last piece of this first part is that we’ll need to gain access to our cloud.  Not just GUI access but command line access so that we can interface with the APIs.


Once you login to your project you can go to the Access & Security menu and select API Access.  From there you can download the OpenStack RC file.

Test it out with the nova commands:
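For example (source whichever RC file you downloaded first):

    source openrc.sh     # the RC file from the dashboard; it will prompt for your password
    nova list
    nova image-list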

While you may not see all the instances that I’m showing here, you should at least see some output that shows things are successful.

What’s Next

The next sections will be more hands on.  Refer back to this section for any questions as to what the different components do.  The next section will talk about:

Part 2: Setting up the Development machine.  Ansible, CoreOS, Docker, Jenkins, and all the other hotness.

  • Getting Image for CoreOS
  • Ansible Configuration
  • Cloud Init to boot up our instances
  • Deploying load balancers
  • Deploying web servers
  • Deploying Jenkins Slaves.


Debug CoreOS Cloud Init Script

When I log in after trying to debug something I find that the command prompt gives me the following:

So to figure out where my script went wrong, I simply have to run the command:
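In general terms it's the usual systemd routine (the unit name is whatever the login banner reported as failed):

    systemctl list-units --state=failed     # see which unit failed
    journalctl -b -u <the-failed-unit>      # read its log for this boot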

So in my case it was:

It turns out I put an extra ssh-keyscan in the script and it thought it was the hostname!

Removing the extra characters worked.


CoreOS, Ansible, OpenStack, and a Private Registry

This took me longer than I want to admit to figure out, so I thought I’d post this solution here.  I’m doing this on Cisco’s OpenStack Private Cloud (COPC) (formerly known as Metacloud).

Problem:  Want to deploy a CoreOS instance that can access docker images from a private registry.  I want to do this with Ansible.

Why it's hard: there isn't a lot of good documentation on this in one place.  I kept getting this error:

Really started to aggravate me.

Ansible Playbook

Here’s the playbook in its final glory:
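In outline (parameter names are from the old nova_compute module and the variable names are placeholders; auth arguments are omitted for brevity):

    ---
    - hosts: localhost
      connection: local
      tasks:
        - name: boot a CoreOS instance that trusts the private registry
          nova_compute:
            name: coreos-demo
            state: present
            image_id: "{{ coreos_image }}"
            flavor_id: "{{ flavor }}"
            key_name: "{{ key_name }}"
            security_groups: default
            user_data: "{{ lookup('file', 'coreos-cloud-config.yaml') }}"
            wait: yes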

The coreos-cloud-config.yaml file looks like this:
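The important piece is a drop-in that adds --insecure-registry to docker; roughly (point the address at your own registry):

    #cloud-config
    coreos:
      units:
        - name: docker.service
          drop-ins:
            - name: 50-insecure-registry.conf
              content: |
                [Service]
                Environment='DOCKER_OPTS=--insecure-registry="ci:5000"'
          command: restart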

There were a few things to note:

  1. If I used config_drive: yes, as some documentation suggested, I had problems.
  2. I was using a different cloud-config layout that had me use files instead.  Not sure why I did this, but I figured it out by switching to the other approach.  As you can see, I even opened an issue on the CoreOS GitHub repo.  I think this is what you need to do in order to solve your own problems, and the reason we all need a rubber duck.
  3. The CoreOS documentation shows an IP address range, but I just put in the actual registry and it works great.

Hoping that helps someone else not struggle like I did for hours…


Can AWS put out the private cloud?

TL;DR: No.

Today I watched a recording of Andy Jassy at the AWS Summit in San Francisco from last month.  (Yes, I have a backlog of YouTube videos I'm trying to get through, and John Oliver takes precedence over AWS.)

The statistics he laid out were incredible:

  • 102% YoY revenue growth for the S3 service comparing Q4 2014 to Q4 2013
  • 93% YoY revenue growth for the EC2 service comparing Q4 2014 to Q4 2013
  • 1,000,000 active customers.  (Active customers do not include internal Amazon customers)
  • 40% YoY Revenue growth in total

Andy went on to say that AWS is the fastest growing multi-billion-dollar enterprise IT company.  He backed this up by showing what I would guess they may internally be calling the "dinosaur slide": a slide that showed the lagging growth of other companies like Cisco (7%), IBM, HP, etc.  (Never mind that he was comparing against the total Cisco business and not just the data center business that AWS competes with.)

This presentation with its great guest speakers and the announcement of the new EFS service really set the Internet on fire.  There were posts such as this one: “How in the hell will any cloud ever catch AWS” and many more.

I love AWS.  After being stuck using VMware for a few years, AWS just feels so right.  I love how easy it is to use and develop applications, and I especially love the APIs.  But I have to take exception to the simple Dark Lord Sith logic that there is "the right way = AWS" and "the wrong way = whatever else you're doing".  This is what I call the Sith Lord slide that he showed:

The Sith Lord Slide from AWS Summit San Francisco April 2015


The slide and the pitch suggest rather explicitly that if you build your own frozen data center and don't use AWS, you will get left behind.  Really, AWS says there is no need for your own data centers.  AWS releases new services nearly every day, 512 in 2014 (I'm not sure what all 512 are, nor what they consider a service, but this seems like marketing hyperbole).  And there is no way you or any other data center can catch up.

Also this last weekend there was an article whose title looked like it was written by Bilbo Baggins: "There and Back Again: Zynga's Tale with Amazon's cloud".  The article talks about how Zynga, once so hot, tried to wean itself from AWS and then, after several years, decided it would ditch its own data centers and go back to AWS.  All in, according to their CEO on an earnings call last week.

But that’s just like your opinion, man

So based on all that, it seems that building your own data center is a fool's errand.  In light of all this, I have to quote The Big Lebowski and say: "Yeah? Well, you know, that's just, like, your opinion, man."


A differing opinion, man

Let’s just talk in general now and then talk specifically about what we know from the Zynga article.

  1. The Enterprise: The final frontier
  2. Taxi cabs or rent a car?
  3. Feature complete or good enough?
  4. What can we build together on open source?

The Enterprise: The final frontier

AWS's reach has largely been with startups.  My two instances in AWS count toward that 1,000,000 customer count.  Now the focus is on the enterprise, where they are starting to see great success.  But the enterprise may not be as good a fit to go 'all in', and there's one good reason:

Large static workloads.  Startups have workload that comes and goes based on customers and seasonality.  So too does the enterprise, but there is still a large amount of static workload that doesn't change size.  Take the off-the-shelf HR programs of these companies that just live in their own data centers.  It makes good sense for a line-of-business organization to move a lot of its customer-facing, variable-driven applications to the cloud.  But what about that static workload?

Look, static workloads are as unsexy as a corner drugstore.  They are the old applications.  They are about as sexy as the applications that run on a mainframe (a business which, by the way, continues to grow for IBM).  But for the enterprise, we still need them.  Perhaps these old workloads are the new mainframe workloads.

In the future these workloads will probably be offered as SaaS-based applications and the enterprise can abandon them, but for now most of them aren't.  In addition there are applications that are home grown, like a facilities application at certain universities that hasn't been rewritten to be cloud aware (probably a lot of money in this, by the way).

But it's not just off-the-shelf software packages from Oracle.  It could be the company's own product: a SaaS offering delivered to its customers.  If that is largely static, and we already have the data centers, why not just use them?  It doesn't require extra capital outlay, as we've already got them.  The only thing enterprises may be lacking is the cloud operations experience, but that is something they can buy from something like Metacloud, now Cisco OpenStack Private Cloud.

Uber or rent a car?

Scott Sanchez talks about how workload infrastructure is similar to transportation options.  The cloud is like taking a taxi.  When you only need it now and then, it makes a lot more sense.  When I fly into Seattle, I grab a cab because it's so much easier.  I don't have to take the bus all the way to the rental car facility.  I hop into the cab and the driver takes the carpool lanes all the way to the city.  I pay and I'm done.  This is like the cloud: it's super effective and fast.

But does hiring a cab make sense if I need a car all week with multiple meetings in different cities?  Hiring a cab gets really expensive and may not be the best fit.  It costs a lot more to keep the driver sitting there unused, meter running, while I'm in my meetings.  In this case, I may be better off just renting a car, especially if it's multiple days, different cities, and places where parking is free.

The best option for large enterprises would be to have a place where they can run static workloads more cost effectively and, if things get dynamic, burst those workloads to the cloud.  Some apps go here, some go there.

Feature complete or good enough?

Can anybody catch AWS?  Oh yeah.  Let's look at VMware as a case study.  Nothing even came close to offering its complete, feature-rich, easy-to-use vSphere products.  But what happened?  A new technology like AWS started to erode that space, and then Microsoft got 'good enough'.  This will continue to happen; even though VMware is currently meeting earnings expectations, I'm already predicting its demise.  Let's check back in 5 years and see how 2020 earnings look.  Maybe I'm wrong?

Microsoft continues to get good enough, and its new offerings are very compelling to the enterprise because they include a strategy for leveraging the existing data center.  Their Azure stack offers the link for hybrid cloud that AWS doesn't have.  It's not perfect, but it's on track to do what enterprises need: connect the static with the variable.

What about feature complete?  No service is really feature complete, but for the average programmer and IT organization, how many of those AWS services are people really using?  For organizations just starting out, give me EC2, ELB, S3, CloudFormation, and CloudWatch for autoscaling and I'm pretty good.

Guess what?  OpenStack already has at minimum second- or third-generation projects that do that.  Consider the rise of Digital Ocean with its cheaper services, limited features, and hyper growth.

The other issue is that newer clouds don't need the baggage of the old guard.  If we had a cloud service that was container based (Docker, rkt), then we could use that.  AWS has ECS, but the current version leaves a lot to be desired.  I'm convinced a container-based cloud is the future and the only real cloud we'll need (I've been wrong before… a lot), but as a full stack guy, I'm going all in on containers.  I don't know if Docker will survive, but containers will.

What can we build together

This brings me to my last point, and this is how I think everybody else can win: if we standardize our private clouds on an open architecture and then at some point connect them together, we can really do something incredible.  If we did that we could:

1.  Offer cheaper cloud services than even AWS to our customers.  Private cloud wins every time on cost, but cost isn't why people use AWS: it's about speed and features.  If we can't get those with a private cloud there's no reason to have it.  But if we can deliver both, we win.

2.  Offer capacity up.  Think of it like a house with solar panels.  You make extra electricity and the electric company pays you for what you put back on the grid.  If we can create secure connections and secure data at rest, then data centers can attach to this cloud-grid and consume and offer services all over the place.  You will have more data centers to choose from than even AWS has.

Positioning your private cloud around proprietary offerings limits your ability to engage with the larger community.  No doubt these proprietary offerings have their place and will have their day in the sun, but I fail to see how they help build a community large enough to be sustainable; that is, unless they achieve critical mass, but then we've got the same problem of one player setting the rules.

An open platform offers a way for everybody to win.


Let's turn back to Zynga.  I used to tell everybody how Zynga got so big they had to build their own cloud because their AWS bill was too big.  Zynga gave great reasons, including over-provisioning, for building their own cloud.  If they've gone back to AWS 'all-in', what does that say about them and what can we infer?  Does it mean AWS killed the private cloud?  Here's what I infer:

1.  Revenue isn't coming in.  Trimming its workforce by 18% and jettisoning data centers are cost-cutting measures so they can free capital to invest in new games.  Gaming is a tough business, and it seems there is no fixed workload.

2.  Variability.  Zynga's games may be more variable than previously thought.  This probably relates to point 1.  Did the variability in Zynga's games mean they had overbuilt capacity?

3.  Is private cloud best served as a product or a service?  We know a little about the Zcloud.  It was at one point based on CloudStack with RightScale to provision workloads.  Citrix sells products; it's not an operations company.  Zynga used CloudStack before it was open source.  While reports generally show that CloudStack is getting better, perhaps its features were not enough for Zynga, and maintenance and upgrades weren't so easy.  In market adoption, CloudStack is still at best a distant second to OpenStack.  But open source alone isn't going to save people.  A service like Metacloud, now Cisco OpenStack Private Cloud, may have saved this.

4.  Were developers happy using the internal cloud?  If they weren't and couldn't move as fast, then perhaps they didn't want the Zcloud.  Perhaps the Zcloud was a source of contention the whole time it was around.

Lastly, this interesting tweet from Adrian Cockcroft:

(Tweet screenshot)

I can't argue with that, but I can question it: did they save $100M with their own data centers and just not redeploy the capital well enough?  Obviously there was a TCO analysis done; perhaps it just didn't work out because the bets didn't pay off.  What if House of Cards had been a flop?  Netflix still pays a lot of money for AWS, and I would argue it has more of a sustainable advantage in the marketplace than Zynga does.  Does the result of this experiment apply to everybody?


  • I still believe in a future of loosely federated clouds that can offer capacity to each other.  I’m not ‘all-in’ on the public cloud.  Just like I’m not ‘all-in’ on getting rid of mainframes.
  • I believe a large enterprise would save and benefit from a private cloud as a service offering from something like Metacloud rather than pure open source products alone.  Metacloud mitigates risk and delivers the core capability (IaaS) that AWS provides.
  • Organizations within Enterprises should use public clouds like AWS.  It makes a lot of sense.  Even if they have private clouds, I still advocate using public clouds like I do using Uber or Taxis… Just not all the time.
  • Zynga’s outcome doesn’t apply to everyone.
  • We need better hybrid cloud solutions.  We need better ways to connect the clouds.





Deploying Instances on COPC (metacloud) with Ansible

I wanted to show a quick example of how to deploy an instance on Cisco OpenStack Private Cloud (COPC or Cisco OPC or MetaCloud) with Ansible.  Since COPC is just a fully engineered and operated distribution of OpenStack from Cisco, this blog is also applicable to normal OpenStack environments.

I’m a big fan of Ansible because everything is agentless.  I also think the team has done a phenomenal job on the docs.  We’ll be using the nova compute docs here.  I don’t have to install anything on the instances to be able to do it and I can just run it from my laptop with minimal dependencies.  Here’s how I do it with CoreOS.

1.  Get Credentials

On COPC, you can navigate to your project and download the OpenStack RC File.  This is done from the ACCESS & SECURITY tab and then clicking on the API Access tab on the right.

Once you download this file, you put it in your ~/ directory.  I use a Mac, so I just added the contents to a file in my home directory.  It looks like this:
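It's the standard Horizon-generated RC file; roughly (the values are placeholders):

    export OS_AUTH_URL=https://<your-metacloud-endpoint>:5000/v2.0
    export OS_TENANT_ID=<tenant id>
    export OS_TENANT_NAME="<project name>"
    export OS_USERNAME="<your user>"
    echo "Please enter your OpenStack Password: "
    read -sr OS_PASSWORD_INPUT
    export OS_PASSWORD=$OS_PASSWORD_INPUT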

Now we're ready to roll.

2. Ansible Setup

I covered Ansible in previous posts, so I'm going to assume you already have it.  Let's create a few directories and files.  I put all my stuff in the ~/Code directory and then under the project's directory.  I then make sure everything in this directory belongs to some sort of git repo.  Some of those are on GitHub (like this one) and others are in a private Gitlab or a private GitHub repository.


This file will have our info for where our inventory is.

This holds the global settings for our environment.  We tell it not to use cowsay, but you can if you want; it's kind of cute (you may not have it installed).  We also tell it to use the contents of the inventory directory (which we're going to create) to find our hosts.

host_key_checking tells it not to worry when we access a new server we've never seen before, and to attach to it anyway.  Finally, our remote user is core, as that is the default user for the CoreOS instance I'm using.
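So the config file ends up looking roughly like this:

    [defaults]
    nocows = 1                  # no cowsay
    inventory = inventory       # look in the inventory/ directory for hosts
    host_key_checking = False   # don't prompt on hosts we've never seen
    remote_user = core          # default user on the CoreOS image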


We create a directory called inventory and add the file hosts.  We then add our one machine (our localhost!).  The contents look like this:
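Roughly (the group name and Python path are whatever you want Ansible to use locally):

    [local]
    localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python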

You'll notice here I also specified which Python I wanted to use, just in case I had other versions on the system.  This might be good too if you were using virtual environments.


This is where we put the specifics of what we want deployed.  In our case we need to define the following:
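Something along these lines (the variable names are placeholders that the playbook below refers to):

    ---
    security_groups: default
    image: "<the CoreOS image ID from the dashboard>"
    flavor_id: "<a flavor ID from your cloud>"
    pool: nova                              # the floating IP pool
    key_name: "<the keypair you generated>"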

The security group 'default' in my project, as seen from the dashboard, actually includes port 22.  This is important so that I can SSH into the instance after it's provisioned and do more things.

I imported my coreos image from the CoreOS OpenStack image website.  After importing it in from the dashboard, I clicked on the image to see the image ID:

The floating IP pool is nova; I got that from looking at the dashboard as well.

Finally, the keypair is one I generated beforehand and downloaded into my server so I can log into it afterwards.


This file is our playbook.  It will provision a server.  Let’s look at the contents:
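A sketch (the file and variable names are mine; the parameter names come from the old nova_compute module, so check them against your Ansible version):

    ---
    # provision.yml
    - hosts: localhost
      connection: local
      vars_files:
        - vars/main.yml
      tasks:
        - name: launch demo-server
          nova_compute:
            state: present
            name: demo-server
            login_username: "{{ lookup('env', 'OS_USERNAME') }}"
            login_password: "{{ lookup('env', 'OS_PASSWORD') }}"
            login_tenant_name: "{{ lookup('env', 'OS_TENANT_NAME') }}"
            auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
            image_id: "{{ image }}"
            flavor_id: "{{ flavor_id }}"
            key_name: "{{ key_name }}"
            security_groups: "{{ security_groups }}"
            floating_ip_pools:
              - "{{ pool }}"
            wait: yes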

The great thing about this playbook is that none of the secrets are put into it.  By using the environment variables we set when we sourced the RC file, we are able to run the code without embedding credentials.

Everything here is pretty self-explanatory: we are just passing variables to the nova_compute task to bring up a new instance.  The name will be demo-server, along with everything else we've defined.  If the instance is already up, Ansible won't try to provision a new one.  It looks for demo-server; if it's there, it won't touch it.

3. Run the Playbook
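With the RC file sourced in this shell, it's just (playbook name from the sketch above):

    ansible-playbook provision.yml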

We now watch the output on the dashboard and we can see it spawn up.



The next step is to make it so we can run Ansible playbooks on this host.  The problem right now is that CoreOS is a stripped-down, barebones OS, so there is no Python!  We'll have to add a cloud-init script or do something else to make this work.  I'll save that for another post.  But if you were using Ubuntu or Red Hat, you'd be good to go at this point.


All the code for this is available on GitHub here.

Cisco OpenStack Private Cloud Developer Demo

In my new role here at Cisco I show people how powerful the Cisco OpenStack Private Cloud solution can be for developers.  I made the below video last night to demonstrate continuous integration.

The video is a scenario for a new company called Lawn Gnomed.  The company provides lawn gnomes as a service (LGaaS).  Under the cover of darkness, LG technicians will place 100 gnomes on the front yard of an address you specify.  The person living at that address will wake up to find 100 gnomes on their front yard with a special greeting.  LG wants to add a new feature to their ordering site to allow for a personalized greeting.

The flow goes as follows:

  • LG decides they want to try out a new feature
  • Developers implement feature and perform unit tests to add the feature.
  • Developers check in the feature to a private repository.  In this case they are using Gitlab, an open source version of the service that Github offers.
  • Gitlab has a trigger connected to Jenkins, the continuous integration server.  When Jenkins sees a code checkin from the project it performs integration tests on the merged code.
  • When the Jenkins integration tests pass, the code is built into a new Docker container that is uploaded to a locally hosted Docker registry.
  • Jenkins has a production job that monitors the Docker registry.  When new containers are pushed up, a job is kicked off to go through the running containers, take them offline, and put up the new one.  The load balancer (NGINX) handles the load.

This demo will be posted on GitHub with all the DevOps code as I continue to refine it.  Also, any suggestions are more than welcome!  Perhaps I'm doing it wrong?  I will post my solutions and then let Cunningham's Law dictate how accurate I am.


Subview Rotation on iOS 8 with Swift while keeping static main view

That title is a mouthful.  Basically, I just want to imitate the native iPhone camera app.  The camera button stays locked in place relative to the home button, but the icons rotate with the device, as does the capture area.  Should be pretty simple, right?

tl;dr:  See the code here on github that I did.


Well, with iOS 8 Apple introduced the concept of adaptive layouts (see WWDC session 216).  This introduced size classes and traits, which I think are fantastic, except when you want to imitate the iOS camera application.  Then you don't know what to think.

There were two main ideas I could have used that I came across:

1.  Using two windows.  This was brilliant and I think would work.  It even came with sample code to show how to keep the rotations separate.  I had read other people saying it was a bad idea to have two UIWindows.  I played with this a little bit, but it seemed too much for what I needed.  Plus, I had the UITabBarController as the root view controller, so it was somewhat complicated.

2.  UIInterfaceOrientation.  These methods all seem to be deprecated in iOS 8 and may or may not work.  The problem with them is that the root view controller gets the notification and then signals everybody else.  I may have been able to work with this, but I didn't want to go all the way down the hierarchy and implement these methods for the views that should be static and those that shouldn't be.

I went with UIDevice.orientation.

Here’s the steps:

1. Subclass UITabBarController

Since my main project has a tab bar as the root interface, I started here.  I want all the views to be able to rotate and use Auto Layout, with just one subview that needs to stay the same.  This was accomplished by adding the following method to the view controller:

This makes it so that view 1 in the tab bar won't rotate.

2.  Subscribe for notifications in the App Delegate

I may have been able to do this in the main class, but I did it in the app delegate in case the main class didn't get the alerts.  Then I had it propagate another notification.  This may be a redundant step, but I figured I'd try it and was too lazy to change it back.

3.  Subscribe to notifications in the non-rotating view controller.

Now, to react to these, we're going to rotate the subviews that need to be rotated.  This is done in 3 methods:

Maybe you have a better way?  I’d love to know!

There is one problem with this method: if the application launches in landscape mode, you'll have to rotate it a few times before it actually works in the right mode.

See the full code here.

Corkscrew: SSH over HTTP Proxy

I found myself behind a firewall that didn't allow SSH to the outside world.  It did allow an HTTP proxy, though.  So I thought: let's make sure we can get out!  After all, I needed to download some of my code from GitHub, make changes, and then upload it again.

Here’s how I did it.

1.  Install Corkscrew
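On a Mac with Homebrew (it's also packaged for most Linux distributions):

    brew install corkscrew
    # e.g. on Debian/Ubuntu: sudo apt-get install corkscrew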


2.  SSH Config

Now let's make GitHub work.  We edit ~/.ssh/config:
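Add an entry like this (the proxy host and port are placeholders for your own proxy):

    Host github
        HostName github.com
        User git
        ProxyCommand corkscrew proxy.example.com 80 %h %p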

Now we can test by running: ssh github

This gives us the familiar response:
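Something along these lines (the exact wording varies):

    PTY allocation request failed on channel 0
    Hi vallard! You've successfully authenticated, but GitHub does not provide shell access.
    Connection to github.com closed.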

I won’t worry about the channel 0 failure.

3. Git Config

Inside our git repository, we can update .git/config to point to the github alias.
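The remote's url just swaps github.com for the ssh alias we set up (the repo path is an example):

    [remote "origin"]
        url = github:vallard/some-repo.git
        fetch = +refs/heads/*:refs/remotes/origin/*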

Now we can do git push without any issues!