Jenkins running in Docker image behind firewall

When you run Jenkins behind a firewall, it still needs to reach the outside world (to download plugins, for example).  You'll have to set up proxies to make that happen.  Here's how I get it running:

First, make CoreOS able to get out of your network:
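The usual way to do this on CoreOS is a systemd drop-in for the Docker service, so something like:

```bash
sudo mkdir -p /etc/systemd/system/docker.service.d
```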

Next, edit http-proxy.conf and make it look like this:
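(proxy.example.com:8080 is a placeholder for your own proxy.)

```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
```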

This will allow Docker to get outside the firewall.

Next restart docker:
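On a systemd host like CoreOS that's just a reload and restart:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```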

Now, on to Jenkins.  Grab Jenkins:
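For example, pulling the official image (which was simply named jenkins at the time):

```bash
docker pull jenkins
```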

Make a persistent directory to store settings (this should be a persistent volume mount):
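The path is arbitrary; the official image runs as UID 1000, so the directory needs to be writable by that user:

```bash
sudo mkdir -p /var/jenkins_home
sudo chown 1000:1000 /var/jenkins_home
```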

Now run the container with the JAVA_OPTS proxy flags, as documented here:
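A sketch of the run command, again with a placeholder proxy host and port:

```bash
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v /var/jenkins_home:/var/jenkins_home \
  --env JAVA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080" \
  jenkins
```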

You'll obviously want to substitute your own proxy server and port!

You should now be able to install all the plugins you need!  Hurray!

Ethereum + Docker + Terraform + Packer

I created a quick way to get a private Ethereum cluster up.  I'll be presenting some of this at Cisco Live Berlin in the DevNet session I'll be doing with Tom Davies.  All of the information is available in my GitHub repo.

The high-level overview: we want a way to bring up a private Ethereum chain to do our own testing.  I seeded the account with lots of ether so that we can write an essentially unlimited number of transactions to our chain.  I'll be exploring smart contracts in my next post, but for now, please check out the instructions on the GitHub site and let me know how you would make it better!

https://github.com/vallard/ethereum-infrastructure

It’s just lube

When looking to optimize their application deployment cycle, organizations often turn to buzzwords such as CI/CD (Continuous Integration/Continuous Delivery) and DevOps, and invariably to the different products/projects they've heard of that do this.  In this buzzword blitz it's easy to get confused about what does what.  Today we are hearing about Jenkins, Kubernetes, Mesos, Ansible, Terraform, Travis CI, GitHub, etc.  For people who come from a systems background with no development experience, this can be daunting.

As I've spoken with enterprise IT organizations, as well as internally with Cisco account managers, I can't help but think how true the diagram that Simon Wardley created is (also, watch this great video from OSCON 2015):

Most enterprise customers are near, at, or past the last stage and are adopting.  Some are moving faster than others, but all of us know we need to adapt.  In the midst of this buzzword blitz, I thought I would present a slide I've been sharing that shows what we are looking for when we talk about agile IT from a developer's point of view.

You have to look at it as you would a factory assembly floor.  On one side, you have the raw inputs coming in.  These inputs are shaped into different components so that the end result is something amazing: a new car, an airplane, an iPhone, etc.  All we are trying to do is optimize this assembly process.  That's the goal of manufacturing and the goal of agile IT.

Putting this in the IT world, you have a developer with code as the raw input.  On the other side is some amazing product that delights users (or at least gets them to want to use it, and perhaps even pay for it).  In the middle of these two points is friction.  This friction has been bad for a long time, and it has really bogged down our application development pipeline.

lube

Agile, DevOps, etc. are just lubricant.  These development tools are lube.  All you need is lube.  People go to the cloud because they want more lube.  In the TCO calculations I've done, it almost always works out that using a public cloud is more expensive.  People know this.  So why do they go there?  Two main reasons:

  1. Speed (more lube)
  2. Capabilities (more lube)

The issue is that there isn't enough lubricant in IT organizations.  Developers and IT organizations have two very conflicting goals.  Developers want to create more instability by adding features and trying new systems.  On the other end, IT organizations want to create more stability.  (See the book Lean Enterprise for more discussion on this.)

The other issue is that developers have won the battle of IT vs. developers.  Businesses know they need to digitize to stay relevant and competitive, and they need creative developers to do that.

Going to the cloud and implementing agile IT methodologies is just lube to get things done faster.  When people start company-dropping (like name-dropping, but with new companies you may not have heard of), you just have to remember: it's lube.  You look at the pipeline and ask them, based on the girl and the elephant picture above: how does this new product/project/company provide lube?  From there it's not too hard to understand how one of these things you may not be familiar with works.

 

Cisco Spark APIs

I'm super excited that Cisco Spark finally gave us APIs that we can use to build third-party apps.  As I've been doing more development and have several projects underway, we've been using Cisco Spark for collaboration.  Not having APIs was the one thing keeping me off of it for several projects.

Today I started the DevNet tutorials, but since I already know how to work with APIs I didn't want to just do the Postman stuff, so I started writing some libraries in Go.  I posted what I've done so far on GitHub.  There's not much there yet, but as I start to integrate things I expect I'll have more to do on this.
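For a flavor of the kind of call the library wraps, here's a minimal sketch that lists rooms with the Spark REST API (the SPARK_TOKEN environment variable is my own convention, not something from the tutorials):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
)

func main() {
	// SPARK_TOKEN is an assumed environment variable holding your developer token.
	req, _ := http.NewRequest("GET", "https://api.ciscospark.com/v1/rooms", nil)
	req.Header.Set("Authorization", "Bearer "+os.Getenv("SPARK_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON list of rooms.
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```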

One thing that has frustrated me is the inability to delete chat rooms.  I have one room up that I can't even log in to see what it is.  I wish that room would just go away.  It may be user error, but I wish I could figure it out more easily.

Finally, I should mention that the DevNet tutorials are excellent to work with.  They have a promotion now that gives away GitHub credits if you finish the three tutorials.  I can't wait to see what happens next :-)

Finally Finally:  If you have Jenkins integration with Spark already done, please let me know!  I’d like to use it!

 

AWS Re:Invent 2015

AWS re:Invent was an amazing conference.  All of my notes from the events I went to are here. (Scroll down to read the README.md file.)

Just some quick overall thoughts:

1.  Compared to Cisco, AWS really skips out on the food and entertainment.  I mean, come on, we had Aerosmith at Cisco Live and AWS gives us what?  I can’t even remember the name.  Doesn’t really matter, cause I went home that night anyway.

2.  This should be a longer event.  There were too many sessions I wanted to attend.  I was fortunate enough to attend an IoT bootcamp, and it could easily have gone another day if they had added some analysis.  I wish it had.

3.  The announcements never stopped.  I lost count around 20, but there were a ton of new features and services.  Take Amazon Snowball: $200 to send 50 TB into AWS.  Best comment on that?  It costs $1,500 to move it back out (50,000 GB * $0.03/GB).

4.  The biggest surprise to me was hearing how many customers use the Cisco CSR 1000v.  It's not my product to know, so I don't feel bad saying this: I didn't think there were so many users of it!  Wow.  The use case was "transitive routing."  Imagine having three VPCs, only one of which is externally connected.  Placing a pair of CSR 1000vs in that externally connected VPC allows the other VPCs to communicate with each other using BGP internally.  Pretty cool.

5.  Everyone is in trouble.  When Amazon QuickSight was announced I thought: wow, if you're into analytics in the cloud, you are in trouble.  I don't know which companies may have been affected by that announcement, but I suspect they are just the tip of the iceberg.  Take New Relic, for example.  Right now they are doing really well with admin analytics.  How long before AWS puts out a service to do that?

6.  What I was wondering about was whether they would ever announce some sort of on-prem solution.  The closest they got was Amazon Snowball, bless their hearts.  It probably doesn't make sense for them to complicate things that way, and it raises the risk of intellectual property getting loose.  After all, these are Linux machines, and if a managed service ran on-prem, it would be easy to get into.

7.  Look out, Oracle!  Whoa, that was some serious swinging.  And Oracle, you have a lot to worry about.  First of all, nobody I talk to really likes you.  People have nostalgic feelings for Sun, but I haven't really talked to people who like Oracle.  Perhaps that's because I don't talk to database administrators as much.  But guess what?  Nobody really likes them either.  So you have a hated product run by hated people.  It probably won't take long for people to dump it when refresh season comes around.

8.  Lambda.  Last year, AWS introduced Lambda, and I don't think people really get yet how important it is.  It's the glue that makes a serverless architecture in AWS work.  "The easiest server to manage is no server," said Werner Vogels.  This is the real future of the cloud.  As my previous post on getting rid of the operating system said, managing operating systems is the last mile.  VMs are a thing of the past.  Even containers are less exciting when you think about a serverless architecture: just a place to execute code, and APIs to do all the work.  Database, storage, streaming, any service you want is just an API.  Where AWS Lambda falls short in my book is that it's limited to AWS services only.  Imagine if it could extend to any cloud service.  That, to me, would be the real Intercloud Cisco dreams about.  As more cloud APIs develop, extending Lambda into an "API store" is something more people would find value in.  Amazon probably wouldn't, because it means people using non-AWS services.  But this is where I would be investing if I were trying to compete against AWS.  Nothing else seems to be working.

Anyway, that’s my take.  What did you think?

 

Using Packer with Cisco Metapod

Packer is such a cool tool for creating images that can be used with any cloud provider.  With the same Packer template you can create common images on AWS, DigitalOcean, OpenStack, and more!  The Packer documentation already includes an example of running against Metapod (formerly Metacloud).

I wanted to show an example of how I use Packer to create images for both AWS and OpenStack.  Without any further commentary, here is what it looks like:
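(A sketch rather than my exact file: the AMI ID, image UUID, flavors, and package list are placeholders you'd swap for your own, and the OpenStack builder picks up the usual OS_* environment variables for credentials.)

```json
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-west-2",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "jenkins-slave {{timestamp}}"
    },
    {
      "type": "openstack",
      "image_name": "jenkins-slave",
      "source_image": "<base-image-uuid>",
      "flavor": "m1.medium",
      "ssh_username": "ubuntu",
      "floating_ip_pool": "nova"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y openjdk-7-jre-headless git"
      ]
    }
  ]
}
```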

There are two sections to this file: the builders (AWS and OpenStack) and the provisioners (the code that gets executed on both of them).  We specify as many different platforms as we have and then tell Packer what to do on each.  When this run finishes, our Jenkins slaves are ready on all of our platforms.  This is super nice because now we just point our robots at whichever system we want and we have automated control!

With Metapod there were a few gotchas:

  • By default, instances are not created with public IP addresses, so you need to grab a floating IP for the build to work correctly.
  • Packer actually boots a running instance and then logs into it via SSH to configure it, so the public IP is required if you are running the build from outside the firewall!

Hybrid cloud isn't about moving workloads between two clouds; it's about having tools that can operate, deploy, and tear down workloads on any cloud.  Packer is a great foundational tool because it creates the same image in both places.

 

It’s time to get rid of the operating system

We've abstracted many things, you and I.  But it's time to crack the next nut.  Let me explain why.

Bare Metal

I started out managing applications on bare-metal x86 and AIX servers.  In those days we updated the operating systems to the latest releases, patched them with security updates, and made sure our dependencies were set up in such a way that the whole system would run.  In a way, it's pretty amazing that such a complex stack operated as well as it did: so many dependencies, abstracted away so that we didn't have to worry about the application opening up the Token Ring port, and we could just use a trusted stack to send things on their way.  Life was good, but it sucked at the same time: it was slow to provision servers, boot times were atrocious (especially when UEFI made them even longer), and everything was monolithic and scary to touch.

I remember one company I visited had its entire business records (not backed up) on one server that was choking under the load.  I came in, PXE-booted that server, ran some partimage magic, and migrated it to a bigger, beefier server.  It was the first cold migration I had ever performed.  I felt like an all-powerful sorcerer.  I had intergalactic cosmic powers.  High fives all around.

Virtualization

I started virtualization with Xen, then KVM.  VMware was something the Windows guys were doing, so I wasn't interested.  Then I realized it was based on Red Hat Linux (it has since changed), so my powers were summoned again.  I started helping get applications running on virtual machines.  I did the same thing: I installed the operating system at the latest release, applied security patches, and then made sure the latest dependencies were set up in such a way that the application would run.

But here my magical powers were obsolete.  People were desensitized to vMotion demos.  Oh, you had to shut off that machine to migrate it to another server?  We can migrate it while it's still running.  Watch this: you won't even drop a ping.  People started making the case that hardware was old news.  Why do you care which brand it is?

We could make new VMs quickly, use our servers more efficiently (remember how much people said your power bill would drop?), and hardware maintenance was easier.

All our problems were solved, as long as we paid VMware our dues.

Cloud IaaS

But VMware was for infrastructure guys.  The new breed of hipsters and brogrammers were like: I don't care what my virtualization is, nor my hardware; as long as it's got an API, I can spin it up.  So we started getting our applications working on AWS.  Here we systems dudes started abusing Ruby to create super cool tools like Puppet and Chef and started preaching the gospel of automation.  And when I got applications running in the cloud, I would install the latest operating system, apply security patches, make sure my latest application dependencies were there, and then run the application.

My magical powers of scripting came back in force.  I was a scripting machine.  Now I didn’t care about the hardware, nor the virtualization platform.  I just got things working.  All the problems were solved as long as I paid my monthly AWS bill.

Adrian Cockcroft told me and thousands of my closest friends at conferences that I didn't have to worry about the cost, because I needed to optimize for speed.  If I optimized for speed, projects would get done ahead of time, so I would save money; and because my projects iterated quickly, I would make more money.  We took our scripts and fed them to robots like Jenkins so we could run all our experiments.  We took the brightest minds of our day and, instead of having them work out ways to get people to Mars or find alternative power sources, we had them figure out how to get people to click on ads to make us money.  God bless us, everyone.

But we still had to worry about the operating system.

Cloud PaaS

We took a side look at PaaS for a second, because they said we wouldn't have to care about the OS or the dependencies; they would manage it all for us.  The problems were:

1.  Our applications were artistic snowflakes.  We needed our own special libraries and kernel versions.  You think our applications are generic?  Yeah, PaaS was great if we were setting up a WordPress blog, but we're doing real science here: we're getting people to click on ads for A/B testing.  'Murica.  So your PaaS wasn't good enough for us.  And our guys know how to do it better.

2.  We heard that it wouldn't scale.  Never mind that we were using Python and Ruby, which were never really meant to scale into the atrocious applications that Twitter and others became.  Dynamically typed languages are so easy, so we used them.

So for our one-offs we still use PaaS, but for the most part we still install operating systems at the latest versions, apply security policies and patches, and make sure our dependencies are in place so we can run the applications.

We weren’t supposed to worry about the operating system, but we did.

Containers and Microservices

A cute little whale, handcrafted with love in San Francisco, stole our hearts by making it easy to do things we were able to do years ago.  Immutable architecture, with loads of services in separate pieces, would come our way and save us from the monolith.  "A container a day keeps the monolith away," says the T-shirt they were handing out.  I got tons of cool stickers that made me feel like a kid again, and I plastered them all over my computer.

I started breaking up monoliths.  At Cisco, our giant ordering tool, based on legacy Oracle databases and big-iron servers, was broken up, and each piece was more agile than the last.  We saw benefits.  Mesos and Kubernetes are the answers to managing all of those pieces, and Cisco's Mantl project will even orchestrate that.  It's really cool, actually!

So how do I get a modern microservices application running today?  I create a Dockerfile that starts from an OS image.  Then I run apt-get update and apt-get install to make sure all the dependencies are in place.  I use Mesos or Kubernetes to expose ports for security.  Then I make sure the dependencies are installed in the operating system inside the container.  And we're off.
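A minimal sketch of that flow (the base image, packages, and app files are arbitrary examples, not any particular project):

```dockerfile
# Still an OS image at the bottom of the stack.
FROM ubuntu:14.04

# Still patching and installing packages -- just inside the container now.
RUN apt-get update && apt-get install -y \
    python \
    python-pip \
 && rm -rf /var/lib/apt/lists/*

# app.py and requirements.txt are hypothetical stand-ins for your service.
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt

EXPOSE 8080
CMD ["python", "app.py"]
```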

Mesos even has something called the Data Center Operating System (DCOS).  It runs containers.  But containers still run operating systems.  We're still worrying about the operating system for our applications!

What’s Next? 

We're still crafting operating systems.  We're still managing them.  We've started down a journey of abstraction to deliver applications, but we haven't cracked the final nut: we need to make the operating system irrelevant, just as we've made the hardware and the virtualization platform irrelevant.  The scheduling we're doing across containers used to be something the OS delivered on a single box, but that no longer holds given the distributed nature of applications.

AWS has shown us Lambda, which is a great start in this direction.  It's a system that just executes code: there's no operating system to manage, just a configuration of services.  It's a glimpse into the modern-day art of the possible.  As we break these microservices down further into nano-services, or just function calls, we need to get away from worrying about an operating system and move toward a platform that simply runs the various components of our microservice-ized application.

We've gotten leaner and leaner, and the people who figure this out first, and give applications the best way to run without requiring maintenance of operating systems, will win the next battle of delivering applications.

We abstracted the hardware, the virtualization platform, and our services.  Now it's time to go to eleven: get rid of the operating system.

A Full Bitcoin Client

I've been using Bitcoin for a few years now, but have only used my own wallets, Coinbase, and some other services.  I thought I should build a full client and put it on the network!

I used Metacloud (Cisco OpenStack Private Cloud) and spun up an extra-large Ubuntu 14.04 instance.  I logged in and did the following:
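Roughly, that was installing the build basics and grabbing the source (the full dependency list comes from the build doc in the next step):

```bash
sudo apt-get update
sudo apt-get install -y git build-essential
git clone https://github.com/bitcoin/bitcoin.git
cd bitcoin
```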

From there I looked at doc/build-unix.md and followed the instructions.  They worked perfectly!

Once that is done, you can start the bitcoin client by running:
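Assuming you're still sitting in the source tree after the build:

```bash
./src/bitcoind
```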

It will tell you that you need to set an RPC password and will suggest one for you.  Take the suggestion, create the file ~/.bitcoin/bitcoin.conf, and copy the password in there.  It might look like this:
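(The values below are placeholders; use whatever bitcoind suggested for you.)

```
rpcuser=bitcoinrpc
rpcpassword=<paste the long random string bitcoind suggested>
```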

Now you can start it:
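This time as a background daemon:

```bash
./src/bitcoind -daemon
```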

It will take a while to download the blockchain.  You can watch the blocks arrive in the ~/.bitcoin/blocks directory.

But now we have it: a full node on the Bitcoin network!

Kubernetes on Metacloud (COPC)

The Kubernetes 1.0 launch happened July 21st at OSCON here in Portland, OR, and I was super happy to be in the back of the room picking up loads of free stickers while the big event happened.  I spent the day before at a Kubernetes bootcamp, which was really just a lab on using it on GCE (or GKE for containers), and it was pretty cool.  But I felt I really should do a little more to understand it.

To install Kubernetes on Metacloud (or what Cisco now calls Cisco OpenStack Private Cloud) I'm using CoreOS.  I like CoreOS because it's lightweight and built for containers.  There are a few good guides out there, like the one on DigitalOcean, but they're already pretty outdated (and they're not even a year old!).  Installing Kubernetes on CoreOS on OpenStack is pretty easy now!

I should note that I'm using Cisco OpenStack Private Cloud, but these steps should work with any OpenStack distribution.  I mostly followed the Kubernetes documentation.  (You'll notice on their site that there are no instructions for OpenStack.  I opened an issue, which I hope to help with.)

Anyway, the gist with all the instructions is here, but I'm more of the mindset to use Ansible.

Install Kubernetes

First, download the cloud-init files that Kelsey Hightower created; they make installation super simple.  Get the master and the node files.

We then create a master task that looks something like this:
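(A sketch using the nova_compute module that shipped with Ansible at the time; every {{ }} value is a placeholder from the vars_file or the environment, and master.yml is my assumed filename for the master cloud-config.)

```yaml
# OS_* values come from ~/.bash_profile; the rest come from the vars_file.
- name: launch the kubernetes master
  nova_compute:
    state: present
    login_username: "{{ lookup('env', 'OS_USERNAME') }}"
    login_password: "{{ lookup('env', 'OS_PASSWORD') }}"
    login_tenant_name: "{{ lookup('env', 'OS_TENANT_NAME') }}"
    auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
    name: kube01
    image_name: "{{ coreos_image }}"
    flavor_id: "{{ flavor_id }}"
    key_name: "{{ key_name }}"
    security_groups: default
    nics:
      - net-id: "{{ network_id }}"
    user_data: "{{ lookup('file', 'master.yml') }}"
    wait: "yes"
  register: nova
```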

Here I'm leaning heavily on variables that are defined elsewhere.  Most of them come from a vars_file; the credentials are stored in ~/.bash_profile and so live outside the vars_file.  That's where we keep our username, endpoints, and password.

You'll need a CoreOS image already uploaded to your cloud to use this.  I got mine from here, and then uploaded it with glance.
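Roughly like this, using the old glance CLI (the image URL and flags may have drifted since then):

```bash
wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
bunzip2 coreos_production_openstack_image.img.bz2
glance image-create --name coreos \
  --container-format bare --disk-format qcow2 \
  --file coreos_production_openstack_image.img --is-public True
```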

The user_data points to the cloud-config file created by the Kubernetes community, which configures the parameters required for Kubernetes at boot.

The minion nodes' configuration is similar:
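(Again a sketch with the same placeholders as the master task; the only real differences are the name, the user_data file, and looping over a sequence for multiple nodes.)

```yaml
- name: launch the kubernetes nodes
  nova_compute:
    state: present
    login_username: "{{ lookup('env', 'OS_USERNAME') }}"
    login_password: "{{ lookup('env', 'OS_PASSWORD') }}"
    login_tenant_name: "{{ lookup('env', 'OS_TENANT_NAME') }}"
    auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
    name: "kube{{ item }}"
    image_name: "{{ coreos_image }}"
    flavor_id: "{{ flavor_id }}"
    key_name: "{{ key_name }}"
    security_groups: default
    nics:
      - net-id: "{{ network_id }}"
    user_data: "{{ lookup('file', 'node.yml') }}"
    wait: "yes"
  with_sequence: start=2 end=4 format=%02d
```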

Note that you have to edit the node.yml file to point to the master (kube01 in our example).

At this point I've been a little lazy and haven't done the variable substitution.  Someday I'll get around to that.  But as a hint: since we registered 'nova' in the first task, we can get the master's private IP address from that registered variable:
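(The exact key depends on the module version, so the safest sketch is to dump the registered variable and template whatever address you find into node.yml.)

```yaml
- debug:
    var: nova
```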

Just put that after the creation of the master.

The GitHub repo for this is here.

Using Kubectl

Once our cluster is installed, we can run stuff.  I have a Mac, so I set it up like this, following the instructions here:
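Roughly (the version in the URL is whatever 1.0.x release is current for you):

```bash
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```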

Now we set up the proxy so we can run kubectl against our master:
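One simple way is an SSH tunnel to the master's API port (the floating IP below is a placeholder for kube01's address):

```bash
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<kube01-floating-ip>
```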

Make sure that when you check your path, the kubectl from /usr/local/bin shows up rather than one left over from Google's GCE tooling.

Check that it works by running:
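For example:

```bash
which kubectl        # should be /usr/local/bin/kubectl
kubectl get nodes
```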

Now we can launch something!  Let's use the Hello World example from the Kubernetes documentation site.  Create this file and name it hello-world.yaml:
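(I'm not reproducing the docs example verbatim here; this is a minimal stand-in ReplicationController, and the image and labels are my own choices.)

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: nginx
          ports:
            - containerPort: 80
```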

Then create it:
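```bash
kubectl create -f hello-world.yaml
kubectl get pods
```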

There are several other examples as well, and I encourage you to go through the official guides!  Let me know with a quick hello on Twitter if this was helpful to you!

 

Go with NX-API

I've been working on a project to collect data from Cisco Nexus switches.  I first tackled making SNMP calls to collect counter statistics, but then I thought: why not try it with the NX-API that came with the Nexus 9Ks?

I hoped the documentation for the APIs would be better, but the samples on GitHub are enough to get anybody going… as long as you do it in Python.  But these days the cool systems programmers are moving to Go, for a couple of reasons:

  1. The concurrency capabilities will take your breath away.  Yes, I go crazy with all the goroutines and channels.  They're a really cool feature of the language and great when you want to run a lot of tasks in parallel.  (Like logging into a bunch of switches and capturing data, maybe?)
  2. The binaries can be distributed without any dependency hassle.  Why does this matter?  Well, for example, if I want to run the python-novaclient commands on my machine, I first have to install Python and its dependencies, then run pip to install the packages and their dependencies.  I'm always looking for more lube to make trying new things easier, and static binaries ease the friction.

After playing around with the switch I finally got something working, so I thought I'd share it.  The code I developed is in my sticky pipe project.  For the TL;DR version of this post, check out the getNXAPIData function for the working code.  The rest of this post walks through making a call to the switch.

1.  Get the parameters.

You'll have to figure out a way to get the username and password from the user.  In my program I used environment variables, but you may also want to accept command-line flags.  There are lots of examples of that on the internet, so I won't go into much detail beyond something simple like this:
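(A sketch; the environment variable names are my own, and it assumes "os" and "log" are imported.)

```go
switchURL := os.Getenv("NEXUS_URL") // e.g. http://10.1.1.1
user := os.Getenv("NEXUS_USER")
password := os.Getenv("NEXUS_PASSWORD")
if switchURL == "" || user == "" || password == "" {
	log.Fatal("NEXUS_URL, NEXUS_USER and NEXUS_PASSWORD must be set")
}
```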

2. Create the message to send to the Nexus

The NX-API isn't a RESTful API.  Instead, you send the same Nexus commands you would enter on the command line, and the NX-API responds with the output in JSON.  You can also get XML, but why in the world would you do that to yourself?  XML was cool like 10 years ago, but let's move on, people!  There's also a JSON-RPC format, but I don't see what it gives you over plain JSON other than the ability to order commands by adding flags.  Stick with JSON and your life will not suck.

Here’s how we do that:
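(A sketch rather than my exact code: it marshals a map whose fields mirror the "ins_api" format the NX-API sandbox generates, with "show version" as the example command.)

```go
payload := map[string]interface{}{
	"ins_api": map[string]interface{}{
		"version":       "1.0",
		"type":          "cli_show",
		"chunk":         "0",
		"sid":           "1",
		"input":         "show version",
		"output_format": "json",
	},
}
jsonStr, err := json.Marshal(payload) // needs "encoding/json"
if err != nil {
	log.Fatal(err)
}
```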

This message format seems to handle any command we want to throw at the Nexus.  It's really all you need to send your Go robots forth to manage your world.

3.  Connect to the Nexus Switch

I start by creating a request with http.NewRequest.  Its parameters are as follows (see the sketch after this list):

  • POST – This is the type of HTTP request I’m sending
  • The switch URL – This needs to start with either http or https (I haven't tried https yet), followed by the switch IP address (or hostname), and it has to end with the /ins path.  See the example below.
  • The body of the POST request – This is bytes.NewBuffer(jsonStr), wrapping the payload we created in the previous step.
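Putting those together, a minimal version looks like this:

```go
// POST to http(s)://<switch>/ins with the payload from the previous step.
req, err := http.NewRequest("POST", switchURL+"/ins", bytes.NewBuffer(jsonStr))
if err != nil {
	log.Fatal(err)
}
```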

After checking for errors, we need to set some headers, including the credentials:
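```go
req.SetBasicAuth(user, password)
req.Header.Set("Content-Type", "application/json")
```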

The Content-Type header also tells the switch that we are sending JSON data.

Finally, we make the request and check for errors, etc:
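Something like:

```go
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()
```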

The last line with the defer statement closes the response body after we leave this function.  Running this should actually execute the command you're after.  Closing is important because if you're banging on that switch a lot, you don't want zombie connections clogging your stack.  Let him that readeth understand.

4.  Parse the Output

At this point, we should be able to talk to the switch and have all kinds of stuff show up in resp.Body.  You can see the raw output with something like the following:
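```go
body, err := ioutil.ReadAll(resp.Body) // needs "io/ioutil"
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(body))
```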

But most likely you'll want to pull the information out of the JSON output.  To do that I created a few structures that match what this kind of command responds with nearly every time.  I put those in a separate package called nxapi and use them from my other functions, as shown later.  The structs look something like this:
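(The names here are illustrative reconstructions rather than the exact ones in the repo; Body is a generic map because its keys change with every command.)

```go
// Package nxapi holds types for the "ins_api" response envelope.
package nxapi

type Response struct {
	InsAPI InsAPI `json:"ins_api"`
}

type InsAPI struct {
	Type    string  `json:"type"`
	Version string  `json:"version"`
	Sid     string  `json:"sid"`
	Outputs Outputs `json:"outputs"`
}

type Outputs struct {
	Output Output `json:"output"`
}

type Output struct {
	Input string                 `json:"input"`
	Msg   string                 `json:"msg"`
	Code  string                 `json:"code"`
	Body  map[string]interface{} `json:"body"`
}
```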

These structs map to what the NX-API usually returns in JSON:
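(A trimmed example of the shape, from memory of the sandbox output, with the body cut down to a single key and a made-up hostname.)

```json
{
  "ins_api": {
    "type": "cli_show",
    "version": "1.0",
    "sid": "eoc",
    "outputs": {
      "output": {
        "input": "show version",
        "msg": "Success",
        "code": "200",
        "body": {
          "host_name": "n9k-lab-01"
        }
      }
    }
  }
}
```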

The outputs may also be an array if multiple commands are entered.  (At least that's what I saw in the NX-API developer sandbox.  If there is an error, then instead of a body for the output you'll see "clierror".)

So this should be mostly type-safe.  It may be better to omit Body from the Output struct.

Returning to our main program, we can take the JSON data and unmarshal it into the struct so we can parse through it:
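(A sketch using the structs from above; the host_name key is what "show version" returns in its body.)

```go
var result nxapi.Response
if err := json.Unmarshal(body, &result); err != nil {
	log.Fatal(err)
}

out := result.InsAPI.Outputs.Output
if out.Input == "show version" {
	// Pull the hostname out of the command's body.
	if hostname, ok := out.Body["host_name"].(string); ok {
		fmt.Println("hostname:", hostname)
	}
}
```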

In the code above, I'm looking to parse output from the "show version" command.  When I find that the input was show version, I can use the keys in the body to pull information out of what the switch returned.  In this case the output gives us the hostname of the switch.

Conclusion

This brief tutorial left out all the goroutines and other Go fanciness in order to keep it simple.  Once you have this part down, you're ready to write a serious monitoring or configuration tool.  Armed with this, you can now make calls to the NX-API using Go.  Do me a favor and let me know on Twitter if this was useful to you!  Thanks!