Jenkins Spark integration

You can make Jenkins publish messages to Spark during or after builds. Here's how we do it:

1. Get the Spark room ID that you want to use:
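A quick GET against the Spark rooms endpoint does the trick (a minimal sketch; the token below is a placeholder for your own):

```bash
# List all rooms the authenticated user belongs to
curl -s https://api.ciscospark.com/v1/rooms \
  -H "Authorization: Bearer YOUR_SPARK_TOKEN"
```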

This will give you a list of rooms. From there, find the room you want and grab its ID. It will look something like: Y2lzY29zcGFtxuovL3VzL1JPT00vOWRhMjY1MDAtOWY2Zi0xMWU1LTg0ODQtNzczOTMxZTUxMGE3

2.  In Jenkins we can now notify the room before or after a build by using the execute command plugin. A simple curl command like the one below will notify the Spark room:
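Something along these lines as the build step (the roomId and token are placeholders):

```bash
# Post a message to the Spark room from a Jenkins build step
curl -s https://api.ciscospark.com/v1/messages \
  -X POST \
  -H "Authorization: Bearer YOUR_SPARK_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"roomId": "YOUR_ROOM_ID", "text": "Jenkins: build started for my-project"}'
```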

Just substitute in the roomId and Authorization token. You can also create a separate account under a different email address to act as a 'jenkins user'.

Installing Eris

Eris Industries is another provider of blockchain services and smart contracts.  I had a few issues installing it on Ubuntu even though I followed the guide, so I wanted to outline here how I did it.  The doc will be somewhat terse, but will have all the commands I ran.

Base Operating System

I'm running mine on Ubuntu 14.04. I did this on an internal OpenStack cloud provided by Cisco. I used 4 servers and gave each of them a floating IP address. I should have written an Ansible playbook for this and will do so in the future.

Installation Steps

Update the OS or nothing will work!
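On Ubuntu 14.04 that's the usual:

```bash
sudo apt-get update
sudo apt-get -y upgrade
```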

Install Docker

Update the apt sources list by creating a new source file for Docker:
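For example (the file name here follows the Docker docs of the time and is my assumption):

```bash
sudo vi /etc/apt/sources.list.d/docker.list
```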

The contents should be:
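Something like the trusty line from the Docker apt repo of that era:

```
deb https://apt.dockerproject.org/repo ubuntu-trusty main
```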

Next run the following:
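Roughly the following (docker-engine was the package name at the time; you may also need to add Docker's apt key first, per their docs):

```bash
sudo apt-get update
sudo apt-get install -y docker-engine
```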

Run the following to make sure that it works:
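Docker's hello-world image is the usual smoke test:

```bash
sudo docker run hello-world
```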

Lastly, make sure the current user can control Docker by adding them to the docker group:
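The usual one-liner:

```bash
sudo usermod -aG docker $USER
```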

Install Go (Golang)
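One way to do it on 14.04 is to grab a tarball from golang.org rather than the older Ubuntu package (the version number here is just an example):

```bash
wget https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.5.1.linux-amd64.tar.gz
```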

You'll now need to add the following to the end of your ~/.profile file:
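Something along these lines (assuming the tarball install above and a GOPATH under your home directory):

```bash
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
```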

You may need to log out and log back in to make sure that the docker group is set for the next step.

Install Eris

This is done by issuing the following commands:
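Roughly (this assumes the go get install path that the eris-cli docs used at the time):

```bash
go get github.com/eris-ltd/eris-cli/cmd/eris
eris init
```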

If you have trouble with this step, make sure that you can run docker as the current user (i.e., no sudo required; see the last part of the Install Docker section above). If you have problems with GOPATH, make sure it is set and exported as shown in the previous section.

Now you can start rolling your chains!


Jenkins running in Docker image behind firewall

When you run Jenkins behind a firewall, it still needs to reach the outside world, so you'll have to set up proxies to let this happen. Here's how I make it run:

First, make CoreOS able to get out of your network:
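On CoreOS the Docker daemon runs under systemd, so the proxy goes in a drop-in file (paths follow the standard systemd layout):

```bash
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
```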

Next edit http-proxy.conf and make it look like:
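Something like this, with your own proxy host and port in place of the placeholder:

```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80"
Environment="HTTPS_PROXY=http://proxy.example.com:80"
```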

This will allow docker to get outside the firewall.

Next restart docker:
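Since it's systemd, that's:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```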

Now, on to Jenkins.  Grab Jenkins:
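The official image from Docker Hub works fine:

```bash
docker pull jenkins
```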

Make a persistent directory to store settings (should be a persistent volume mount)
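For example (the path is just a placeholder; the official image runs as UID 1000, hence the chown):

```bash
sudo mkdir -p /var/jenkins_home
sudo chown 1000:1000 /var/jenkins_home
```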

Now run the container with the following JAVA_OPTS flag as documented here.
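A sketch of the run command (the proxy host, port, and volume path are placeholders):

```bash
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v /var/jenkins_home:/var/jenkins_home \
  -e JAVA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=80 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=80" \
  jenkins
```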

You'll obviously want to substitute your own proxy server!

You should now be able to install all the plugins you need!  Hurray!

Ethereum + Docker + Terraform + Packer

I created a quick way to get a private Ethereum cluster up. I'll be presenting some of this at Cisco Live Berlin in the DevNet session I'll be doing with Tom Davies. All of the information is available at my GitHub repo.

The high-level overview is that we want a way to bring up a private Ethereum chain to do our own testing. I seeded the account with lots of ether so that we can write as many transactions to our chain as we like. I'll be exploring smart contracts in my next post, but for now, please check out the instructions at the GitHub repo and let me know how you would make it better!

https://github.com/vallard/ethereum-infrastructure

It’s just lube

When looking to optimize your application deployment cycle, organizations often turn to buzzwords such as CI/CD (Continuous Integration/Continuous Delivery) and DevOps, and invariably to the different products and projects they've heard of that do this. In this buzzword blitz it's easy to get confused about what does what. Today we are hearing: Jenkins, Kubernetes, Mesos, Ansible, Terraform, Travis CI, GitHub, etc. For people who come from a systems background with no development experience this can be a bit daunting.

As I've spoken to enterprise IT organizations, as well as internally to Cisco account managers, I can't help but think how true the diagram that Simon Wardley created is (also, watch this great video from OSCON 2015):

Most enterprise customers are near, at, or past the last stage and are adopting, some faster than others, but all of us know we need to adapt. In the midst of this buzzword blitz I thought I would share a slide I've been presenting that shows what we are looking for when we talk about agile IT from a developer's point of view.

You have to look at it as you would a factory assembly floor.  On one side, you have the raw inputs coming in.  These inputs are shaped into different components so that the end result is something amazing: A new car, an airplane, an iPhone, etc.  All we are trying to do is optimize this assembly process.  That’s the goal of manufacturing and the goal of agile IT.

Putting this in the IT world you have a developer with code as the raw input.  On the other side is some amazing product that delights users.  (Or at least gets them to want to use it and perhaps even pay for it).  In the middle of these two points is friction.  This friction has been bad for a long time, and has really bogged down our application development pipeline.

lube

Agile, DevOps, etc. are just lubricant. These development tools are lube. All you need is lube. People go to the cloud because they want more lube. In the TCO calculations I've done, it almost always works out that using a public cloud is more expensive. People know this. So why do they go there? Two main reasons:

  1. Speed (more lube)
  2. Capabilities (more lube)

The issue is there isn't enough lubricant in IT organizations. Developers and IT organizations have two very conflicting goals. Developers want to create more instability by adding features and trying new systems. On the other end, IT organizations want to create more stability. (See the book Lean Enterprise for more discussion on this.)

The other issue is that developers have won the battle of IT vs. developers. The business knows it needs to digitize to stay relevant and competitive, and it needs creative developers to do that.

Going to the cloud and implementing agile IT methodologies is just lube to get things done faster. When people start company-dropping (like name-dropping, but with new companies you may not have heard of), you just have to remember it's lube. You look at the pipeline and ask them, based on the girl and the elephant picture above: how does this new product/project/company provide lube? From there it's not too hard to understand how one of these things you may not be familiar with works.


Cisco Spark APIs

I'm super excited that Cisco Spark finally gave us some APIs that we can use to develop 3rd-party apps. As I've been doing more development and have several projects to work on, we've been using Cisco Spark for collaboration. Not having APIs was the one thing keeping me off of it for several projects.

Today I started doing the DevNet tutorials, but as I already know how to work with APIs I didn't want to just do the Postman stuff, so I started writing some libraries in Go. I posted what I've done so far on GitHub. There's not too much there yet, but as I start to integrate things I expect I'll have more to do on this.

One thing that has caused me frustration is the inability to delete chat rooms. I have one room that I can't even log in to see what it is. I wish this room would just go away. I'm thinking this may be user error, but I wish it were easier to figure out.

Finally, I should mention the DevNet tutorials are excellent to work with. They have a promotion right now giving away GitHub credits if you finish the 3 tutorials. I can't wait to see what happens next :-)

Finally Finally:  If you have Jenkins integration with Spark already done, please let me know!  I’d like to use it!


AWS Re:Invent 2015

AWS re:Invent was an amazing conference. All of my notes from the events I went to are here. (Scroll down to read the README.md file.)

Just some quick overall thoughts:

1.  Compared to Cisco, AWS really skips out on the food and entertainment.  I mean, come on, we had Aerosmith at Cisco Live and AWS gives us what?  I can’t even remember the name.  Doesn’t really matter, cause I went home that night anyway.

2.  This should be a longer event. There were too many sessions I wanted to attend. I was fortunate enough to attend an IoT bootcamp, and that could easily have gone another day if they had added some analysis. I wish it had.

3.  The announcements never stopped. I lost count around 20, but there were a ton of new features and services. Take Amazon Snowball: $200 to send 50TB into AWS. Best comment on that? It costs $1,500 to move it back out (50,000 GB * $0.03/GB).

4.  The biggest surprise to me was hearing the number of customers that use the Cisco CSR 1000v. It's not my product to know, so I don't feel bad saying this. I didn't think there were so many users of it! Wow. The use case was "transitive routing". Imagine having 3 VPCs, one of which is externally connected. Placing a pair of CSR 1000vs in that externally connected VPC allows the other VPCs to communicate with each other internally using BGP. Pretty cool.

5.  Everyone is in trouble. When Amazon QuickSight was announced I thought: wow, if you're into analytics in the cloud you are in trouble. I don't know which companies may have been affected by that, but I suspect they are the tip of the iceberg. Take New Relic for example. Right now they are doing really well for admin analytics. How long before AWS offers a service to do that?

6.  What I was wondering about was whether they would ever announce some sort of on-prem solution. The closest they got was Amazon Snowball, bless their hearts. It probably doesn't make sense for them to complicate things with that, and it raises the risk of their intellectual property getting loose. After all, these are Linux machines, and if a managed service ran on-prem, it would be easy to get into.

7.  Look out Oracle! Whoa, that was some serious swinging. And Oracle, you have a lot to worry about. First of all, nobody I talk to really likes you. People have nostalgic feelings for Sun, but I've not really talked to people who like Oracle. Perhaps that's because I don't talk to database administrators as much. But guess what? Nobody really likes them either. So you have a hated product run by hated people. It probably won't take long for people to dump that when refresh season comes up.

8.  Lambda. Last year, AWS introduced Lambda. I don't think people really get yet how important Lambda is. It's the glue that makes a serverless architecture in AWS work. "The easiest server to manage is no server," said Werner Vogels. This is the real future of the cloud. As my previous post on getting rid of the operating system said: managing operating systems is the last mile. VMs are a thing of the past. Even containers are less exciting when you think about a serverless architecture. Just a place to execute code and APIs to do all the work. Database, storage, streaming, any service you want is just an API. Where AWS Lambda falls short in my book is that it's limited to AWS services only. Imagine if it could be extended to any cloud service. That to me would be the real Intercloud Cisco dreams about. As more cloud APIs develop, extending Lambda into an "API store" is something more people would find value in. Amazon probably wouldn't, because it means people using non-AWS services. But this is where I would be investing if I were trying to compete against AWS. Nothing else seems to be working.

Anyway, that’s my take.  What did you think?


Using Packer with Cisco Metapod

Packer is such a cool tool for creating images that can be used with any cloud provider. With the same Packer recipe you can create common images on AWS, DigitalOcean, OpenStack, and more! The Packer documentation already comes with an example of running with Metapod (formerly Metacloud).

I wanted to show an example of how I use Packer to create images for both AWS and OpenStack. Here is what it looks like:
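A trimmed-down sketch of the template; the AMI, the Glance image ID, the flavor, and the provisioning script name are all placeholders:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "jenkins-slave {{timestamp}}"
    },
    {
      "type": "openstack",
      "image_name": "jenkins-slave",
      "source_image": "YOUR-GLANCE-IMAGE-ID",
      "flavor": "m1.medium",
      "ssh_username": "ubuntu",
      "floating_ip_pool": "nova"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "install-jenkins-slave.sh"
    }
  ]
}
```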

There are two sections to this file: the builders (AWS and OpenStack) and the provisioners (the code that gets executed on both of them). We specify as many platforms as we have and then tell Packer what to do. When this script finishes, our Jenkins slaves will be ready on all of our platforms. This is super nice because now we just point our robots at whichever system we want and we have automated control!

With Metapod there were a few gotchas:

  • By default instances are not created with public IP addresses, so you need to grab a floating IP for it to work correctly.
  • Packer actually boots a running instance and then logs into it via SSH to configure it, so that public IP is required if you are running Packer from outside the firewall!

Hybrid cloud isn't about moving workloads between two clouds; it's about having tools that can deploy, operate, and tear down workloads on any cloud. Packer is a great foundational tool because it creates the same image in both places.


It’s time to get rid of the operating system

We've abstracted many things, you and I. But it's time to crack the next nut. Let me explain why.

Bare Metal

I started out managing applications on bare metal x86 and AIX servers. In those days we were updating the operating systems to the latest releases, patching with security updates, and ensuring our dependencies were set up in such a way that the whole system would run. In a way, it's pretty amazing that the whole complex stack operated as well as it did. So many dependencies, abstracted away so that we didn't have to worry about the application opening up the token ring port; we could just use a trusted stack to send things on their way. Life was good, but it sucked at the same time: it was slow to provision servers, boot times were atrocious (especially once UEFI made them even longer), and it was monolithic and scary to touch.

I remember one company I visited had their entire business records (not backed up) on one server that was choking under the load. I came in, PXE-booted that server, ran some partimage magic, and migrated it to a bigger, beefier server. It was the first cold migration I had ever performed. I felt like an all-powerful sorcerer. I had intergalactic cosmic powers. High fives all around.

Virtualization

I started virtualization with Xen, then KVM. VMware was something the Windows guys were doing, so I wasn't interested. Then I realized it was based on Red Hat Linux (it has since changed), so my powers were summoned again. I started helping get applications running on virtual machines. I did the same thing: I installed the operating system at the latest release, I applied security patches, and then I made sure the latest dependencies were set up in such a way that the application would run.

But here my magical powers were obsolete. People were desensitized to vMotion demos. Oh, you had to shut off that machine to migrate it to another server? We can migrate it while it's still running. Watch this, you won't even drop a ping. People started making the case that hardware was old news. Why do you care which brand it is?

We could make new VMs quickly, use our servers more efficiently (remember how much people said your power bill would drop?), and maintenance of hardware was easier.

All our problems were solved, as long as we paid VMware our dues.

Cloud IaaS

But VMware was for infrastructure guys. The new breed of hipsters and brogrammers were like: I don't care what my virtualization is, nor my hardware; as long as it's got an API, I can spin it up. So we started getting our applications working on AWS. But here we systems dudes started abusing Ruby to create super cool tools like Puppet and Chef and started preaching the gospel of automation. And so when I would get applications running on the cloud, I would install the latest operating system, apply security patches, make sure my latest application dependencies were there, and then run the operating system.

My magical powers of scripting came back in force.  I was a scripting machine.  Now I didn’t care about the hardware, nor the virtualization platform.  I just got things working.  All the problems were solved as long as I paid my monthly AWS bill.

Adrian Cockcroft told me and thousands of my closest friends at conferences that I didn't have to worry about the cost, because I needed to optimize for speed. If I optimized for speed, projects would get done ahead of time, so I would save money, and because my projects were iterating quickly I would make more money. We took our scripts and fed them to robots like Jenkins so we could try all our experiments. We would take the brightest minds of our day and, instead of having them work out ways to get people to Mars or find alternate power sources, we would have them figure out how to get people to click on ads to make us money. God bless us, everyone.

But we still had to worry about the operating system.

Cloud PaaS

We took a side look at PaaS for a second, because they said we wouldn't have to care about the OS or the dependencies; they would manage those for us. The problems were:

1.  Our applications were artistic snowflakes. We needed our own special libraries and kernel versions. You think our applications are generic? Yeah, it was great if we were setting up a WordPress blog, but we're doing real science here: we're getting people to click on ads for A/B testing. 'Murica. So your PaaS wasn't good enough for us. And our guys know how to do it better.

2.  We heard that it wouldn't scale. Never mind that we were using Python and Ruby, which were never really meant to scale into the atrocious applications that Twitter and others became. Dynamically typed languages are so easy, so we used them.

So for our one-offs we still use PaaS, but for the most part we still install operating systems at the latest versions, apply security policies and patches, and make sure our dependencies are in place so we can run the applications.

We weren’t supposed to worry about the operating system, but we did.

Containers and Microservices

A cute little whale, handcrafted with love in San Francisco, came along and stole our hearts by making it easy to do things that we were able to do years ago. The immutable architecture, with loads of services in separate pieces, came our way to save us from the monolith. "A container a day keeps the monolith away" is what it says on the T-shirt they were handing out. I got tons of cool stickers that made me feel like a kid again, and I plastered them all over my computer.

I started breaking up monoliths. At Cisco, our giant ordering tool based on legacy Oracle databases and big iron servers was broken up, and each piece was more agile than the next. We saw benefits. Mesos and Kubernetes are the answers to managing them, and Cisco's Mantl project will even orchestrate that. It's really cool, actually!

So how do I get a modern microservices application running today? I create a Dockerfile that has the OS. Then I do apt-get update to make sure all the dependencies are in place. I use Mesos or Kubernetes to expose ports for security. Then I make sure the dependencies are installed in the operating system inside the container. And we're off.
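To make that concrete, here's a hypothetical Dockerfile for one of these services (the base image, packages, and service name are placeholders):

```dockerfile
FROM ubuntu:14.04

# The operating system still comes along for the ride,
# and we still patch and manage it inside the container
RUN apt-get update && apt-get install -y \
    python \
    python-pip

COPY . /app
RUN pip install -r /app/requirements.txt

# Mesos or Kubernetes maps this port for us
EXPOSE 8080
CMD ["python", "/app/service.py"]
```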

Mesosphere even built something called the Data Center Operating System (DC/OS) on top of Mesos. It runs containers. But containers still run operating systems. We're still worrying about the operating system for our applications!

What’s Next? 

We're still crafting operating systems. We're still managing them. We've started down a journey of abstraction to deliver applications, but we haven't cracked the final nut: we need to make the operating system irrelevant, just as we've made the hardware and the virtualization platform irrelevant. The scheduling we now do across containers used to be something the OS delivered on a single box, but that's no longer enough given the distributed nature of applications.

AWS has shown us Lambda, which is a great start in this direction. It's a system that just executes code. There's no operating system to manage, just a configuration of services. It's a glimpse into what the new modern-day art of the possible will be. As we break these microservices down further into nanoservices or just function calls, we need to get away from worrying about an operating system and toward a platform that simply runs the various components of our microservice-ized application.

We've gotten leaner and leaner, and the people who figure this out first, and give applications the best place to run without requiring maintenance of operating systems, will win the next battle of delivering applications.

We abstracted the hardware, the virtualization platform, and our services. Now it's time to go to eleven: get rid of the operating system.

A Full Bitcoin Client

I’ve been using bitcoin for a few years now but have only used my own wallets, Coinbase, and some other stuff.  I thought I should make a full client and put it on the network!

I used Metacloud (Cisco OpenStack Private Cloud) and spun up an extra large Ubuntu 14.04 instance that we had!  I logged in and did the following:
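Roughly the following, just enough to pull down the source:

```bash
sudo apt-get update
sudo apt-get install -y git
git clone https://github.com/bitcoin/bitcoin.git
cd bitcoin
```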

From there I looked at the doc/build-unix.md and followed the instructions. They worked perfectly!

Once that is done, you can start the bitcoin client by running:
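Assuming you built in place without running make install, the daemon lives under src/:

```bash
./src/bitcoind
```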

It will tell you that you need to set a password and will suggest one for you.  Take the suggestion and create the file ~/.bitcoin/bitcoin.conf and copy the password in there.  It might look like:
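Something like this, with the placeholder replaced by the password it suggests:

```
rpcuser=bitcoinrpc
rpcpassword=THE_LONG_RANDOM_PASSWORD_IT_SUGGESTED
```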

Now you can start it:
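This time with the -daemon flag so it keeps running in the background:

```bash
./src/bitcoind -daemon
```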

This will take a while, since it downloads the entire blockchain. You can watch the blocks arrive in the ~/.bitcoin/blocks directory.

But now we have it! A node on the bitcoin network!