Jumphost SSH config

Oh, you have a jumphost?  And you want to reach another server that sits behind it on a private network?  If both servers are configured with the same key, this can be accomplished with an SSH config file entry like the one below:
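Something like this (the hostnames, user, and private IP here are placeholders; jump.amster is the jumphost and amster is the server behind it):

```
Host jump.amster
    HostName jump.example.com
    User vallard

Host amster
    HostName 10.1.1.10
    User vallard
    ProxyCommand ssh -W %h:%p jump.amster
```

The ProxyCommand line tells SSH to open the connection to amster through jump.amster.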

To get to the amster server I now just run:
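```shell
ssh amster
```

SSH handles the hop through the jumphost transparently.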

It tunnels through the jump.amster server and gives you instant access.

Cassandra Startup with Docker and Golang

I use Docker:
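Something like the following (the host data directory is just an example path; -P publishes the container's exposed ports to random host ports):

```shell
# Launch a single Cassandra node with its data directory on the host
docker run --name cassandra -d \
  -v /data/cassandra:/var/lib/cassandra \
  -P cassandra
```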


(Notice that using boot2docker or Docker Machine on VirtualBox doesn’t work very well, because the directories are created under a different user name.  You have two choices: don’t mount persistent directories with boot2docker, or log into the VirtualBox guest (docker-machine ssh dev) and launch the container there.)

Make sure it’s up:
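A quick check; you should see the cassandra container running with its port mappings:

```shell
docker ps
```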


Now let’s create a keyspace (a database, in SQL terms).  Connect to Cassandra on whatever host port the container’s 9042 is mapped to:
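To find the mapping (the container name comes from the run command above; your port number will differ):

```shell
docker port cassandra 9042
```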

Here we see that port 9042 is mapped to 32769.  To connect we run:
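In my case the Docker host is a docker-machine VM, so the IP below comes from docker-machine ip dev; substitute your own host and mapped port:

```shell
cqlsh 192.168.99.100 32769
```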

Now we can create the keyspace:
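From the cqlsh prompt (the keyspace name test is just an example):

```
CREATE KEYSPACE test
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
```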


Now let’s use it with a Go program.

Our code is pretty simple:
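Here is a minimal sketch using the gocql driver (github.com/gocql/gocql).  The host, port, keyspace, and table names are all assumptions from my example setup above; substitute your own:

```go
package main

import (
	"fmt"
	"log"

	"github.com/gocql/gocql"
)

func main() {
	// 192.168.99.100:32769 is the docker-machine host and mapped port
	// from the earlier steps; "test" is the example keyspace.
	cluster := gocql.NewCluster("192.168.99.100")
	cluster.Port = 32769
	cluster.Keyspace = "test"

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Create a table, insert a row, and read it back.
	if err := session.Query(`CREATE TABLE IF NOT EXISTS users (id int PRIMARY KEY, name text)`).Exec(); err != nil {
		log.Fatal(err)
	}
	if err := session.Query(`INSERT INTO users (id, name) VALUES (?, ?)`, 1, "vallard").Exec(); err != nil {
		log.Fatal(err)
	}

	var name string
	if err := session.Query(`SELECT name FROM users WHERE id = ?`, 1).Scan(&name); err != nil {
		log.Fatal(err)
	}
	fmt.Println("got:", name)
}
```

Run go get github.com/gocql/gocql first to pull down the driver.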

If that worked, then you created an entry and got data back.  You are now off to the races writing Go code against a Cassandra database.

TODO:

  1. Scale out to multiple cassandra servers
  2. Show how this works in mantil.io

Ethereum Contracts

I’ve been working with Eris to manipulate “smart contracts” (smart contracts are neither smart, nor are they contracts.  They’re just code written to the blockchain that stores values that can be changed with the right permissions).

Eris has a pretty good ‘eris-by-example’ script that shows how you can do a lot of it with curl.  My objective was not to use the Eris tools, but to make it a bit more generic for use with any Ethereum contract.

I’ll gloss over creating the contract; essentially, I just created a quick contract called ‘echo’ and compiled it with the Solidity compiler.  (Yes, I know the compiler moved, but the new link didn’t work for me.)

My code is pretty simple and borrowed from the millions of other echo “hello world” like contracts where it stores a string and an integer:
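It looked something along these lines.  This is a reconstruction, not the original source: only get_s is named later in the post, so the other function and variable names are my guesses:

```solidity
contract echo {
    uint amount;
    string s;

    function get_amount() constant returns (uint) {
        return amount;
    }

    function get_s() constant returns (string) {
        return s;
    }

    function set_s(string _s) {
        s = _s;
    }
}
```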

After submitting the transaction I get addresses as to where it was submitted.  I’ll gloss over that detail as I’m interested in the other part of this and that is how to interact with the data.

The Solidity compiler shows that when it compiles the functions to EVM bytecode it creates hashes of the function names themselves.  The documentation shows that to call a function you do it by its selector, ‘derived as the first 4 bytes of the Keccak hash of the ASCII form of the signature’.  Luckily the Solidity compiler gives this value to you.
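For example (echo.sol is my contract file; the selectors shown are the ones I got for my contract, and get_amount is my guess at the first function’s name):

```shell
$ solc --hashes echo.sol
d321fe29: get_amount()
75d74f39: get_s()
```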

Now you just need to find the address of where the function lives.

In my app I can now call:
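A sketch of the call; the port is what my eris-db instance listens on, the payload shape follows the eris-by-example script, and the addresses are placeholders:

```shell
curl -X POST http://localhost:1337/call \
  --data '{
    "fromAddress": "<validator-address>",
    "toAddress":   "<contract-address>",
    "data":        "d321fe29"
  }'
```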

Let’s break this down.  The ‘fromAddress’ is one of the validators that I created.  The ‘toAddress’ is the address where the contract lives.  To get the amount value, the Solidity compiler told us the function is called by passing in ‘d321fe29’.  That then returns the following:
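The interesting part of the response is the return field; the amount comes back as a single 32-byte word, shown here for a stored value of 100:

```
{
  "return": "0000000000000000000000000000000000000000000000000000000000000064"
}
```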

Our value is encoded in hex, and everyone knows that 64 is actually just 100.
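If hex isn’t second nature, the shell can do the conversion for you:

```shell
# 0x64 in decimal
printf '%d\n' 0x64
# prints 100
```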

We can do the same thing to get the string value calling the get_s function:

Using the above command but substituting in 75d74f39, we get the following:

To convert the returned hex into a string we look at the output.  The first 32-byte word shows a 2.  This means that the data comprises the next two 32-byte words.  Decoding the last part:
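Generically, you can strip the zero padding and turn the remaining hex bytes back into ASCII with xxd.  The hex string here is just “Hello” as an example; substitute the data words from your own return value:

```shell
printf '48656c6c6f' | xxd -r -p
# prints Hello
```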

Nice message.

Now, how do we change this value?  If we had a set_amount function, we would just append the value after the function selector.  Since we only have a set-string function, we have to encode it dynamically: figure out what our string will be, then pad it with enough 0s.  Strings are ‘dynamic types’ as specified in the documentation, so we have to add a few more fields.  First, the function:

Let’s first make a string:

This will fit in one 32-byte word (64 hex characters), so we pad it with 0s to the end and prepend a 32-byte word that gives the offset.  The argument to the function then looks as follows:
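A sketch of the padding, using ‘hello world’ as a stand-in for whatever string you made (xxd -p hex-encodes the string; printf left-justifies it to 64 characters and tr turns the space padding into zeros):

```shell
hex=$(printf 'hello world' | xxd -p | tr -d '\n')
padded=$(printf '%-64s' "$hex" | tr ' ' '0')
echo "$padded"
```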

Adding this to the one command:

The output will have changed nothing.  Why?  From the README we read that:

“It is possible to ‘query’ contracts using the /call endpoint.  Such queries are only ‘simulated calls’, in that there is no transaction (or signature) required, and hence they have no effect on the blockchain state.”

So to change this for reals we need to actually sign our call.  I’ve tried several methods, but at present Eris only returns garbage.  For example, here’s a command I ran:

The output was garbled guck.  At this point I’m stuck on this effort, but thought I would at least show where I am.  Using the JSON-RPC directly is perhaps too low level; ideally you would use the Eris JavaScript library (eris-contracts).  My hope instead was to use the RPC API to accomplish this.  Perhaps more to come!


Jenkins Spark integration

You can make Jenkins publish messages to Spark during or after builds.  Here’s how we do this:

1. Get the Spark room ID that you want to use:
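Something like the following; substitute your own access token:

```shell
curl -s https://api.ciscospark.com/v1/rooms \
  -H "Authorization: Bearer <your-access-token>"
```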

This will give you a list of rooms.  From here, you can find the room you want and get its ID, something like: Y2lzY29zcGFtxuovL3VzL1JPT00vOWRhMjY1MDAtOWY2Zi0xMWU1LTg0ODQtNzczOTMxZTUxMGE3

2.  In Jenkins, we can now notify the room before or after the build by using the execute command plugin.  A simple curl command like the one below will notify the Spark room:
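For example (the roomId, token, and message text are placeholders):

```shell
curl -s -X POST https://api.ciscospark.com/v1/messages \
  -H "Authorization: Bearer <your-access-token>" \
  -H "Content-Type: application/json" \
  -d '{"roomId": "<roomId>", "text": "Jenkins: build started"}'
```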

Just substitute in the roomId and Authorization token.  You can also register a person with a different email account to make a ‘jenkins user’.

Installing Eris

Eris Industries is another provider of blockchain services and smart contracts.  I had a few issues installing it on Ubuntu even though I followed the guide, so I wanted to outline here how I did it.  The doc will be somewhat terse, but will have all the commands I ran.

Base Operating System

I’m running mine on Ubuntu 14.04.  I did this on an Internal OpenStack cloud provided by Cisco. I used 4 servers and provided them with a floating IP address.  I should have done an ansible playbook for this and will do this in the future.

Installation Steps

Update the OS or nothing will work!
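```shell
sudo apt-get update
sudo apt-get -y upgrade
```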

Install Docker

Update the apt sources list by creating a file for the Docker repo.

The contents should be:
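In my case (the path and repo line are the standard ones for the docker-engine package on Ubuntu 14.04 “trusty”; double-check against the current Docker docs):

```
# /etc/apt/sources.list.d/docker.list
deb https://apt.dockerproject.org/repo ubuntu-trusty main
```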

Next run the following:
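```shell
sudo apt-get update
sudo apt-get install -y docker-engine
```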

Run
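```shell
sudo docker run hello-world
```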

To make sure that it works.  Lastly, make sure the current user can control Docker by adding them to the docker group:
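```shell
sudo usermod -aG docker $USER
```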

Install Go (Golang)

You’ll now need to add the following to your ~/.profile file at the end:
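Assuming Go was unpacked into /usr/local/go and your workspace lives in ~/go:

```shell
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
```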

You may need to log out and log back in to make sure that the docker group is set for the next step.

Install Eris

This is done by issuing the following commands
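At the time I did this, the Eris docs installed the CLI with go get (the import path may have changed since):

```shell
go get github.com/eris-ltd/eris-cli/cmd/eris
eris init
```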

If you have trouble with this step, make sure that you can run docker as the current user (e.g., no sudo required; see the last part of the Install Docker section above).  If you have problems with GOPATH, make sure it is set and exported as shown in the previous section.

Now you can start rolling your chains!


Jenkins running in Docker image behind firewall

When you run Jenkins behind a firewall, it needs a way to get out.  You’ll have to set up proxies to make this happen.  Here’s how I do it:

First, make CoreOS able to get out of your network.  Docker picks up its proxy settings from a systemd drop-in, so create the directory for it:
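```shell
sudo mkdir -p /etc/systemd/system/docker.service.d
```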

Next, edit http-proxy.conf and make it look like this:
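Here proxy.example.com stands in for my proxy; use yours:

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"
```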

This will allow docker to get outside the firewall.

Next restart docker:
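```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```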

Now, on to Jenkins.  Grab Jenkins:
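```shell
docker pull jenkins
```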

Make a persistent directory to store settings (should be a persistent volume mount)
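The official image runs Jenkins as UID 1000, so make sure the directory is writable by that user (the path is an example):

```shell
sudo mkdir -p /var/jenkins_home
sudo chown 1000 /var/jenkins_home
```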

Now run the container with the following JAVA_OPTS flag as documented here.
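Something like the following, with my proxy swapped out for a placeholder:

```shell
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v /var/jenkins_home:/var/jenkins_home \
  --env JAVA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=80 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=80" \
  jenkins
```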

You’ll obviously want to use your proxy server instead of mine!

You should now be able to install all the plugins you need!  Hurray!

Ethereum + Docker + Terraform + Packer

I created a quick way to get a private Ethereum cluster up.  I’ll be presenting some of this at Cisco Live Berlin in the DevNet session I’ll be doing with Tom Davies.  All of the information is available in my GitHub repo.

At a high level, we want a way to bring up a private Ethereum chain to do our own testing.  I seeded the account with lots of ether so that we can write an unlimited number of transactions to our chain.  I’ll be exploring smart contracts in my next post, but for now, please check out the instructions at the GitHub site and let me know how you would make it better!

https://github.com/vallard/ethereum-infrastructure

It’s just lube

When looking to optimize their application deployment cycle, organizations often turn to buzzwords such as CI/CD (Continuous Integration/Continuous Delivery), DevOps, and invariably the different products/projects they’ve heard of that do this.  In this buzzword blitz it’s easy to get confused as to what does what.  Today we are hearing: Jenkins, Kubernetes, Mesos, Ansible, Terraform, Travis CI, GitHub, etc.  For people who come from a systems background with no development experience this can be a bit daunting.

As I’ve spoken to enterprise IT organizations as well as internally to Cisco account managers I can’t help but think how true the diagram that Simon Wardley created is (also, watch this great video from OSCON 2015):

Most enterprise customers are near, at, or past the last stage and are adopting.  Some faster than others, but all of us know we need to adapt.  In the midst of this buzzword blitz I thought I would present a slide I’ve been sharing that shows what we are looking for when we talk about agile IT from a developer’s point of view.

You have to look at it as you would a factory assembly floor.  On one side, you have the raw inputs coming in.  These inputs are shaped into different components so that the end result is something amazing: A new car, an airplane, an iPhone, etc.  All we are trying to do is optimize this assembly process.  That’s the goal of manufacturing and the goal of agile IT.

Putting this in the IT world you have a developer with code as the raw input.  On the other side is some amazing product that delights users.  (Or at least gets them to want to use it and perhaps even pay for it).  In the middle of these two points is friction.  This friction has been bad for a long time, and has really bogged down our application development pipeline.


Agile, DevOps, and the rest are just lubricant.  These development tools are lube.  All you need is lube.  People go to the cloud because they want more lube.  In the TCO calculations I’ve done, it always seems to work out that using a public cloud is more expensive.  People know this.  So why do they go there?  The two main reasons:

  1. Speed (more lube)
  2. Capabilities (more lube)

The issue is there isn’t enough lubricant in IT organizations.  Developers and the IT organizations have two very conflicting goals.  The developers want to create more instability by adding features and trying new systems.  On the other end, IT organizations want to create more stability.  (see the book Lean Enterprise for more discussion on this)

The other issue is that developers have won the battle of IT vs. Developers.  Business knows they need to digitize to become relevant and stay competitive and they need creative developers to do that.

Going to the cloud and implementing agile IT methodologies is just lube to get things done faster.  When people start company-dropping (like name-dropping, but with new companies you may not have heard of), just remember: it’s lube.  You look at the pipeline and ask them, based on the girl and the elephant picture above: how does this new product/project/company provide lube?  From there it’s not too hard to understand how one of these things you may not be familiar with works.


Cisco Spark APIs

I’m super excited that Cisco Spark finally gave us some APIs that we can use to develop 3rd-party apps.  As I’ve been doing more development and have several projects going, we’ve been using Cisco Spark for collaboration.  Not having APIs was the one thing keeping me off of it for several projects.

Today I started doing the DevNet tutorials, but since I already know how to work with APIs I didn’t want to just do the Postman stuff, so I started writing some libraries in Go.  I posted what I’ve done so far on GitHub.  There’s not too much there yet, but as I start to integrate things I expect I’ll have more to do on this.

One thing that has caused me frustration is the inability to delete chat rooms.  I have one room that’s up that I can’t even log in to see what it is.  I wish this room would just go away.  It may be user error, but I wish I could figure it out more easily.

Finally, I should mention the DevNet tutorials are excellent to work with.  They have a promotion now of giving away Github credits if you finish the 3 tutorials.  I can’t wait to see what happens next :-)

Finally Finally:  If you have Jenkins integration with Spark already done, please let me know!  I’d like to use it!


AWS Re:Invent 2015

AWS was an amazing conference.  All of my notes of the events I went to are here. (Scroll down to read the README.md file)

Just some quick overall thoughts:

1.  Compared to Cisco, AWS really skimps on the food and entertainment.  I mean, come on, we had Aerosmith at Cisco Live and AWS gives us what?  I can’t even remember the name.  Doesn’t really matter, cause I went home that night anyway.

2.  This should be a longer event.  There were too many sessions I wanted to attend.  I was fortunate enough to attend an IoT bootcamp, and that could have easily gone another day if they had added some analysis.  I wish it had.

3.  The announcements never stopped.  I lost count around 20, but there were a ton of new features and services.  Take Amazon Snowball: $200 to send 50TB into AWS.  Best comment on that?  It costs $1,500 to move it back out (50,000 GB * $0.03/GB).

4.  The biggest surprise to me was hearing the amount of customers that use the Cisco CSR1000v.  It’s not my product to know, so I don’t feel bad saying this.  I didn’t think there were so many users of it!  Wow.  The use case was “Transitive Routing”.  Imagine having 3 VPCs.  One of them is externally connected.  Placing one pair of CSR 1000vs in that externally connected VPC allows for the other VPCs to communicate to each other using BGP internally.  Pretty cool.

5.  Everyone is in trouble.  When Amazon QuickSight was announced I thought: wow, if you’re into analytics in the cloud, you are in trouble.  I don’t know which companies may have been affected by that, but I suspect they are the tip of the iceberg.  Take New Relic, for example.  Right now they are doing really well with admin analytics.  How long before AWS launches a service to do that?

6.  What I was wondering is whether they would ever announce some sort of on-prem solution.  The closest they got was Amazon Snowball, bless their hearts.  It probably doesn’t make sense for them to complicate things with that, and it leads to more chance of intellectual property getting loose.  After all, these are Linux machines, and if a managed service ran on-prem, it would be easy to get into.

7.  Look out, Oracle!  Woah, that was some serious swinging.  And Oracle, you have a lot to worry about.  First of all, nobody I talk to really likes you.  People have nostalgic feelings for Sun, but I’ve not really talked to people who like Oracle.  Perhaps that’s because I don’t talk to database administrators as much.  But guess what?  Nobody really likes them either.  So you have a hated product run by hated people.  It probably won’t take long for people to dump that when refresh season comes up.

8.  Lambda.  Last year, AWS introduced Lambda.  I don’t think people really get yet how important Lambda is.  It’s the glue that makes a serverless architecture in AWS work.  “The easiest server to manage is no server,” said Werner Vogels.  This is the real future of the cloud.  Like my previous post on getting rid of the operating system said: managing operating systems is the last mile.  VMs are a thing of the past.  Even containers are less exciting when you think about a serverless architecture.  Just a place to execute code and APIs to do all the work.  Database, storage, streaming, any service you want is just an API.  Where AWS Lambda falls short, in my book, is that it’s limited to AWS services.  Imagine if it could be extended to any cloud service.  That, to me, would be the real Intercloud Cisco dreams about.  As more cloud APIs develop, extending Lambda into an “API Store” is something more people would find value in.  Amazon probably wouldn’t, because it means people using non-AWS services.  But this is where I would be investing if I were trying to compete against AWS.  Nothing else seems to be working.

Anyway, that’s my take.  What did you think?