Sailing the Seas of Cryptocurrencies: Sharks and Landmines

I've been into Bitcoin and cryptocurrencies since late 2013.  One of the fun things I did back in 2015 was invest $100 (at the time) of Bitcoin into a presale funding project called Augur.  The idea behind Augur was to build a prediction market.  It was one of the first of what have become known as ICOs (Initial Coin Offerings), which are like IPOs but just give you tokens.  In the case of Augur you get REP, which stands for reputation.  The project isn't live yet, but when it launches it will let people bet on predictions of what will happen, and then people with REP can report what really happened and verify that outcome.  Sort of like gambling, I suppose.  (And if you are investing anything in crypto you are basically gambling, so if you take my advice, only invest what you can afford to lose, because you could lose it all.)

Anyway, I didn't think much about Augur and let it just ride, but then with all the hype that cryptocurrencies have been generating lately I thought: what the heck, let's see what is going on with this project.  First of all, what I realized is that everyone and their grandma is creating an ICO, and really the whole basis of Ethereum is to help people create their own ICOs.  I still don't know of a killer app that has come out of it, but maybe that's because I'm still in the dark.  My own belief is that this world is still very, very early and much work has to be done before it really gets to the masses.

I had forgotten that when I registered for Augur, it put the tokens in an Ethereum account.  I had also forgotten where that was.  So after looking around a bit, I saw that it was on myetherwallet.com and my address was

0xa0F5b92cb76372957A866660893491234485C0a5

Going to that website you can see the transactions that have since occurred, and looking at the token transfers, you can see that $100 got me 158.3 Augur tokens.  Before July 11th, there was no ETH in there, just REP.

All was well and good, and then, to keep more tabs on what was going on, I joined the Augur.net Slack channel.  There I got a nice message that I should enable two-factor authentication on my Ethereum account.  Well, that is a nice reminder!  (Or so I thought.)  So, stupidly, I clicked on the link and it took me to a site that looked just like myetherwallet.com, only this place was https://myetherwallet.com.de <- DO NOT GO HERE!  Phishing site!

After entering my private key and getting an error message, I realized what I had done!  I'd been phished!  But were the tokens still there?

Going to the real site, https://myetherwallet.com, I saw that they were.  What a relief!  Now I needed to generate a new wallet and move the REP over there before the phishing nasties did it first.  When I tried, I realized I didn't have any ETH in the account to pay for the transaction.  Dang it!  So I moved some from another account into that account to fund it.  After stumbling around that for a little bit, I moved all the REP over.  Once that was confirmed, I moved the last of the ETH out of the account.

At this point the tokens seem to be safe, but that was pretty stressful for a bit there!


Serverless Computing: How did we get to now?

This is a story of the state of where we are in the world of containers, serverless, and whatever else you want to call this mess.

The story involves three groups of people with their own passions, opinions, and modes of getting stuff done.  We're getting to a point where they are now starting to see things the same way (or at least getting closer).  That is the really exciting part about where we are today.

The people in this story are:

  • The Infrastructure people running apps.
  • Developers writing backend, enterprise, and other cool things for the cloud.  SaaS developers?
  • The Mobile Developers.

This story talks about how all their paths collided and created the jumbled mess of glory that we have today.

Infrastructure People

This group of people used to be called system administrators back when I was a lad.  But that is so uncool.  They now call themselves Full Stack Engineers, or Site Reliability Engineers, which basically means they are system administrators who know how to write code.  Most of the good system administrators I used to work with assumed you wrote gnarly bash scripts back in the day, but apparently that practice was forgotten, so now that it's back in vogue the job description needed to change.  We don't want point-and-click administrators, we want hacker administrators who can work on our full stack.  Whatever.

In the beginning there was your data center, or place where you hosted your machines.

Then came the cloud.  And the cloud was vague.  Larry Ellison saw the cloud and said it was gibberish, made no sense, and made a chauvinistic comment about women's fashion.  Anyway.

The analysts came and said the cloud was actually three things: IaaS, PaaS, and SaaS.  With IaaS, you did everything but the hardware, and with SaaS you just consumed the software.  PaaS was a strange beast in the middle that was never really defined.  People would just say: "You know, like Heroku, or Beanstalk."  They were people with opinions telling you they could take your IaaS to the next level.  But it was still weird, and no matter what anyone said, it was vague.  Sure, NIST got involved and cleared all these definitions up, but there was still a lot of wiggle room in what a PaaS was.

Then came Docker in 2013.  Docker's technology wasn't new.  It was just nicely packaged.  With a cute, friendly whale.  Docker actually started as a PaaS company called dotCloud (not a T-shirt company, despite all the swag), but then it took Linux namespaces, cgroups, and a union filesystem and made them fun to work with.

But people said: it's hard to manage all these containers.  Because if I have one container on a VM, no problem.  But what if I have 4?  And multiple servers running multiple containers?  Container sprawl!  Port sprawl!  Agh!

So other people said: we will take care of this problem!  And they cobbled existing open source tools together to run it all.  You can use Mesos, Marathon, Consul, etc… Ugh.

Meanwhile Docker said: hey, we're still here!  We created Swarm to run it on multiple nodes.  Hurray for Docker!

Then in 2014 Google said: you know, we've actually been running containers since forever and we know how to do it pretty well.  Anyway, we noticed that we're having a hard time getting people to notice our superior cloud platform.  It is better than Amazon's in every way, except you're not smart enough to see why.  Typical you.  Anyway, here is something called Kubernetes that will hopefully get people to notice our cloud.  It is our gift… sort of.

And Kubernetes is awesome.  And people were like: wow, this is how we can all manage containers.  So people jumped on it.  A community was born!  People complained: Docker is too restrictive!  It won't accept my pull request!  It's too monolithic.  Whatever.  Poor Docker.

So then the old PaaS vendors with their opinions changed and produced another opinion: you know, Kubernetes is a project, so if you want to run it the best way, run it on our product.  And so they started having opinions about Kubernetes.  And OpenShift, Cloud Foundry, Apprenda, Tectonic, Rancher, etc. all offered this to you for a reasonable price and a chance to feel like you were one of the cool kids.

Meanwhile Docker said: we have our own product called Docker Datacenter.  And oh… Kubernetes does that?  OK, we'll add that in.  And we are also very secure!

So that's where the infrastructure people are right now.  PaaS is basically container stuff.  Nothing else really matters.

Developers

These cool companies that had been around for a while, perhaps "born in the cloud," started saying: you know, this IaaS stuff is working pretty well for us, but we have jobs that do other things.  You know, solving the real problems that plague society, as Silicon Valley is known to do today: how can I spy on my old girlfriend from high school?  How can I tell people that I'm having a rough day?  How can I exploit taxi drivers and then replace them with machines someday?  These are the issues, people.

So let's imagine that someone uploads a photo to our super amazing site that lets you share photos "with people that you care about."  They don't want to maintain VMs for this.  They want something like PaaS, but they don't want to manage containers.  So Amazon said in November 2014: hey, we have this thing called Lambda and it just executes functions in response to events.  So if someone uploads a photo, it will call this function.  We'll package your function, run it in a container, and manage it all for you.  Magical!

Pause here and let us all praise Amazon:  Oh AWS, you are so magical, so innovative, so insanely focused on your customers!  How shall we praise thee?  Selah!

Developers love it because now we can write entire applications without creating virtual machines!  Wahoo!  We've finally freed ourselves from the shackles of the operating system!  No more patching.  It's all the responsibility of the cloud providers.

Cloud providers are happy to provide it because now you will use more of their services (we've got you trapped!) and free up the VMs that you customers were letting idle and waste away anyway.

Mobile Developers

But it turns out that AWS wasn't really listening to its customers as fast as you would think.  It only started to once some other threats emerged.  Kudos to AWS for being aware of the threats; some companies I've worked for haven't been as astute.  Back when mobile development started, a two-person shop would start working on the next killer mobile app.  This was about 2008, and the mobile developers would spend their time working on the front end and making it all work awesomely.  But then they started to realize: hey, we could do a lot more cool things (like track our customers and steal their privacy) if we could upload this app information to a cloud platform.  But back in those days they couldn't afford a system administrator (oh sorry, full stack reliability engineer), so what were they to do?

Two great companies were formed to solve this problem, and others have emerged since.  Parse.com and Firebase were created in 2011.  Parse was bought by Facebook and Firebase by Google.  These companies offered mobile app developers a dashboard that basically offered SaaS to developers.  These services back then were called Backend as a Service.  And what more is serverless than creating an application that runs in the cloud?  Function as a Service is just the glue that combines the other elements of the backend.  So in a way, the mobile app developers created serverless.  Right?


Today

Where AWS and now others have it right is that those serverless systems can go faster because they use containers underneath.  You see, serverless is the combination of all the developments of these different players: their needs, passions, and desires, all being fulfilled and packaged into a grand thing called serverless.

Serverless today has a few characteristics that make it great:

  • You don't have to manage operating systems (as you do with IaaS, or with Containers as a Service (the new PaaS?)).
  • You pay per transaction instead of by the hour as with IaaS.
  • You buy into a bigger ecosystem of services written for app developers: a database, an identity service, a notification service, an object storage service… Function as a Service is a way to tie those together.

The cost is cheaper for everyone, the velocity is vigorous, and the enjoyment is beyond euphoria.


Getting around Terraform Indexing

Terraform has some good interpolation functions that can keep things simple.  However, as I've tried to make Terraform do more of what Ansible does, which might be outside its scope, I run into issues.

One issue I had was creating a list of all the servers in the cluster that I wanted to define static routes for.  To do this I used Terraform's built-in formatlist function.  The problem was that I couldn't get a good index out of it.  I searched and found this issue.  Seems I'm not the only one!  So that is comforting.  I fought with it for a few minutes, then finally went to bed exhausted.  This morning I woke up renewed and thought of a great plan!  I love how your mind works while you sleep.

I defined my compute node with a metadata entry to keep the count!
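The original snippet is gone, but it looked something like this (the resource and variable names are illustrative, and I've trimmed the image and flavor details):

    resource "openstack_compute_instance_v2" "worker" {
      count = "${var.worker_count}"
      name  = "worker-${count.index}"

      # stash the iteration number on the instance itself
      metadata {
        worker_number = "${count.index}"
      }
    }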

Notice in the metadata section above where I give each node a worker_number, which just corresponds to the count.

Later, where I'm going through and creating a template with a list of all the servers, I used this bad boy variable to give me the iteration number:
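Roughly, inside the template definition (same illustrative names as above), the trick was to read the number back out of the instance metadata instead of fighting formatlist's indexing:

    # each template instance pulls its node's stashed count back out
    worker_number = "${element(openstack_compute_instance_v2.worker.*.metadata.worker_number, count.index)}"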

This works great and gives me the count that I need!

Clojurescript (CLJS) – ReactJS, Re-Frame, and Reagent

I've been using Re-frame pretty extensively these last few weeks as we begin work on the front end of our horrendously stylized project called Pipeline.  It has been pretty tough to get the hang of, but after being stubborn enough not to change my mind once I'd gone down this road, I'm really starting to come around… and, dare I say, actually understand how to make it work!

What’s the Big Idea?

There are a few components to what I'm talking about that I'll go over below.  The big idea, though, is that web applications today are a big mess of Javascript, HTML, and CSS that is somehow stitched together.  Most of today's web applications aspire to be Single Page Applications (or SPAs if you are into that whole brevity thing).  Several frameworks have emerged, with the (arguably) most popular ones being Angular.js (from Google) and React.js (from Facebook).  The big picture, though, is: how do you create a great web front end for your application?  These frameworks typically call the APIs of a web backend, which (hopefully) would be the same backend that a mobile application would call.

Clojurescript

Clojurescript is a LISP.  It's basically the same language/syntax as Clojure, but whereas Clojure compiles down to JVM bytecode, Clojurescript compiles down to Javascript.  Why would you want that?  Seems like overhead.  Well, the claim is that the Google Closure compiler (confusing name) optimizes the generated Javascript and can make it faster.

What is interesting to me as I go down this route is that most of the blog posts are from 2014 and earlier, and I can't tell if I'm pursuing a path that is no longer cool or what.  This is surprising to me because, with the greater popularity of React these days, this seems like a great path.

The downsides to Clojurescript are that the stack traces are difficult to comb through, and understanding the whole immutability thing can be a struggle.

React

React is Facebook's Javascript library for creating user interfaces.  The idea is that there is a virtual DOM that listens to changes in data (binds to data) and then responds immediately to the changes.  It's funny that after using the library through Reagent I still don't really understand React.  To me it's just: the data changes and the web page is immediately updated.

It should be noted that React isn't a fully featured framework like Angular, so some people say it's unfair to compare them.  But I tend to like this blog's argument for why you can.  And in my limited knowledge of things in the world, I am easily persuaded by his post.  (Or should I say, I enjoy that he has validated my decision.)  I especially like the "Unix philosophy" argument that takes the position of "your tool should do one thing very well."

Reagent

So you have React and you have Clojurescript and you want to make awesome happen.  So naturally people create frameworks to make this happen for you.  Om, Rum, and some others emerged and I think are still out there.  I settled on Reagent, though, because it seemed pretty simple and easier for me to wrap my head around.  Reagent uses the hiccup syntax, so it gets rid of having to write HTML tags and makes things easier to read.  When I used Ruby on Rails I would use HAML, which was the same concept.
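As a taste, a hiccup form is just nested Clojurescript vectors and maps standing in for markup; something like:

    [:div {:class "well"}
     [:h1 "Hello"]
     [:p "No closing tags to forget."]]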

The result is that now when you are writing the application, everything is in one language!  There's no sprinkling of HTML mixed in with Javascript; it's all just Clojurescript!  This is the thing I love most about it.  To me it's more akin to designing an iOS application in Swift, where there is one language to do it all.

The other powerful aspect of Reagent is that it is functional.  Functional Reactive Programming (or FRP) means that interfaces respond to the data.  Or, "everything is data derived."  There's no ngThis or ngThat, it's just functional stuff.

The issue with Reagent (or lack of a design feature) is that it is silent on how to store and interact with data, both client side and server side.  This is where the granddaddy of them all comes into play: Re-frame.

Re-Frame

I started by examining the README file on the Re-frame project and was immediately blown away.  Yes!  This is the most epic readme of all projects.  The examples and docs were pretty clear and quite glorious.  I loved the opinionated nature of it and the humor.  (Not the best reason to pick a framework, but not a bad one!)

Re-frame is a framework that works with Reagent to create the idea of a client-side database (or a database that lives in the browser).  It makes things fast and helps with state changes.  It has the idea of subscribing to changes in, or dispatching changes to, the database, recognizing that everything can be asynchronous but everything responds to changes in the data.  Data all the way down.

An Example

I put an example on GitHub that you can run for yourself if you want.

The idea is that you have a list of names and you want to filter them in the search bar.  Here's the main entry code that is called from the main Reagent code that initializes this home page.
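The original listing didn't survive, so here's a minimal sketch of its shape (the namespace and markup are illustrative, with the Bootstrap tags I mention below stripped out):

    (ns example.home
      (:require [clojure.string :as str]
                [reagent.core :as r]))

    (declare display-names)

    ;; the reagent atom: the piece of data that changes as the user types
    (def search-string (r/atom ""))

    (defn home-page []
      [:div
       [:h3 "Name Search"]
       ;; typing in the input resets search-string
       [:input {:type        "text"
                :placeholder "Filter names..."
                :value       @search-string
                :on-change   #(reset! search-string (-> % .-target .-value))}]
       ;; re-rendered automatically whenever search-string changes
       [display-names]])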

I have several Bootstrap tags in there that I took out of the code.  The tricky thing is to notice the Reagent r/atom, which is the data that changes.  This code doesn't have any Re-frame code in it; basically, it just displays a form.  Notice the code at the bottom that calls display-names.  That code is where the filtering happens.  When the input is changed, the search-string changes.  You never have to re-call display-names, as it's always called and responds to changes in the data.  So great!
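And here is a sketch of display-names along the same lines (the list of names is made up):

    ;; names is defined outside the function in a def
    (def names ["Alice" "Bob" "Carol" "Dave"])

    (defn display-names []
      [:ul
       ;; print a name only if it matches the current filter
       (for [n names
             :when (str/includes? (str/lower-case n)
                                  (str/lower-case @search-string))]
         ^{:key n} [:li n])])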

Above is the display-names code, which takes a list of names (defined outside the function in a def, or variable, called "names").  It goes through and prints a name if it matches the filter.

Leadership

I've had the pleasure of meeting Adrian Cockcroft of Battery Ventures for a few brief moments over the course of my career.  The first time was when he worked for eBay and I worked for IBM and we were doing a GPFS proof of concept.  He was a really nice person (and still is!) and introduced me to LinkedIn.  In fact, he was my first connection!  The next thing I knew, years later, he had done incredible things at Netflix, and he continues to do so at Battery Ventures.  He's said a lot of cool things through the years, but the story that has been the most impactful to me was his account of a Fortune 100 CTO who approached him after he gave a talk.

Adrian's talk, I imagine, was about all the cool DevOps things Netflix was doing.  The CTO came up to him afterwards, and his reaction was: "But Netflix has a superstar development team, we don't!"

Adrian’s response was classic:  “Well, we hired these guys from you”.

The lesson I take from this is the impact that great leadership can have and conversely how poor leadership can squander opportunities and waste millions of dollars.

I've been on many teams in my career.  I've been really lucky to have been on some of the best.  The various teams I'm a part of now at Cisco are first class.  But there have been times when I've thought: if only we had a better team, or people who understood more about X or Y, then just imagine what we could do.

Given what Adrian's story illustrates, I am changing my thinking to believe that I am already on the best team, wherever I go.  The potential for greatness is unlimited.  Everyone I work with really wants to do a good job.  We're driven, passionate, and want to be the best.  When there are issues, leadership from within the team can surmount those obstacles and make the team better.

I imagine somewhere there are executives in big companies trying to figure out a market they haven’t been able to crack.  And they might be thinking:  If only we had top developers who could get us there.  Then they might think: Maybe we should acquire a company that has top developers and they can get us there.

Well, even though they’re not reading this, I would say to them:  You already have some of the best people working for you and all the resources at your disposal.  There’s actually another problem here, and it isn’t your people.

And if you disagree with decisions that have been made, you can lead from within and be the leader you want to see.  Even if the only one you’re leading is yourself.

Drone Secrets

I was really happy to see the Drone Secrets page describe how to put secrets in a .drone.yml file.  Checking passwords into repositories is a big no-no.

Still, there was some clarity missing from the docs that I would have liked.  Here's the step-by-step.

1. Install drone

Yep.  This is the Mac client.  I did it the manual way.
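The install itself is roughly the following; the tarball URL comes from the Drone docs for your release, so treat the placeholder as exactly that:

    # download the darwin tarball and put the binary on your PATH
    curl -L <drone-cli-darwin-tarball-url> | tar zx
    sudo cp drone /usr/local/bin/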

2.  Set environment variables

Once you have Drone up, set the following in your .bash_profile or .login:
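Something like this, with your own server URL (the token comes from the next step):

    export DRONE_SERVER=http://drone.example.com
    export DRONE_TOKEN=<your token>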

You can get the DRONE_TOKEN by logging into Drone and clicking on your profile.  The settings area has it.

3.  Create the secrets.yml file as shown in the docs.
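Mine is long gone, but it's just a small YAML map of the variables you want encrypted; roughly:

    environment:
      QUAY_PASSWD: <my quay password>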

4.  Convert and check in!
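The conversion is done with the drone CLI; the repo name below is illustrative:

    # encrypts secrets.yml into .drone.sec, which is safe to commit
    drone secure --repo myorg/myrepo --in secrets.yml
    git add .drone.sec
    git commit -m "add encrypted secrets"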

5.  Secrets can be accessed in the .drone.yml file with the $${VARIABLE} syntax.

The example below shows the QUAY_PASSWD variable.
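Here's a sketch of a publish step using it (registry and repo names are illustrative):

    publish:
      docker:
        registry: quay.io
        repo: quay.io/myorg/myimage
        username: myuser
        password: $${QUAY_PASSWD}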


Drone on CoreOS for CI/CD

I'm working on moving my CI/CD platform to Drone.  As usual, I'm behind a corporate firewall.

1. Install CoreOS

This is just a standard CoreOS image on OpenStack.

2. Configure Proxy
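The original config didn't make it into this post; on CoreOS the usual approach, and roughly what I did, is a systemd drop-in so Docker picks up the proxy (the proxy host is illustrative):

    # /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:80"
    Environment="HTTPS_PROXY=http://proxy.example.com:80"
    Environment="NO_PROXY=localhost,127.0.0.1"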

Now update to get the latest settings so Docker works:
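    # pick up the drop-in and restart Docker
    sudo systemctl daemon-reload
    sudo systemctl restart docker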

3. Get Drone
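At the time this was just a docker pull; the 0.4 tag is my best guess at the release I was running:

    docker pull drone/drone:0.4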

4. Register the application in your GitHub account.

Since I'm using an enterprise version of GitHub, I went into the organization and created an OAuth application.


5. Create a configuration file with the environment

I created /vol/dronerc.  My /vol directory is a persistent storage volume mounted to my VM.  The contents are as follows:
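Roughly this, with the client ID and secret from the OAuth application created above (values elided, GitHub Enterprise host illustrative):

    REMOTE_DRIVER=github
    REMOTE_CONFIG=https://github.example.com?client_id=<id>&client_secret=<secret>
    DATABASE_DRIVER=sqlite3
    DATABASE_CONFIG=/var/lib/drone/drone.sqlite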

6. Start up Drone
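A sketch of the docker run command (the env file and volume are from step 5):

    docker run -d \
      --env-file /vol/dronerc \
      -v /vol/drone:/var/lib/drone \
      -p 80:8000 \
      --restart=always \
      --name=drone \
      drone/drone:0.4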

Now we can go to our page and open up our Drone app.

You can see that I mapped the default port 8000 to port 80 so that I can access it directly from my server, which is hosted internally at 10.93.234.142.

Jumphost SSH config

Oh, you have a jumphost?  And then you want to get to another server that is behind that server on a private network?  If both servers are configured with the same key, this can be accomplished by putting an entry like the one below in your ~/.ssh/config:
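A sketch of the entry (hostnames, user, and IPs are illustrative):

    Host jump.amster
        HostName jumphost.example.com
        User core

    Host amster
        HostName 10.1.1.20
        User core
        ProxyCommand ssh -W %h:%p jump.amster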

To get to the amster server I now just run:
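    ssh amster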

It runs through the jump.amster server and gives instant access.

Cassandra Startup with Docker and Golang

I use Docker:
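Something like the following; publishing all ports with -P is what gives the ephemeral port mapping you'll see below:

    docker run -d --name cassandra -P cassandra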


(Notice that using boot2docker or docker-machine on VirtualBox doesn't work very well with mounted directories, because the directories get created under a different user name.  You have two choices: don't mount persistent directories with boot2docker, or log into the VirtualBox guest (docker-machine ssh dev) and launch the container there.)

Make sure it's up:
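    docker ps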


Now let's create a keyspace (or database, in SQL terms).  Connect to Cassandra through whatever host port 9042 is mapped to:
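To see where it landed:

    docker port cassandra 9042
    # 0.0.0.0:32769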

Here we see that port 9042 is mapped to 32769.  To connect we run:
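Roughly this, using cqlsh from the same image (the IP is my docker-machine VM's address; yours will differ):

    docker run -it --rm cassandra cqlsh 192.168.99.100 32769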

Now we can create the keyspace:
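A minimal one for playing around (the keyspace name is illustrative):

    CREATE KEYSPACE demo
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};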


Now let’s use it with a Go program.

Our code is pretty simple:
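The original listing is gone; with the gocql driver it looked roughly like this (the host, port, keyspace, and table are illustrative):

    package main

    import (
        "fmt"
        "log"

        "github.com/gocql/gocql"
    )

    func main() {
        // point at the docker-machine host and the mapped CQL port from above
        cluster := gocql.NewCluster("192.168.99.100")
        cluster.Port = 32769
        cluster.Keyspace = "demo"

        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // create a table, insert an entry, and read it back
        if err := session.Query(
            `CREATE TABLE IF NOT EXISTS users (id int PRIMARY KEY, name text)`).Exec(); err != nil {
            log.Fatal(err)
        }
        if err := session.Query(
            `INSERT INTO users (id, name) VALUES (?, ?)`, 1, "gopher").Exec(); err != nil {
            log.Fatal(err)
        }

        var name string
        if err := session.Query(
            `SELECT name FROM users WHERE id = ?`, 1).Scan(&name); err != nil {
            log.Fatal(err)
        }
        fmt.Println("got back:", name)
    }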

If that worked, then you created an entry and got data back.  You are now off to the races writing Go code against a Cassandra database.

TODO:

  1. Scale out to multiple Cassandra servers
  2. Show how this works in mantl.io

Ethereum Contracts

I've been working with Eris to manipulate "smart contracts."  (Smart contracts are neither smart, nor are they contracts.  They're just code written to the blockchain that stores values that can be changed with the right permissions.)

Eris has a pretty good 'eris-by-example' script that shows how you can do a lot of this with curl.  My objective was to not use the Eris tools, but to make it a bit more generic for use with any Ethereum contract.

I'll gloss over creating the contract; essentially, I just created a quick contract called 'echo' and compiled it with the Solidity compiler.  (Yes, I know it moved, but the new link didn't work for me.)

My code is pretty simple and borrowed from the millions of other echo "hello world"-like contracts; it stores a string and an integer:
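The source didn't survive this export, but it was roughly this, in the Solidity syntax of the day (matching the get_amount and get_s functions used below):

    contract echo {
        string s = "hello world";
        uint amount = 100;

        function get_amount() constant returns (uint) { return amount; }
        function get_s() constant returns (string) { return s; }
        function set_s(string _s) { s = _s; }
    }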

After submitting the transaction I got back addresses telling me where it was submitted.  I'll gloss over that detail, as I'm interested in the other part of this: how to interact with the data.

The Solidity compiler shows that when it compiles the functions to EVM bytecode, it creates hashes of the function names themselves.  The documentation shows that to call a function you do it by its selector, 'derived as the first 4 bytes of the Keccak hash of the ASCII form of the signature.'  Luckily the Solidity compiler gives this value to you.

Now you just need to find the address where the contract lives.

In my app I can now call:
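The command itself didn't survive; against eris-db's /call endpoint it had roughly this shape (addresses elided, field names per the breakdown below, and the port assumed to be eris-db's default):

    curl -X POST http://localhost:1337/call --data '{
      "fromAddress": "<validator address>",
      "toAddress":   "<contract address>",
      "data":        "d321fe29"
    }'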

Let's break this down.  The fromAddress is one of the validators that I created.  The toAddress is the address where the contract lives.  To get the amount, the Solidity compiler told us the function is called by entering in 'd321fe29'.  That then returns the following:
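The interesting part of the response was the return value, a single 32-byte word (reconstructed here):

    0000000000000000000000000000000000000000000000000000000000000064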

Our value is encoded in hex, and everyone knows that hex 64 is actually just 100.

We can do the same thing to get the string value by calling the get_s function:

Using the above command but substituting in 75d74f39, we get the following:

To convert the returned hex into a string we look at the output.  The first 32-byte word shows a 2.  This means that the data comprises the next two 32-byte words.  Decoding the last part:

Nice message.

Now how to change this value?  If we had a set_amount function we would just add the value after the function selector.  Since we only have a set function for the string, we have to encode it dynamically: figure out what our string will be, then pad it with enough 0s.  Strings are 'dynamic types' as specified in the documentation, so we have to add a few more fields.  First the function selector for set_s, which the compiler emits the same way as the others; I don't have it recorded, so I'll write it as <set_s selector> below.

Let’s first make a string:
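For illustration, take the string "hello from curl" and hex-encode it:

    $ echo -n "hello from curl" | xxd -p
    68656c6c6f2066726f6d206375726c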

This will fit in one 32-byte word (64 hex characters), so we add the extra 0s to the end, and we prepend words telling us the offset and length of the data.  The argument to the function then looks as follows:
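Reconstructed for the example string above: the offset word (0x20), the length word (15 bytes = 0x0f), then the data padded out with 0s:

    0000000000000000000000000000000000000000000000000000000000000020
    000000000000000000000000000000000000000000000000000000000000000f
    68656c6c6f2066726f6d206375726c0000000000000000000000000000000000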

Adding this to the one command:
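That is, the data field becomes the set_s selector with the encoded argument appended (same sketch as before):

    curl -X POST http://localhost:1337/call --data '{
      "fromAddress": "<validator address>",
      "toAddress":   "<contract address>",
      "data":        "<set_s selector><encoded argument from above>"
    }'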

The output will show that nothing changed.  Why?  From the README we read:

"It is possible to 'query' contracts using the /call endpoint.  Such queries are only 'simulated calls', in that there is no transaction (or signature) required, and hence they have no effect on the blockchain state."

So to change this for reals we need to actually sign our call.  I tried several methods, but at present Eris only returns garbage.  For example, here's a command I ran:

The output was garbled gunk.  At this point I'm stuck on this effort, but I thought I would at least show where I am.  Using the JSON-RPC directly is perhaps too low level; ideally you would use the Eris Javascript library (eris-contracts).  My hope instead was to use the RPC API to accomplish this.  Perhaps more to come!