Docker based Jupyter Notebook on B200 M5 with Tesla P6

We have a fresh system, let’s install Docker.  Here we are using CentOS 7.5 and logged in as root.

First make sure your date/clock is correct, if not, fix:
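
The exact commands weren't captured here; as a sketch on CentOS 7 using chrony (the package choice is an assumption):

    date                              # check the current time
    yum install -y chrony
    systemctl enable --now chronyd    # keep the clock in sync over NTP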

Install Docker
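
A sketch of the Docker CE install from Docker's own repository:

    yum install -y yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce
    systemctl enable --now docker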

Install NVIDIA Docker
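
For nvidia-docker2 the steps looked roughly like this at the time; check the project's README for the current repository URL:

    distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo \
      > /etc/yum.repos.d/nvidia-docker.repo
    yum install -y nvidia-docker2
    systemctl restart docker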

Following the README we do:
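
The README's smoke test runs nvidia-smi inside a CUDA container:

    docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi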

The output of this command should list the Tesla P6 and the driver version, confirming that containers can see the GPU.

We can now run the python notebook to get started with Tensorflow
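
A sketch of launching the TensorFlow GPU notebook image; the tag here is an assumption, so adjust it to the current release:

    docker run --runtime=nvidia -it -p 8888:8888 tensorflow/tensorflow:latest-gpu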

This will give us a session.  The startup log ends with a URL that includes a login token.

Take note of the token printed out above.  Yours will look different.  Since this is running on a server, we will open an SSH tunnel to be able to talk to the Jupyter notebook
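
From your workstation (the hostname is a placeholder for your server):

    ssh -L 9999:localhost:8888 root@<server>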

We can now open our web browser to http://localhost:9999.  The SSH tunnel forwards this to port 8888 on the server, which is how we reach the notebook.  It will ask us for a token; enter the token that was printed when we started the docker container.

After entering the token we can go through the Jupyter Notebook Tensorflow tutorials.

Using Docker is a ton easier than installing everything on this server by hand.  We did omit some of the nvidia driver setup requirements found in my previous post.

References

https://www.garron.me/en/linux/set-time-date-timezone-ntp-linux-shell-gnome-command-line.html

https://medium.com/@lucasrg/using-jupyter-notebook-running-on-a-remote-docker-container-via-ssh-ea2c3ebb9055

https://blog.sicara.com/tensorflow-gpu-opencv-jupyter-docker-10705b6cd1d

Hands on with NVIDIA P6 and UCS B200 M5

Just got access to a new UCS B200 M5 blade!  My goal is to create a tensorflow lab on it.  Let’s get cracking!

It was installed with CentOS 7.5

cat /etc/redhat-release

Let’s make sure there is indeed a GPU:
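
lspci (from the pciutils package) is a quick check:

    lspci | grep -i nvidia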

Ok, now we need to get some drivers. We go to NVIDIA’s page, fill out the form and get some drivers for RHEL7.

While waiting for downloads, we make it so we can sudo without a password. Run sudo visudo and edit these lines:
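
The stock CentOS sudoers file already has a commented-out NOPASSWD entry for the wheel group; switching to it looks like this:

    ## Allows people in group wheel to run all commands
    # %wheel        ALL=(ALL)       ALL

    ## Same thing without a password
    %wheel  ALL=(ALL)       NOPASSWD: ALL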

Now let’s install the VNC server and some other packages we’ll need.  This will give us development tools and a remote desktop
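
The exact package list isn't captured here; as a sketch, a desktop, TigerVNC, and the development tools (plus kernel headers for the driver build) would be:

    yum groupinstall -y "GNOME Desktop" "Development Tools"
    yum install -y tigervnc-server kernel-devel kernel-headers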

Now let’s attach to it… hmm, we can’t.  Is SELinux running?
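
getenforce tells us the current mode:

    getenforce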

Yep.  Let’s turn that off for now; we don’t need it.  Use sudo to modify /etc/sysconfig/selinux
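
Change the mode line to:

    SELINUX=disabled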

Have to reboot.  But first let’s install the driver:
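
Running the installer we downloaded from NVIDIA looks something like this (the file name depends on the driver version you grabbed):

    chmod +x NVIDIA-Linux-x86_64-*.run
    sh ./NVIDIA-Linux-x86_64-*.run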

edit /etc/default/grub to disable the nouveau driver.
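
Append the blacklist options to the kernel command line, keeping whatever options are already there:

    GRUB_CMDLINE_LINUX="... rd.driver.blacklist=nouveau nouveau.modeset=0"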

load the new grub file
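
On a BIOS-booted system that is (for UEFI the output path lives under /boot/efi/EFI/centos/ instead):

    grub2-mkconfig -o /boot/grub2/grub.cfg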

Add the nouveau driver to the blacklist by appending to (or creating in my case) /etc/modprobe.d/blacklist.conf
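
The file just needs the one line:

    blacklist nouveau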

Back up the old stuff and make the new initrd
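
For example:

    mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
    dracut /boot/initramfs-$(uname -r).img $(uname -r)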

Now we reboot.

I installed the 32-bit compatible libraries because disk space is cheap and time is short.

CUDA Libraries

We want tensorflow with the CUDA libraries.  It makes tensorflow fast!  We get it by navigating to their downloads page.  I downloaded the runfile (local) installer.
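
Running it looks something like this (the exact file name depends on the 9.2 build you downloaded):

    sh cuda_9.2*_linux*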

I answer the installer’s questions, the key ones being to decline the bundled driver (we already installed one) and to install the toolkit; the samples are optional.

If all went well, the installer’s closing summary reports that the CUDA Toolkit was installed (the driver and samples lines will vary depending on what you selected).

Now let’s get the environment set up.  Append to ~/.bash_profile:
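
Something like the following, assuming the default install location:

    export PATH=/usr/local/cuda-9.2/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64:$LD_LIBRARY_PATH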

(I’m using 9.2 because that is the version of CUDA I’ve installed; it may be different when you install, so change the paths as updates become available.)

cuDNN Library

To download the cuDNN libraries you need to have an NVIDIA developer account.  You’ll have to login to the download site and download the Linux version

Here we use cuDNN v7.2.1 for CUDA 9.2, since that matches the CUDA toolkit we installed.
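
The tar install is just a copy into the CUDA tree; the tarball name below is approximate:

    tar xzvf cudnn-9.2-linux-x64-v7.2.1*.tgz
    cp cuda/include/cudnn.h /usr/local/cuda/include/
    cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/
    chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*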


Installing Tensorflow

Install pip
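
On CentOS 7 pip comes from EPEL:

    yum install -y epel-release
    yum install -y python-pip
    pip install --upgrade pip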

Now we can install tensorflow:
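
    pip install tensorflow-gpu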


References

https://www.nvidia.com/en-us/data-center/gpu-accelerated-applications/tensorflow/

http://developer.download.nvidia.com/compute/cuda/6_5/rel/docs/CUDA_Getting_Started_Linux.pdf

https://blog.sicara.com/tensorflow-gpu-opencv-jupyter-docker-10705b6cd1d

https://gist.github.com/lyastro/26e0cd8245bcf64914857dd5e8445724



Ubuntu 18.04 Jump server setup

In my environment I have limited IP addresses, so we’re creating a new network and then allowing one server, the jump server, to sit between these networks.  To do this, my jump server, aka jump104, is “dual-homed”.  This means it has two network adapters:  one on the public network, and one on the internal private network.  We are going to make this server a:

  • DHCP server to the 104 net
  • Router to the 104 net
  • DNS server to the 104 net
  • VNC Server to view things on the private network

Network Adapters

First, we install a brand new Ubuntu 18.04 operating system.  When I first set it up, I only had one interface which was configured correctly for the public network.  Now I need to modify the network to add the second network configuration.  This is done by editing /etc/netplan/01-netcfg.yaml

We add another stanza below what is already there:
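
A sketch of the added stanza; it nests under the existing network: and ethernets: keys.  The ens192 name comes from the DHCP section later in this post, and the 10.99.104.1 address is an assumption (the jump server's own address on the 104 net):

    network:
      version: 2
      ethernets:
        ens192:
          addresses:
            - 10.99.104.1/24

After editing, sudo netplan apply brings the new interface up.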

To make sure that we can route traffic from the 104 net, we have to add some rules.  This is called IP masquerading or setting up a NAT service.

First, edit /etc/sysctl.d/99-sysctl.conf and uncomment
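
The line in question:

    net.ipv4.ip_forward=1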

Then run
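
    sudo sysctl --system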

Next add the masquerading rules
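
Using the values explained below:

    sudo iptables -t nat -A POSTROUTING -s 10.99.104.0/24 -o ens160 -j MASQUERADE
    # (use the iptables-persistent package if you want this to survive reboots)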

Here, the -s is the source of the internal network: 10.99.104.0/24.  The -o is the Internet facing interface, so for my setup it is the ens160

DNS

Next up, we need to make it a DNS slave.

In my setup we are creating a zone called “ccp.cisco.com”   We have to modify and add a few files.

/etc/bind/named.conf.local
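
The stanzas weren't captured here, so below is a sketch.  I've aligned the zone name with the zone file names shown later in this post (the post also refers to the zone as ccp.cisco.com, so substitute whichever domain you are actually serving):

    zone "sjc.kubam.cisco.com" {
        type master;
        file "/etc/bind/zones/db.sjc.kubam.cisco.com";
    };

    zone "104.99.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10.99.104";
    };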

Above we added two stanzas.  First our new domain and where to do lookups and changes and second the reverse zone:  Where to get names from IP addresses.

/etc/bind/named.conf.options
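
Roughly the following; the forwarder address is a placeholder for your upstream DNS server:

    options {
        directory "/var/cache/bind";

        recursion yes;
        listen-on { 10.99.104.1; };                   // only the private interface
        allow-query { localhost; 10.99.104.0/24; };   // only the private network

        forwarders {
            192.0.2.1;                                // placeholder: upstream DNS
        };
    };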

Here we make sure to only listen on our private interface for queries and only allow queries from addresses in our private network.  We also specify that any DNS query that we don’t know about (most of them) be forwarded to the master DNS service which can be directed through this server as well.

/etc/bind/zones

Now we add the addresses to the zone files we listed above.

db.10.99.104
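
A minimal reverse zone sketch; the host names and serial are placeholders:

    $TTL    604800
    @       IN      SOA     jump104.sjc.kubam.cisco.com. admin.sjc.kubam.cisco.com. (
                                  3         ; Serial
                             604800         ; Refresh
                              86400         ; Retry
                            2419200         ; Expire
                             604800 )       ; Negative Cache TTL
            IN      NS      jump104.sjc.kubam.cisco.com.
    1       IN      PTR     jump104.sjc.kubam.cisco.com.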

db.sjc.kubam.cisco.com
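
And a matching forward zone sketch:

    $TTL    604800
    @       IN      SOA     jump104.sjc.kubam.cisco.com. admin.sjc.kubam.cisco.com. (
                                  3         ; Serial
                             604800         ; Refresh
                              86400         ; Retry
                            2419200         ; Expire
                             604800 )       ; Negative Cache TTL
            IN      NS      jump104.sjc.kubam.cisco.com.
    jump104 IN      A       10.99.104.1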

Let’s check that we did it right with:
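
named-checkconf and named-checkzone catch most typos:

    sudo named-checkconf
    sudo named-checkzone sjc.kubam.cisco.com /etc/bind/zones/db.sjc.kubam.cisco.com
    sudo named-checkzone 104.99.10.in-addr.arpa /etc/bind/zones/db.10.99.104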

/etc/default/bind9

We are only serving on IPv4, so add the -4 flag to the options
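
So the line becomes:

    OPTIONS="-u bind -4"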

Once done we can now restart the DNS server and apply changes:
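
    sudo systemctl restart bind9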


DHCP

DHCP is used for doling out IP addresses to unsuspecting servers that come on the network.  This makes setting up IP addressing easy for VMs that pop up in our data center.

Now we add the DHCP range.  Here we want to create a dynamic range from 10.99.104.100-10.99.104.254.  By editing the /etc/dhcp/dhcpd.conf file we can make this happen:
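
A sketch of the subnet declaration; the router and DNS options point back at the jump server's assumed 10.99.104.1 address, and the domain name matches the zone above:

    subnet 10.99.104.0 netmask 255.255.255.0 {
      range 10.99.104.100 10.99.104.254;
      option routers 10.99.104.1;
      option domain-name-servers 10.99.104.1;
      option domain-name "sjc.kubam.cisco.com";
    }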

But we want to be sure we only listen and respond to DHCP requests on the internal facing network interface.  This is done by editing the /etc/default/isc-dhcp-server

Since after running ifconfig I see that my internal interface is ens192, I update this file to look as follows:
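
    INTERFACESv4="ens192"
    INTERFACESv6=""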

Since I’m not serving up DHCP for IPv6, I just leave that blank.  To make all these changes take effect I now run:
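
    sudo systemctl restart isc-dhcp-server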

It’s funny that I haven’t done this type of configuration since 2005 but some things haven’t changed all that much.


VNC Server

Being that all the stuff we want is behind a network we can’t reach, we need GUI tools to access the services.  In my case I’m installing Cisco Container Platform which requires that I can open a browser up to the IP address of the virtual machine behind this network.  I can accomplish this by installing VNC and Firefox.  I remember doing this once while installing vSphere many years ago and getting to the very end only to discover that I needed Flash and that Flash was not supported on Linux at the time.  Those days are gone and you can do everything now without Windows.  This makes me very happy.
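
The package set here is an assumption (a lightweight Xfce desktop plus TightVNC, along the lines of the VNC tutorial in the sources below):

    sudo apt update
    sudo apt install -y xfce4 xfce4-goodies tightvncserver firefox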

From here I can start it up by simply running vncserver.  This opens up port 5901 and writes some configuration information to our ~/.vnc/xstartup.  We can customize it to look as follows:
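
A minimal ~/.vnc/xstartup that launches the Xfce desktop installed above:

    #!/bin/bash
    xrdb $HOME/.Xresources
    startxfce4 &

Make sure the file stays executable (chmod +x ~/.vnc/xstartup).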

Start it with:
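
Killing the first instance and starting a fresh one picks up the new xstartup:

    vncserver -kill :1
    vncserver :1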

We are now ready to roll and get this private network all the goodness that it needs.

Sources:

  • https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-private-network-dns-server-on-ubuntu-18-04
  • https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-18-04

CI/CD continued: Drone

In my previous post I showed how we set up a CI/CD server.  In this post we’ll go into more detail explaining how drone works.  I’ve previously written about how drone does secrets.  Well, that has changed a bit, so in this update with drone 0.8.5 we’ll show how it works.  This time, we’re working on a python application.

Mac Command Line

Before we begin we need to get the drone command line tool. The easiest way to do this is to get it with homebrew
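
At the time of writing the CLI lived in Drone's own homebrew tap, so it was roughly:

    brew tap drone/drone
    brew install drone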

From here we need to add some environment variables to our ~/.bash_profile so that we can communicate with our drone server.  Essentially, we add:
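
The CLI reads two environment variables; the values below are placeholders for your own server URL and token:

    export DRONE_SERVER=https://<your-drone-server>
    export DRONE_TOKEN=<your account token>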

You can get the drone token by logging into drone (see previous post) and copying the account token from the upper left of the web interface.

Sourcing this file, we can now run something like
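
For example, the command that prints the account drone thinks you are:

    drone info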

If this command shows your github credentials you are good to go!

Drone vs. Jenkins Level set

Drone differs from Jenkins in a few ways.  First off, the configuration file for the job is not stored in the “Jenkins Master”.  Instead, the job’s configuration file is stored in the repository with the actual code.  I love this because it makes sense that the developer owns the workflow.  It’s a paradigm shift and in my opinion a much better one.  It also means I could take this workflow to any drone CI/CD process and it should work fine.

The second difference is how the workflows are created.  With Jenkins you download “Plugins” and different tools and store them on the Jenkins server.  With Drone, every module is just a container.  That way they’re all the same.  I’ve written a couple as well and I really like the way it’s laid out.

That said, Jenkins is crusty, tried, and true and Drone is still evolving and changing.

Drone Workflow Config File

Inside the repo of the code to be tested and released we create a .drone.yml file.  Our final file (which will undergo several changes) is located here.

Let’s go over how this works.

First we specify the pipeline.  Each entry in this pipeline is an arbitrary name that helps me remember what is going on.

Notify

The first step notifies that a build is happening.  It looks as follows:
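
The plugin image and its parameters below are assumptions (any Spark notification plugin wires up the same way); the step sits under the pipeline: key of .drone.yml:

      notify_start:
        image: kubam/drone-spark        # assumption: a Spark notify plugin image
        message: "build started"        # assumption: plugin-specific parameter
        secrets: [ spark_token ]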

Since we are good Cisco employees we use Spark for notifications.  You might use slack or something you might find cooler, but we like Spark.  Notice we have a SPARK_TOKEN secret.  This secret needs to be added via the command line as we don’t want to check secrets into repositories.  That is a no-no.
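
With the 0.8 CLI, adding the secret looks something like this (the repository name is a placeholder); the value is injected into the step as SPARK_TOKEN:

    drone secret add \
      --repository kubam/kubam \
      --name spark_token \
      --value <your spark token>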

Test

Next up, we want to test the python flask image using our test cases.  To do so, we created a test docker image that has all the necessary dependencies in it called kubam/python-test.  This way the container loads quick and doesn’t have to install dependencies.  The Dockerfile is in the same repo.  This step looks as follows:
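
A sketch of the test step; the test command and the proxy values are assumptions:

      test:
        image: kubam/python-test
        environment:
          - http_proxy=http://proxy.example.com:80
          - https_proxy=http://proxy.example.com:80
        commands:
          - pytest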

You’ll notice that in each step we add the proxy environment variables.  The reason for this is that you have to remember we are behind a firewall!  So for the container to get out, it has to use the proxy configuration.

Docker

Next we want to build and publish the ‘artifact’.  In this case the artifact is a docker image.  This is done with the default drone docker plugin.
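
A sketch of the publish step using the stock plugins/docker image; the image repository is a placeholder and the tag comes from the branch discussion below:

      docker:
        image: plugins/docker
        repo: kubam/kubam
        tags: v2.0
        secrets: [ docker_username, docker_password ]
        environment:
          - http_proxy=http://proxy.example.com:80
          - https_proxy=http://proxy.example.com:80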

Since we are working on a branch that we want to create new images for, we use the v2.0 tag to make sure we get the one we want.

Notice there are secrets here so we have to add these.  So we do:
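
Again the repository name is a placeholder:

    drone secret add --repository kubam/kubam --name docker_username --value <registry user>
    drone secret add --repository kubam/kubam --name docker_password --value <registry password>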

Let it run

Now we have a great system!  Let’s let it go.  Once launched, it runs these three steps and publishes the docker container.  Now anytime someone pushes, we’ll test the code and, if the tests all pass, put it into production.  This is safe as long as we have sufficient test cases.  This is how we can go fast, make less work for everyone, and be more productive.


CI/CD server behind a firewall

The goal of this post is to show how to create a CI/CD server that sits behind a firewall.  When we do a git push then we would like a webhook to be fired off to our CI/CD server.  The CI/CD server would then run our jobs and create our artifacts.  Normally this is pretty simple when you have a server that sits at a public IP address on the internet.  But then I’d have to pay for that.  Why pay for it when I have lots of computing resources behind my corporate firewall?

DRONE

First we set up our CI/CD server.  I’ll use drone just cause I think it’s cool the way it works, using containers to do each step of the CI/CD pipeline.

NGROK

ngrok is recommended by Github as a way to get our private server an internet address.  Create an account by going to their homepage.

We download the app and put it on our server
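
Something like the following (check ngrok's download page for the current link):

    wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
    unzip ngrok-stable-linux-amd64.zip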

After signing up we get our own authtoken with ngrok.  Run the command to get it set up
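
The token below is a placeholder for the one on your ngrok dashboard:

    ./ngrok authtoken <your-ngrok-authtoken>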

Now let’s start this server.  On the free plan you get a new domain name everytime you start it so ideally we’d like to keep this session up for good.  One way we can do that is to create a service that does this.  The other way is to create a screen session.  To be quick and pedantic we will use a screen session to make this work.
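
Create a named session so it is easy to re-attach to later:

    screen -S ngrok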

Now in this screen session let’s start our service
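
Point the tunnel at the port Drone will listen on:

    ./ngrok http 9001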

This now shows that our public address is ldb27123.ngrok.io.  Cool.  We’ll use this now to set up our CI/CD server as well as Github.  You can detach from the screen using Ctrl-a d and the connection will stay up as long as the server is up.  Notice that we told this to point to port 9001 on our server.  This is the port where we will have Drone run.

Github

We now go to our project and we will register a new application (drone in this case).  To do this on Github go to settings, developer settings , and register a new OAuth application.

On the next screen you’ll be shown the variables for client ID and client Secret.  Make note of those!

Environment Variables for CI/CD server

We’re almost ready to bring up our CI/CD server.  First we will put in environment variables into the ~/.bash_profile file of the server. We define them as follows:
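
The variable names below are the ones the drone 0.8 docker-compose file expects; the values are placeholders from the ngrok and GitHub OAuth setup:

    export DRONE_HOST=https://ldb27123.ngrok.io        # your ngrok URL
    export DRONE_GITHUB_CLIENT=<client id from github>
    export DRONE_GITHUB_SECRET=<client secret from github>
    export DRONE_SECRET=<any long random string>       # shared server/agent secret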


From there you need to ‘source’ the bash_profile by running
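
    source ~/.bash_profile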

Test that they are set by running
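
    env | grep DRONE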

You should see the environment variables print out.

Install Drone

To start up our Drone service on this server you’ll need to have docker-compose installed.   We follow the instructions and create a docker-compose file that looks as follows:
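
A sketch following the drone 0.8 single-machine example, with the server published on port 9001 to match the ngrok tunnel; it pulls the values from the environment variables set above:

    version: '2'

    services:
      drone-server:
        image: drone/drone:0.8
        ports:
          - 9001:8000
        volumes:
          - /var/lib/drone:/var/lib/drone/
        restart: always
        environment:
          - DRONE_OPEN=true
          - DRONE_HOST=${DRONE_HOST}
          - DRONE_GITHUB=true
          - DRONE_GITHUB_CLIENT=${DRONE_GITHUB_CLIENT}
          - DRONE_GITHUB_SECRET=${DRONE_GITHUB_SECRET}
          - DRONE_SECRET=${DRONE_SECRET}

      drone-agent:
        image: drone/agent:0.8
        restart: always
        depends_on:
          - drone-server
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - DRONE_SERVER=drone-server:9000
          - DRONE_SECRET=${DRONE_SECRET}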

Now we can start with
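
    docker-compose up -d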

If you have problems, check the logs of the docker containers.  It may be that the environment variables are not set correctly.

We made it so anyone can register; you will want to change this so only authorized users of your org can use the CI/CD server.  You can see how to do this with more settings.  Check out the documentation on Drone’s web page.

Configure Drone

Now that it is installed you should be able to navigate to the drone web interface by going to the ngrok URL.  You’ll be redirected to your CI/CD server and be ready to accept push events.  Next up, you’ll need to configure drone on your application to do all the wonders you’d like it to do!  Drone, unlike Jenkins, has a paradigm where you put the configuration of the CI/CD job into the github repo.  This works great as it allows jobs to be configured individually by the different applications.  I’ll probably write about that in a different post as we explore that with our team!

Notes

  • I did find I had a few errors when putting it in production.  To start out with I was using an older docker version so I got an error that was resolved with this post.
  • Because my docker images were behind a proxy I had to configure proxy settings.  This was done in the environment variable inside the .drone.yml file. See the example here.


5 tools to get started developing smart contracts on Ethereum

Ethereum changes pretty quickly in terms of developing environments.  When I first started I just created my own private ethereum cluster and worked on that.  Things have gotten a lot easier.  Here’s how you get started:

  1.  Remix – This is your IDE or development environment.  I’ve been using VIM and got a solidity VIM plugin as well.  Remix is great because you can compile and troubleshoot some issues before deploying.  There is a default ballot application to get you started.
  2. Metamask – This is a browser plugin that runs in Firefox or Chrome.  It’s nice in that you can send and unlock ether and even buy from Coinbase or other places.  It also protects you from going to malicious phishing sites which I may have gone to.  Plus who doesn’t like the fox following you around while you do it.
  3. Parity – This can be used in addition to or instead of Metamask.  It’s where you can keep a simple Ethereum test wallet for deploying contracts.  This runs on your development machine (my Macbook Pro).  It runs on Linux and Windows as well.  To start out with you want to deploy it on the testnet.  Start it as shown in the command sketch after this list.

    It’s pretty easy to get going with that.

    The current testnet is the Kovan testnet.  If you’re on this network then to deploy contracts you’ll want test ether.  How does one go about getting test ether?  That is number…
  4. Get Test Ether (KETH) – While this may change in the future, currently you do it through gitter.im.  Go to https://gitter.im/kovan-testnet/faucet and enter your public address and they’ll send you 5 KETH.  You’ll have to login (with Github ID or something) to get going.
    Now you can get your stuff going.  Nice.
  5. MyEtherWallet – Let’s suppose you write a contract.  Then you want to interact with it and call functions; how can you do this?  One easy approach is to go to https://www.myetherwallet.com and select the test network or network you have deployed the contract to.  From there you can select ‘Contracts’ and put in your contract address.  Then you paste in the ABI and you can start calling the functions of the code.
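
As promised in the Parity item above, pointing it at the Kovan testnet is just a flag (any other flags are up to you):

    parity --chain kovan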


Bonus: The Docs.  You now have all the tools you need to get started.  Now you need to know how to develop.  Try creating your own contract by following and modifying some of the examples on the solidity documentation pages.  There are some nice YouTube videos as well that help you learn solidity and help you become a solid programmer!

Hopefully armed with these tools you can go forth and create all kinds of smart contracts for 2018.  Wishing everyone a happy new year!

Sailing the seas of Crypto Currencies: Sharks and Landmines

I’ve been into Bitcoin and Crypto currencies now since late 2013.  One of the fun things I did back in 2015 was invest $100 (at the time) of Bitcoin into a presale funding project called Augur.  The idea with Augur was to be a prediction market.  It was one of the first of what has become known as ICOs (Initial Coin Offerings), which are like IPOs but just give you tokens.  In the case of Augur you get REP, which is reputation.  The project isn’t live yet, but it will allow people to predict what will happen; then people with REP can say what really happened and verify that outcome.  Sort of like gambling I suppose.  (Which, if you are investing anything in crypto, you are basically gambling, so if you take my advice, only invest what you can lose, cause you could lose it all.)

Anyway, I didn’t think much of Augur and let it just go on, but then with all the hype that crypto currencies have been generating lately I thought:  What the heck, let’s see what is going on with this project.  First of all, what I realized is that everyone and their grandma is creating an ICO, and really the whole basis of Ethereum is to help people create their own ICOs.  I still don’t know of a killer app that has come out of it, but maybe it’s cause I’m still in the dark.  My own belief is that this world is still very very early and much work has to be done before it really gets to the masses.

I had forgotten that when I had registered for Augur, it put it in an ethereum account.  I had also forgotten where that was.  So after looking around a bit, I saw that it was on myetherwallet.com and my address was

0xa0F5b92cb76372957A866660893491234485C0a5

Going to that website you can see the transactions that have since occurred, but looking at the token transfers, you can see that $100 allowed me to get 158.3 Augur tokens.  Before July 11th, there was no ETH in there, just REP.

All was well and good, and then to keep more tabs on what was going on, I joined the Augur.net slack channel.  There I got a nice message that I should enable 2 Factor Authentication on my Ethereum account.  Well that is a nice reminder!  (So I thought).  So stupidly I clicked on the link and it took me to a site that looked just like myetherwallet.com, only this place was https://myetherwallet.com.de <- DO NOT GO HERE!  Phishing site!

After entering my private key and getting an error menu, I realized what I had done!  I’d been phished!  But were the tokens still there?

Going to the real site of https://myetherwallet.com I saw that they were.  What a relief!  Now I needed to generate a new wallet and move the REP over to there before the phishing nasties did it.  When I tried, I realized I didn’t have any Ethereum in the account necessary to generate the transaction.  Dang it!  So I moved some from another account into that account to fund it.  After stumbling around that for a little bit, I next moved all the REP over.  Once that was confirmed I moved the last of the ETH out of the account.

At this point the tokens seem to be safe but that was pretty stressful for a bit there!


Serverless Computing: How did we get to now?

This is a story of the state of where we are in the world of containers, serverless, and whatever else you want to call this mess.

The story involves 3 groups of people with their own passions, opinions, and modes of getting stuff done.  We’re getting to a point where they are now starting to see things the same way (or getting closer).  That is the really exciting part about where we are today.

The people in this story are:

  • The Infrastructure people running apps.
  • Developers writing backend enterprise and other cool things for the cloud.  SaaS developers?
  • The Mobile Developers.

This story talks about how all their paths collided and have created the jumbled mess of glory that we have today.

Infrastructure People

This group of people used to be called System Administrators back when I was a lad.  But that is so uncool.  They now call themselves Full Stack Engineers, or Site Reliability Engineers, which basically means they are system administrators who know how to write code.  Most of the good system administrators I used to work with assumed you wrote gnarly bash scripts back in the day, but apparently that practice was forgotten, so now that it is back in vogue again the job description needed to change.  We don’t want point-and-click administrators, we want hacker administrators that can work on our full stack.  Whatever.

In the beginning there was your data center, or place where you hosted your machines.

Then came the cloud.  And the cloud was vague.  Larry Ellison saw the cloud and said it was gibberish, made no sense, and made a chauvinistic comment about women’s fashion.  Anyway.

The analysts came, and said the cloud was actually 3 things:  IaaS, PaaS, and SaaS.  With IaaS, you did everything but the hardware and with SaaS you just consumed the software.  PaaS was a strange beast in the middle that was never really defined.  People would just say:  “You know, like Heroku, or Beanstalk”.  They were people with opinions telling you they could take your IaaS to the next level.  But it was still weird, and no matter what you can say, it was vague.  Sure, NIST got involved and cleared all these definitions up, but there’s still a lot of wiggle room in what a PaaS is.

Then came Docker in 2013.  Docker technology wasn’t new.  It was just nicely packaged.  With a cute friendly whale.  Docker actually started as a T-Shirt company, but then took Linux namespaces, cgroups, and a union filesystem and made it fun to work with.

But people said:  It’s hard to manage all these containers.  Cause if I have one container on a VM, no problem.  But what if I have 4?  And multiple servers running multiple containers?  Container sprawl!  Port sprawl!  Agh!

So other people said:  We will take care of this problem!  So they cobbled existing open source tools together to run it. You can use Mesos, Marathon, Consul, etc… Ugh..

Meanwhile Docker said:  Hey, we’re still here!  We created swarm to run it on multiple nodes.  Hurray for Docker!

Then in 2014 Google says:  You know, we’ve actually been running containers since forever and we know how to do it pretty well.  Anyway, we noticed that we’re having a hard time getting people to notice our superior cloud platform.  It is better than Amazon’s in every way except you’re not smart enough to see why.  Typical you.  Anyway, here is something called Kubernetes that will hopefully get people to notice our cloud, it is our gift… sort of.

And Kubernetes is awesome.  And people were like:  Wow, this is how we can all manage containers.  So people jumped on that.  A community was born!  People complained:  Docker is too restrictive!  It won’t accept my pull request!  It’s too monolithic.  Whatever.  Poor Docker.

So then the old PaaS vendors with their opinions changed and produced another opinion:  You know, Kubernetes is a project, so if you want to run it the best way, run it on our project.  And so they started having opinions about Kubernetes.  And OpenShift, Cloud Foundry, Apprenda, Tectonic, Rancher, etc all offered this to you for a reasonable price and a chance to feel like you were one of the cool kids.

Meanwhile Docker said:  We have our own product called Docker Datacenter.  And oh.. Kubernetes does that?  Ok, we’ll add that in.  And we are also very secure!

So that’s where the infrastructure people are at right now.  PaaS is basically container stuff.  Nothing else really matters.

Developers

These cool companies that had been around for a while that were perhaps “Born in the Cloud” started saying:  You know, this IaaS stuff is working pretty cool for us, but we have jobs that do other things.  You know, solving the real problems that plague society like Silicon Valley is known to do today:  How can I spy on my old girlfriend from High School?  How can I tell people that I’m having a rough day?  How can I exploit Taxi drivers and then replace them with machines some day?  These are the issues people.

So let’s imagine that someone uploads a photo to our super amazing site that lets you share photos “with people that you care about”.  They don’t want to maintain VMs for this.  They want something like PaaS but they don’t want to manage containers.  So Amazon says in November 2014:  Hey:  We have this thing called Lambda and it just executes functions in response to events.  So if you upload a photo, it will call this function.  We’ll package your function and run it on a container and we’ll manage it all for you.  Magical!

Pause here and let us all praise Amazon:  Oh AWS, you are so magical, so innovative, so insanely focused on your customers!  How shall we praise thee?  Selah!

Developers love it cause now we can write entire applications without creating virtual machines!  Wahoo!  We’ve finally freed ourselves from the shackles of the operating system!  No more patching.  It’s all the responsibility of the cloud providers.

Cloud providers are happy to provide it to you because now you will use more of their services (we’ve got you trapped!) and it frees up the VMs that you customers were letting idle and waste away anyway.

Mobile Developers

But it turns out that AWS wasn’t really listening to their customers as fast as you would think.  It was only when some other threats started to emerge that they moved.  Kudos to AWS for being aware of the threats.  Some companies I’ve worked for haven’t been as astute.  Back when we started mobile development, a two-person shop would start working on the next killer mobile app.  This was about 2008 and the mobile developers would spend time working on the front end and making it all work awesome.  But then they started to realize:  Hey, we could do a lot more cool things (like track our customers and steal their privacy) if we could upload this app information to a cloud platform.  But back in those days they couldn’t afford a system administrator (oh sorry, full stack reliability engineer) so what were they to do?

Two great companies were formed to solve this problem and others have emerged since.  Parse.com and Firebase were created in 2011.  Parse was bought by Facebook and Firebase by Google.  These companies offered a dashboard to mobile app developers that basically offered SaaS to developers.  These services back then were called Backend as a Service (BaaS).  And what more is serverless than creating an application that runs in the cloud?  Function as a service is just the glue that combines the other elements of the backend.  So in a way, the mobile app developers created serverless.  Right?


Today

Where AWS and now others have it right is that those serverless systems can go faster because they use containers underneath.  You see, Serverless is the combination of all the developments of these different players.  Their needs, passions, and desires, all being fulfilled and packaged in a grand thing called Serverless.

Serverless today has a couple of characteristics that make it great:

  • You don’t have to manage operating systems (like with IaaS, or Containers as a Service (the new PaaS?))
  • You pay by the transactions instead of by the hour like IaaS
  • You buy into a bigger ecosystem of applications that are written for app developers:  A database, an identity service, a notification service, an object storage service… Function as a service is a way to tie those together.

The cost is cheaper for everyone, the velocity is vigorous, and the enjoyment is beyond euphoria.


Getting around Terraform Indexing

Terraform has some good ways to do interpolation that can be simple.  However, as I’ve tried to make Terraform do more of what Ansible does, which might be outside its scope, I’ve run into issues.

One issue I had was creating a list of all the servers in the cluster that I wanted to define static routes for.  To do this I used Terraform’s built-in formatlist function.  The issue I had was that I couldn’t get a good index on it.  I searched and found this issue.  Seems I’m not the only one with the problem!  So that is comforting.  I fought with it for a few minutes then finally went to bed exhausted.  This morning I woke up renewed and thought of a great plan!  I love how your mind works while you sleep.

I defined my compute node with a metadata entry to keep the count!
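
The resource block wasn't captured here, so below is a reconstructed sketch assuming an OpenStack-style compute resource; the resource type, image, flavor, and network values are all placeholders, and the relevant part is the metadata map holding count.index:

    resource "openstack_compute_instance_v2" "worker" {
      count       = "${var.worker_count}"
      name        = "worker-${count.index}"
      image_name  = "${var.image}"         # placeholder
      flavor_name = "${var.flavor}"        # placeholder
      key_pair    = "${var.key_pair}"      # placeholder

      network {
        name = "${var.network}"            # placeholder
      }

      # store the iteration number on the instance itself so it can be
      # looked up later when building the list of servers
      metadata {
        worker_number = "${count.index}"
      }
    }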

Notice in the metadata section above where I give each node a worker_number, which just corresponds to the count.

Later, where I’m going through and creating a template with a list of all the servers I used this bad boy variable to give me the iteration number:

This works great and gives me the count that I need!

Clojurescript (CLJS) – ReactJS, Re-Frame, and Reagent

I’ve been using Re-frame pretty extensively these last few weeks as we begin to work on the front end of our horrendously stylized project called Pipeline.  It has been pretty tough to get the hang of but after being stubborn enough not to change my mind once going down this road I’m really starting to come around … and dare I say, actually understand how to make it work!

What’s the Big Idea?

There are a few components to what I’m talking about that I’ll go over below.  The big idea though is that web applications today are a big mess of javascript, html, and CSS that is somehow stitched together.  Most of today’s web applications aspire to be Single Page Applications (or SPA if you are into that whole brevity thing).  Several frameworks have emerged, with the (arguably) popular ones being Angular.js (from Google) and React.js (from Facebook).  The big picture though is:  How do you create a great Web front end to your application?  These frameworks typically call the APIs of a web backend which (hopefully) would be the same backend that a mobile application would call.

Clojurescript

Clojurescript is a LISP language.  It’s basically the same language/syntax as Clojure, but whereas Clojure compiles down to JVM bytecode, Clojurescript compiles down to Javascript.  Why would you want that?  Seems like overhead.  Well, the claim is that the Google Closure compiler (confusing name) is much more efficient and can make the resulting Javascript smaller and faster.

What is interesting to me as I go down this route is that most of the blog posts are from 2014 and earlier and I can’t tell if I’m pursuing a path that is no longer cool or what.  This is surprising to me because with the greater popularity of React these days this seems like a great path.

The downside to clojurescript is that the stack traces are difficult to comb through and understanding the whole immutability thing can be a struggle.

React

React is Facebook’s Javascript library for creating user interfaces.  The idea is that there is a virtual DOM that listens to changes in data (binds to data) and then responds immediately to the changes.  It’s funny that after using the library through Reagent I still don’t really understand React.  To me it’s just that the data changes and the web page is immediately updated.

It should be noted that React isn’t a full-featured framework like Angular, so some people say it’s unfair to compare them.  But I tend to like this blog’s argument of why you can.  And in my limited knowledge of things in the world I am easily persuaded by his post.  (Or should I say, I enjoy that he has validated my decision.)  I especially like the “unix philosophy” argument that takes the position of:  “Your tool should do one thing very well”.

Reagent

So you have React and you have Clojurescript and you want to make awesome happen.  So naturally people create frameworks to make this happen for you.  Om, Rum, and some others emerge and I think are still out there.  I settled on Reagent though cause it seemed pretty simple and easier for me to wrap my head around.  Reagent uses the hiccup syntax so it gets rid of having to do html tags and makes it easier to see.  When I used Ruby on Rails I would use HAML which was the same concept.

The result is that now when you are writing the application everything is in one language!  There’s no sprinkling of HTML mixed with Javascript; it’s all just Clojurescript!  This is the thing I most love about it.  It’s more akin, to me, to designing an iOS application in Swift where there is one language to do it all.

The other powerful aspect of Reagent is that it is functional.  Functional Reactive Programming (or FRP) means that interfaces respond to the data.  Or, “everything is data derived”.  There’s no ngThis or ngThat, it’s just functional stuff.

The issue with Reagent (or lack of design feature) is that it is silent on how to store data and interact with data (client side) as well as serverside.  This is where the grand daddy of them all comes into play:  Re-Frame

Re-Frame

I started with examining the README file on the Re-Frame project and was immediately blown away.  Yes!  This is the most epic readme of all projects.  The examples and docs were pretty clear and quite glorious.  I loved the opinionated nature of it and the humor.  (not the best reason to pick a framework, but not bad!)

Re-frame is a framework that works with Reagent to create the idea of a clientside database (or a database that lives in the browser).  It makes things fast and helps with state changes.  It has the idea of subscribing to changes or dispatching changes to the database, realizing that everything can be asynchronous but everything responds to changes in the data.  Data all the way down.

An Example

I put an example on github that you can see if you want to run for yourself.

The idea is that you have a list of names and you want to filter them in the search bar.  Here’s the main entry code that is called from the main reagent code that initializes this home page.
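
The original snippet wasn't captured here, so below is a reconstructed sketch of the shape it describes; the namespace and markup details are made up, but the search-string r/atom and the call to display-names at the bottom match the description:

    (ns example.views.home
      (:require [clojure.string :as str]
                [reagent.core :as r]))

    (declare display-names)   ;; defined in the next snippet

    (defn home-page []
      ;; search-string is the r/atom: the piece of data that changes
      (let [search-string (r/atom "")]
        (fn []
          [:div
           [:h3 "Filter names"]
           [:input {:type        "text"
                    :placeholder "search"
                    :value       @search-string
                    :on-change   #(reset! search-string (-> % .-target .-value))}]
           ;; the call at the bottom: display-names re-renders automatically
           ;; whenever search-string changes
           [display-names search-string]])))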

I have several bootstrap tags in there that I took out of the code.  The tricky thing is to notice the reagent r/atom, which is the data that changes.  This code doesn’t have any re-frame code in it.  Basically it displays a form.  Notice the code at the bottom that calls display-names.  That code is where the filtering happens.  When the input is changed, the search-string changes.  You never have to re-call display-names; it is always called and responds to changes in the data.  So great!
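
And a reconstructed sketch of display-names itself, with the names def living outside the function as described below (the actual list of names is made up):

    ;; same namespace as the snippet above (str is clojure.string)
    (def names
      ["amy" "bob" "carlos" "dharma" "vallard"])

    (defn display-names [search-string]
      ;; render only the names that match the current filter
      [:ul
       (for [n names
             :when (str/includes? n @search-string)]
         ^{:key n} [:li n])])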

Above is the display-names code that takes the list of names (defined outside the function in the def, or variable, called “names”).  It goes through and prints a name if it matches the filter.