Why Docker Changes Everything

There are many shifts we talk about in IT.  Here are two recent examples that come to mind, ones most people are familiar with:

  • The shift from physical to virtual
  • The shift from on-prem infrastructure to cloud-based consumption IT

Ultimately it is an organization with a clear vision that disrupts the status quo and creates a more efficient way of doing things.  Those two examples were the result of the brilliant people who started VMware and Amazon Web Services (AWS).

VMware was incredible and made it so easy.  How many vMotion demos did you see?  They nailed it, and their execution was outstanding.  And so vCenter with ESX still stands as their greatest achievement.  But unfortunately, nothing they've done since has really mattered.  Certainly they are pounding the drum on NSX, but the implications and the payoffs are nothing like what an organization got when it went from physical to virtual (P2V).  vCAC, or vRealize, or whatever they decided to rename it this quarter, is not catching on.  And vCheese, or vCHS, or vCloud Air (because 'Air' worked for Apple) isn't all that either.  I don't have any customers using it.  I will admit, I'm pretty excited about VSAN, but until they get deduplication, it's just not that hot, and solutions from providers like SimpliVity have more capabilities.  But there can be no doubt that vSphere/ESX is their greatest success.

AWS disrupted everybody.  They basically killed the poor hardware salespeople who worked in the SMB space.  And AWS remains far out in the leadership quadrant.

My customers, even the ones I could argue are not on the forefront of technology, are migrating to AWS and Azure.  Lately, I've been reading more and more about DigitalOcean.  I remember hearing about Utility Computing at IBM back in 2003.  We even opened an OnDemand Center, where customers could rent compute cycles.  If we had had the clear vision and leadership that AWS had, we could have made it huge at that time.  (Did I just imply that IBM leadership missed a huge opportunity?  I sure did.)

Well, today we have another company emerging on the forefront of technology, one that is showing all the others the way and will soon cause massive disruption.  That company is Docker.  Docker will soon be found on every public and private cloud.  And the implications are huge.

What Are Containers?

I am so excited about Docker I can barely contain myself.  Get it?  Contain... Docker is a container?  No?  OK, let me explain Docker.  I was working for IBM around 2005, and we were visiting UCS talking about xCAT and supercomputers, and the admins there started telling us about Linux containers, or LXC.  It was based on the idea of a union file system: you can overlay files on top of each other in layers.  So let's say your base operating system has a file called /tmp/iamafile with the contents "Hi, I'm a file".  You can create a container (which I have heard explained as a chroot environment on steroids, because it's basically mapped over the same root).  In this container, you can open the file /tmp/iamafile and change the contents to "I am also a file modified".  That change is a copy-on-write: only the container sees it, and only the container saves it.  The underlying file on the root operating system sees no change.  It's still the same file that says "Hi, I'm a file".  Only the instance in the container has changed.
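
If you want to see that copy-on-write behavior for yourself, here is a rough sketch using Docker and the stock ubuntu:14.04 image (the file name and contents are just the illustration from above, not anything the image actually ships):

# Create the file inside a container; the write is copy-on-write,
# so it lands in the container's layer, not in the image:
docker run ubuntu:14.04 bash -c 'echo "I am also a file modified" > /tmp/iamafile; cat /tmp/iamafile'

# A fresh container from the same image shows the image is untouched:
docker run ubuntu:14.04 ls /tmp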

It's not just files that are contained in the container; it's also processes.  You can run a process in the container, and it runs in its own isolated environment.
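
The process isolation is just as easy to see.  A quick sketch, again assuming the stock ubuntu:14.04 image:

# The container gets its own process namespace: a process listing shows
# only what runs inside the container, nothing from the host.
docker run ubuntu:14.04 ps aux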

That technology, while super cool, seemed to be relegated to the cute-things-really-geeky-people-do category.  I dismissed it and never thought about it again until I saw Docker was using it.

Why Docker Made Containers Cool

Here are the things Docker did that make it so cool, and why it will disrupt many businesses:

1. They Created a Build Process for Containers

How do most people manage VMs?  Well, they make this golden image of a server.  They put all the right things in it.  Some more advanced people will script it, but more often than not, I just see some blessed black box.

[Image: our blessed black box]

We don’t know how this image came to be, or what’s in it.  But it’s blessed.  It has all our security patches, configuration files, etc.  Hopefully that guy who knows how to build it all doesn’t quit.

This is not reproducible.  The cool kids then say: "We'll use Puppet because it can script everything for us."  Then they talk about how they know this cool Puppet file language that no one else knows, and they feel like puppet masters.  So they build configuration management around the image.

With Docker this is not necessary.  I just create a Dockerfile that has all the things in it that a Puppet script or Chef recipe had, and I'm done.  But not only that, I can see how the image was created.  I have a blueprint whose syntax is super easy: there are only about a dozen instructions.  Everything else is just the commands you would run to build the thing by hand.

We also don't try to abstract things away like Puppet and Chef do.  We just say we want the container to install nginx, and it does it by running:

RUN apt-get -y install nginx

That’s it.  Super easy to tell what’s going on.  Then you just build your image.
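
For the curious, the complete Dockerfile for that nginx example might look something like this.  Treat it as a minimal sketch: the base image and the foreground flag are my choices, not the only way to do it.

# Start from a stock base image and layer nginx on top of it.
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install nginx

# Expose the web port and run nginx in the foreground so the container stays up.
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Then docker build -t myorg/nginx . turns that blueprint into an image (myorg/nginx is a name I made up; use your own).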

2. Containers Are Modular

The build process with Docker is incredible.  When I first looked at it, I thought: great, my Dockerfile is going to look like one of those giant monolithic Kickstart post-install scripts I used to write when installing Red Hat servers.

But that's not how it works.  Containers build off of each other.  For the application I'm working on now, I have an NGINX container, a Ruby container, an app server container, and my application container sitting on top of that.  It's almost like having modules.  The modules just sit on top of each other.  Turtles all the way down.

This way I can mix and match different containers.  I might want to reuse my NGINX container with my Python app.
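
In Dockerfile terms, that stacking is just a FROM line.  A hedged sketch, reusing the hypothetical myorg/nginx image from the build example above:

# My app's Dockerfile layers the application on top of my nginx image.
FROM myorg/nginx
ADD . /opt/myapp

Swap the FROM line and the same application layer rides on a different base.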

3. Containers Are in the Cloud: Docker Hub

What GitHub did for code, Docker has done for system management.  I can browse how other people build nginx servers.  I can see what they do when I look at Docker Hub.  I can even use their images.  But while that's cool and all, the most important aspect?  I can download those containers and run them anywhere.

You have an AWS account?  You can just grab a Docker image and run it there.  Want to run it on DigitalOcean?  OpenStack?  VMware?  Just download the Docker image.  It's almost like we put your VM templates in the cloud and can pull them anywhere.
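
Concretely, "run it anywhere" comes down to the same two commands on every one of those clouds (sticking with my hypothetical myorg/nginx image):

# Pull the image from Docker Hub and run it, whatever cloud is underneath:
docker pull myorg/nginx
docker run -d -p 80:80 myorg/nginx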

What this gives us is app portability.  Amazing app portability.  All I need is a generic Ubuntu 14.04 server, and I can get my application running on it faster than any way I've been able to before.

How many different kinds of instances would I need in AWS?  How many templates in my vCenter cluster?  Really, just one: one that is ready to run Docker.

Who Gets Disrupted?

So now that we have extreme portability, we start to see how this new model can disrupt all kinds of "value added" technologies that other companies have built to fill the void.

Configuration tools – Puppet, Chef, I don't need to learn you anymore.  I don't need your agents asking to update and checking in anymore.  Sure, there are some corner cases, but for the most part I'll probably just stick with a push-method tool like Ansible that can run the few commands needed to get my server ready for Docker.  The rest of the configuration is done in the Dockerfile.
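
And "a few commands" is not an exaggeration.  On Ubuntu 14.04 the whole job of getting a host Docker-ready is roughly this (the package name varies by distro, so treat it as a sketch):

# Install Docker from the stock Ubuntu 14.04 repositories:
sudo apt-get update
sudo apt-get -y install docker.io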

Deployment tools – I used Capistrano for deploying apps on a server.  I don't need you anymore either.  Docker images do that for me.
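
With images, a deploy becomes pull-and-replace.  A rough sketch with a hypothetical myorg/myapp image:

# Pull the new version, then swap out the running container:
docker pull myorg/myapp:v2
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 80:80 myorg/myapp:v2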

VMware – Look at this blog post from VMware and then come back and tell me why Docker needs VMware.  Chris Wolf tries to explain how the vCloud Suite can extend the value of containers.  None of those arguments hold much value for me.  Just as VMware commoditized the infrastructure, Docker is now commoditizing the virtual infrastructure.

AWS – Basically any cloud provider is going to get commoditized.  The only way AWS or Google App Engine or Azure can get you to stick around is by offering low prices or by getting you hooked on their APIs and services.  So if you are using DynamoDB, you can't get that anywhere but AWS, so you are hooked.  But if I write my container to provide that capability, I can move it to any cloud I want.  This means it's up to the cloud providers to innovate.  It will be interesting to hear what more AWS says about containers at re:Invent next week.

Who Can Win?

Docker is already going to win.  But beyond Docker, I think Cisco has potential here.  Cisco wants to create the Intercloud.  What better way to transport workloads through the Intercloud than with Docker containers?  Here is where Cisco needs to execute:

1.  They have to figure out networking with Docker.  Check out what Cisco has done with the 1000v on KVM.  It works now, today, with containers.  But there's more that needs to be done.  A more centralized, user-friendly approach, perhaps?  A GUI?

2.  They have to integrate it into Intercloud Director and somehow get the right hooks in place to make it even easier.  Intercloud Director is off to a pretty good start.  It can already transport VMs from your private data center to AWS, Azure, and Dimension Data clouds, but it still has a way to go.  If we had better visibility into the network utilization of our containers and VMs, both on-prem and off-prem, we would really have a great solution.

What About Windows?

Fine, you say.  All good, but our applications run on Microsoft servers, so this won't work for us.  Well, you probably missed this announcement.  So yeah: coming soon to a server near you.

Conclusion

It's 2014.  Docker is about a year old.  I think we're going to hear lots more about it.  Want another reason why it's great?  It's open source!  So start playing with Docker now.  You'll be glad you got ahead of the curve.

 
