Rails API Testing or “You’ve been doing it wrong”

For the last several years I’ve been developing Ruby on Rails applications.  Nothing fancy, and most of them were just failed projects that didn’t go anywhere.  During all this time I’ve trawled through documentation on testing, saying “That would be nice to do one day”, but I’ve finally had enough and I want to change my evil ways. I’m going to start writing test cases with my applications.

To start with, I’m testing my API.  I didn’t even know where to start.  I watched a Railscasts episode where the legendary Ryan Bates talks about how he tests.  There were some great comments in there about all these additional plugins.  It seemed pretty overwhelming to me at first and there was quite a learning curve.  So I wanted to write some of this down in case there is someone out there like me who is about to start down the path of correcting the error of their ways before 2015 hits.

Basic Rails Testing

The first thing to check was the guide.  Rails has great documentation!

I ran rake test just to see if I would get any output.  After all, Rails generates these test stubs for you automatically, right?  Lo and behold, I got a failure!

Well, first I was missing the test environment, so I added one to config/database.yml.  Easy problem to solve!
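If you’re starting from nothing, the test stanza is just another environment block in that file.  A minimal sketch, assuming SQLite (your adapter and database name may differ):

test:
  adapter: sqlite3
  database: db/test.sqlite3
  pool: 5
  timeout: 5000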

Next, my welcome controller gave me errors:

Ok, I already like this.  Looks like I need to initialize my test database?

Turns out that Devise is the culprit.  A quick trip to StackOverflow solved the problem.  I fixed it like they said.  Then I ran into another Devise issue, which I fixed with an addition to my test/test_helper.rb file.  It’s so nice to go over roads that other people have traveled!
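I won’t swear this is exactly what I pasted in, but the usual Devise addition to test/test_helper.rb looks something like this (it mixes Devise’s sign-in helpers into controller tests):

class ActionController::TestCase
  include Devise::TestHelpers
end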

Running my test:

And I finally got tests to work!  Yay!  It looks like I’m doing it right.

From here, I looked to test other API calls, but all the documentation said that I should probably start looking at RSpec.  Apparently, that’s how the cool kids are doing it.  (Or were doing it at some point when they wrote about how to do it.)  So after running rake test, that was the last testing I did with the framework that ships with Rails.

RSpec

This is the latest hotness in testing that I could find in my research.  Pretty much everybody seems to be using it.  I edited my Gemfile and added rspec-rails.  I also finally started grouping gems so I wouldn’t install these test-only gems on my production servers.  Spoiler alert: my completed Gemfile ends up with a group along these lines:
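Something like this (the exact grouping and versions will vary; these are just the gems discussed in this post):

group :development, :test do
  gem 'rspec-rails'
  gem 'factory_girl_rails'
  gem 'ffaker'
  gem 'shoulda-matchers'
end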

As you can see, I added a few more gems after rspec-rails, but I’ll get into those in a second.

After doing bundle install I ran:
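That was the standard rspec-rails install generator, which creates spec/spec_helper.rb and spec/rails_helper.rb:

rails generate rspec:install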

Now to test we run

bundle exec rspec

Ok, no tests to do yet!  Now to get to work!

Factory Girl, FFaker, and other setup

The next step was to add Factory Girl.  Once again, Ryan Bates explains why Factory Girl is preferred over fixtures.  I went back and added that to my Gemfile along with ffaker because I saw some cool things in that gem.  (The one thing not cool about ffaker was the documentation, but the code was easy enough to read.)

Next, I modified config/application.rb as specified in this blog entry.
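I don’t remember the blog’s exact snippet, but the usual tweak is pointing the Rails generators at RSpec and Factory Girl, something like:

config.generators do |g|
  g.test_framework :rspec
  g.fixture_replacement :factory_girl, dir: 'spec/factories'
end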

I also had to add these modules into the rest of the environment.  I changed the spec/rails_helper.rb to have the below.  Everything else stayed the same:
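Assuming the stock rspec-rails rails_helper.rb, the change that matters is making sure everything in spec/support gets loaded:

# spec/rails_helper.rb (excerpt)
Dir[Rails.root.join('spec/support/**/*.rb')].each { |f| require f }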

Then I added the directory:

mkdir spec/support

I added the file  spec/support/devise.rb

as well as the file  spec/support/factory_girl.rb

That has all my extra libraries used for my tests.
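For reference, typical contents for those two support files (the standard Devise and Factory Girl RSpec hooks; yours may differ slightly) would be:

# spec/support/devise.rb -- mixes Devise's sign_in/sign_out helpers into controller specs
RSpec.configure do |config|
  config.include Devise::TestHelpers, type: :controller
end

# spec/support/factory_girl.rb -- lets specs call create(:user) instead of FactoryGirl.create(:user)
RSpec.configure do |config|
  config.include FactoryGirl::Syntax::Methods
end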

Lastly, I set up the test database:

rake db:test:prepare

A Basic Test

Now to set up some tests.  I thought it best to start off simple with a static page:

rails g rspec:controller welcome

This is the root of the homepage.  Following the documentation, I added some simple tests for the welcome page:
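A sketch of what those look like (the action and template names assume a stock welcome#index root page):

# spec/controllers/welcome_controller_spec.rb
require 'rails_helper'

RSpec.describe WelcomeController, type: :controller do
  describe 'GET #index' do
    it 'returns http success' do
      get :index
      expect(response).to have_http_status(:success)
    end

    it 'renders the index template' do
      get :index
      expect(response).to render_template(:index)
    end
  end
end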

I ran bundle exec rspec  and it worked.  (Though not at first, as I had to figure out how to configure everything like I set up above.)

Testing User Model

rails generate rspec:model user

I ran this since I already have a user model.  The list of spec modules to add is listed here.

Since we have a model we are testing, we need to generate the fixture for it.  Here’s how I made it work with my Devise implementation:

spec/factories/user.rb
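Something along these lines (the password value is just a placeholder; the important part is the block around the email):

FactoryGirl.define do
  factory :user do
    email { Faker::Internet.email }   # the block makes this lazy, so every user gets a fresh email
    password 'password123'
    password_confirmation 'password123'
  end
end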

The part that stumped me for a while was forgetting to put the { } around Faker::Internet.email.  Since my tests check for unique emails, they kept failing.  Putting the { } around Faker::Internet.email makes it a lazy attribute, so it is evaluated (and therefore unique) on each call.

There’s a lot of documentation on cleaning up the database using the database_cleaner gem.  I’m not using it right now.  ffaker generates all kinds of new data for me, so I don’t worry about it.  I suppose the database would need to be reset from time to time, though.

This could be accomplished by re-running rake db:test:prepare.

Next I added the user model test

spec/models/user_spec.rb
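A sketch of a starting point (the specific validations are assumptions for a typical Devise user model):

require 'rails_helper'

RSpec.describe User, type: :model do
  it { should validate_presence_of(:email) }
  it { should validate_presence_of(:password) }

  it 'has a valid factory' do
    expect(build(:user)).to be_valid
  end
end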

This uses the shoulda-matchers gem quite heavily and seems to be a good start to testing my user model.  Unit test: check!

 Testing the API Controller

Next, I wanted to test the API call used when users authenticate.  The way my API works (and the way I assume most work) is that the user sends a username (or email) and password, and the application sends back an API token.  This makes subsequent calls stateless.  So I’ll test my session login controller:

rails g rspec:controller api/v1/sessions

I was happy to see it created spec/controllers/api/v1/sessions_controller_spec.rb, matching the way I have my API laid out!

Here’s the first version of my working sessions_controller_spec.rb file:
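A sketch of what it covers (the action name, params, and JSON keys are assumptions about the token API described above):

require 'rails_helper'

RSpec.describe Api::V1::SessionsController, type: :controller do
  describe 'POST #create' do
    let(:user) { create(:user) }

    it 'returns an API token for valid credentials' do
      post :create, email: user.email, password: user.password
      expect(response).to have_http_status(:success)
      json = JSON.parse(response.body)
      expect(json['token']).to be_present
    end

    it 'rejects bad credentials' do
      post :create, email: user.email, password: 'wrong'
      expect(response).to have_http_status(:unauthorized)
    end
  end
end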

Running this test:

Wow!  I feel like a real hipster programmer now!  Testing!

More Tests

This is just the beginning.  I am now a convert and subscribe to the theory that you should spend about half of your time writing test cases.  It pays off in the long run.  I have more to go, but from now on, with each line of code I write, I’ll be writing test cases.  It seems I still have a lot of catching up to do on the current system.


Boot2Docker with Cisco AnyConnect

Boot2Docker is an OS X app used to create a virtual environment for Docker.  Docker only runs on Linux, so Boot2Docker installs a VM on your Mac (using VirtualBox) and a client that runs locally to communicate with the VM.

I downloaded this and followed the instructions.  You basically just install it with a few clicks.  Once installed, Boot2Docker will be in your Applications folder.  You click on it there and you are ready to go.  It kicks off its own terminal window.  Since I use iTerm2, I just start it like so:

boot2docker up

This will give you a few environment variables to export:
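The exact values depend on your machine, but they look something like this:

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1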

This starts up a VM and Docker daemon that can be used to work with docker.

Once this was up, I ran docker run hello-world.  This gave me a friendly message that everything was up.  So, following its suggestion, I ran docker run -it --rm ubuntu bash.  This took a bit longer to finish as it had to download the ubuntu image.  Subsequent launches take less than a second.

There is another project called Kitematic I dabbled with, but I was happy enough with Boot2Docker that I didn’t bother pursuing it.

Cisco AnyConnect VPN problem:

There is an issue with using Boot2Docker and Cisco AnyConnect VPN.  Basically it’s this: you can’t run any docker commands, because AnyConnect doesn’t allow split tunneling and grabs the route to the boot2docker VM.

What’s worse is that after terminating a VPN session with AnyConnect (disconnecting), I have to re-establish a static route so that I can talk to boot2docker again:
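Something like the following, assuming the default boot2docker subnet and the usual VirtualBox host-only interface (yours may differ):

sudo route -n add 192.168.59.0/24 -interface vboxnet0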

To get around this, the fix is to route your docker calls through localhost.  That way, regardless of whether you are connected to the VPN or on an island somewhere (or both), you can still connect.

1. Start from scratch

boot2docker delete

2.  Create new boot2docker image

boot2docker init

3.  Edit the VirtualBox settings for the boot2docker VM’s NAT adapter.


Select ‘Port Forwarding’

4.  Add the Docker port forwarding.


Click ok and exit VirtualBox.

5. Start up the Docker VM

 6.  Export localhost:
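In my case that meant pointing the Docker client at 127.0.0.1 instead of the VM’s host-only address, roughly like this (you may also need to relax or regenerate the TLS certs, since they are issued for the VM’s IP):

boot2docker up
export DOCKER_HOST=tcp://127.0.0.1:2376
export DOCKER_CERT_PATH=~/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1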

 7.  Drawbacks and Caveats

Now you have exposed Docker to the world.  For any service that you put on there, like when you launch docker run -p 80:80, you’ll have to go into VirtualBox and map port 80 to 80 so that it shows up.  Not the greatest solution, but at least it works!

Credits: boot2docker github tracker @jchauncey and @nickmarden. Thanks guys!!!


iOS icon size generator

One of the things you run into while developing iOS applications is that you need several different icon sizes for the various devices.  For an iOS 8 iPhone application, you need at least 4 different sizes: 58, 80, 120, and 180 pixels.  Not to mention the main icon size for the App Store.  If you develop a universal app that also runs on the iPad, there are even more!

I’m sure there are tons of things people use to do this but I thought I’d throw my own solution in the mix as well.

I use ImageMagick with a python script I wrote.  I like it because it’s quick and easy.  Plus, if you are developing in Ruby on Rails and need something like CarrierWave to upload images, you’ll already have ImageMagick installed.

Here’s how it works:

1.  Install ImageMagick

brew install imagemagick

2.  Use my python script
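My script boils down to looping over the sizes and shelling out to ImageMagick; here’s a sketch along those lines (the size list and output filenames are just examples):

#!/usr/bin/env python
# Resize a source icon into a set of iOS icon sizes using ImageMagick's convert.
import subprocess
import sys

SIZES = [29, 40, 58, 76, 80, 120, 152, 180, 1024]  # pixel sizes; adjust for your targets

def make_icons(source):
    for size in SIZES:
        out = "icon-%dx%d.png" % (size, size)
        # convert <source> -resize <WxH> <out>
        subprocess.check_call(["convert", source, "-resize", "%dx%d" % (size, size), out])
        print("wrote %s" % out)

if __name__ == "__main__":
    src = sys.argv[1] if len(sys.argv) > 1 else "icon2@1024.png"
    make_icons(src)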

 Run the Python Script

With your original-sized icon in the same directory as this script, run the script.  Here’s my output example.  The file I put in the directory is called icon2@1024.png.

As you can see, it generated all the image sizes you would need for your asset images.


Cisco, Cluster on Die, and the E5-2600v3

I was in a meeting where someone noted that VMware does not support the Cluster on Die feature of the new Intel Haswell E5-2600v3 processors.  The question came: “What performance degradation do we get by not having this supported?  And would it be better to instead use the M3 servers?”

They got this information from Cisco’s whitepaper on recommended BIOS settings for the new B200 M4, C240 M4, and C220 M4 servers.

The snarky answer is: you’re asking the wrong question.  The nice answer is: “The M4 will give better performance than the M3.”

Let’s understand a few things.

1.  Cluster on Die (CoD) is a new feature.  It wasn’t in the previous v2 processors.  So, all things being equal, running the v3 without it just means that you don’t get whatever increased ‘goodness’ it provides.

2.  Understand what CoD does.   This article does a good job of pointing out what it does.  Look at it this way: as each socket gets more and more cores with each iteration of Intel’s chips, you have a lot of cores looking into the same memory bank.  Latency goes up as they hunt for the correct data.  CoD carves up the regions so data stays closer to the cores that use it, almost like giving each core its own bank of cache and memory.  This helps keep latency down as more cores are added.

To sum it up:  If I have a C240 M3 with 12 cores per processor and I compare it to a C240 M4 with 12 cores per processor with CoD disabled, then I still have the same memory contention problem with the M4 cores that I did with the M3s.  When VMware eventually supports CoD, then you just get a speed enhancement.


Unauthorized UCS Invicta usage

I had a UCS Invicta that I needed to change from a scaling appliance (where more than one Invicta node is managed by several others) into a standalone appliance.  I thought this could be done in the field, but at this point I don’t think it’s an option.  The below info is just some things I tried in a lab, and I’m pretty sure they are not supported by Cisco.

While the UCS Invicta appliance has had its share of negative press and setbacks, I still have pretty high hopes for this flash array, primarily because the operating system is so well designed for solid state media.  It’s different, full featured, and I think it offers a lot of value for anyone in need of fast storage. (Everyone?)

The first step was to take out the InfiniBand card and put in the Fibre Channel card.  I put it in the exact slot the IB card was in.

putting the FC card in the Invicta

CIMC

I had to change the IP address.  I booted up, pressed F8, changed the IP address to 10.1.1.36/24, and didn’t touch the password.

The default CIMC password to login was admin / d3m0n0w! (source)

Console

It takes forever to boot up because there is no InfiniBand card in the appliance anymore.  I took it out and replaced it with a Fibre Channel HBA that came in one of the SSRs that managed the nodes.

The default console user/password is  console / cisco1 (source)

This didn’t work for me, I assumed because whoever had the system before me had changed it.  Fortunately for me, this is Linux.  I rebooted the system and added ‘single’ to the kernel line in the GRUB menu.   Yes, I securely compromised the system.  Such is the power of the Linux user.


Once on the command line I used the passwd command to change the root password to something I could log in with.  I took a look at the /etc/passwd file and noticed that there was no console user.  Hmm, guess that’s why I couldn’t log in.

I then rebooted.

Reformatting

There were three commands that looked pretty promising.  After digging around I found:

/usr/local/tools/invicta/factoryreset/

/usr/local/tools/invicta/menu  and the accompanying   menu.sh  command.

and

/usr/local/tools/invicta/support/secureWipe.sh 

I tried all of these and then rebooted the system.  Nothing seemed to indicate any progress toward turning this node into a standalone appliance.  I have a few emails out, and if I figure it out, I’ll post an update.

SSH tunnel for Adium Accounts

Sometimes you’ll be at a place where your IRC client can’t connect to something like irc.freenode.net.  This is where ‘behind corporate firewalls’ turns into ‘behind enemy lines’.  IRC is invaluable because there are people who can help you with your problems… but they are on the outside.

Here is how I solve that for a Mac OS running Adium as my IRC client.

First, you need a place on the internet you can ssh into.  If you have an AWS machine, that can work.  You run the following command:

ssh -D 127.0.0.1:3128 foo@some-server.com

This opens up the proxy connection.  Next, in Adium, open the account’s settings and input the following:

Adium proxy settings for SSH
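The settings boil down to pointing Adium at the local end of the tunnel (the exact labels depend on your Adium version):

Connect using proxy: checked
Proxy type: SOCKS5
Server: 127.0.0.1
Port: 3128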

Doing this allows me to see things on the #docker, #xcat IRC channels.

Ansible: From OSX to AWS

My goal in this post is to go from 0 to Ansible installed on my Mac and then be able to provision AWS instances ready to run Docker containers.  The code for this post is public on my github account.

OSX Setup

I am running OS X Yosemite.  I use brew to make things easy.  Install homebrew first.  This makes it easy to install Ansible:

brew install ansible

I have one machine up at AWS right now.  So let’s test talking to it.  First, we create the hosts file:
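With the Homebrew install, the default inventory lives at /usr/local/etc/ansible/hosts (more on that below), so:

mkdir -p /usr/local/etc/ansible
vim /usr/local/etc/ansible/hosts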

Now we put in our host:

instance1

I can do it this way because I have a file ~/.ssh/config  that looks like the following:
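Roughly like this (the hostname, user, and key file are placeholders for your own instance):

Host instance1
    HostName ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/my-aws-keypair.pem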

Now we can test:
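The usual smoke test is the ping module:

ansible all -m ping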

Where to go next? I downloaded the free PDF by @lhochstein that has 3 chapters going over the nuts and bolts of Ansible, so I was ready to bake an image.  But first let’s look at how Ansible is installed on OS X:

The default hosts file is, as we already saw, in /usr/local/etc/ansible/hosts.  We also have a config file we can create at ~/.ansible.cfg.  More on that later.

The other thing we have is the default modules that shipped with Ansible.  These are located in /usr/local/Cellar/ansible/1.7.2/share/ansible/ (if you’re running the same version as me).

If you look in this directory and its subdirectories you’ll see all the modules that Ansible comes with.  I think all of these modules have documentation in the code, but the easiest way to read the documentation is to run

ansible-doc <module-name>

Since we need to provision instances, we can look at the ec2 module:
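In this case that’s:

ansible-doc ec2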

This gives us a lot of information on the parameters you can use to deploy ec2 instances.

An nginx Playbook

Let’s take a step back and do something simple like deploy nginx on our instance using an Ansible Playbook.

I created an Ansible playbook called ~/Code/ansible/nginx.yml.  The contents are the following:
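A sketch of that playbook (the install and file destinations assume Ubuntu’s nginx package; instance1 is the host we set up earlier):

---
- hosts: instance1
  sudo: yes
  tasks:
    - name: install nginx
      apt: name=nginx state=present update_cache=yes

    - name: copy our nginx config
      copy: src=files/nginx.conf dest=/etc/nginx/sites-available/default
      notify: restart nginx

    - name: copy our index page
      copy: src=files/index.html dest=/usr/share/nginx/html/index.html

    - name: make sure nginx is running
      service: name=nginx state=started

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted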

I then created the file  ~/Code/ansible/files/nginx.conf
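A minimal stand-in for that config (just enough to serve the static page):

server {
    listen 80 default_server;
    root /usr/share/nginx/html;
    index index.html;
}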

Finally, I created the ~/Code/ansible/files/index.html
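Which can be as simple as:

<html><body><h1>Deployed by Ansible</h1></body></html>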

With this I run the command:

ansible-playbook nginx.yml
If you are lucky, you have cowsay installed.  If so, then you get the cow telling you what’s happening.  If not, you can install it:

brew install cowsay

Now, navigate to the IP address of the instance, and magic!  You have a web server configured by Ansible.  You can already see how useful this is!  Now, configuring a web server on an AWS instance is not today’s hotness.  The real hotness is creating a docker container that runs a web server.  So we can just tell Ansible to install docker.  From there, we would just install our docker containers and run them.

A Docker Playbook

In one of my previous blog entries, I showed the steps I took to get docker running on an Ubuntu image.  Let’s take those steps and put them in an Ansible playbook:
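A sketch of that playbook.  My earlier post walked through the apt steps by hand; this version assumes the Docker-maintained Ubuntu repo of the time, so the key ID, repo line, and package name come from those instructions rather than from that post:

---
- hosts: instance1
  sudo: yes
  tasks:
    - name: add the Docker repository key
      apt_key: keyserver=hkp://keyserver.ubuntu.com:80 id=36A1D7869245C8950F966E92D8576A8BA88D21E9

    - name: add the Docker apt repository
      apt_repository: repo='deb https://get.docker.com/ubuntu docker main' state=present update_cache=yes

    - name: install docker
      apt: name=lxc-docker state=present

    - name: make sure the docker daemon is running
      service: name=docker state=started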

Here we use some of the built-in modules from Ansible that deal with package management.   You can see the descriptions and what’s available by reading Ansible’s documentation.

We run this on our host:

ansible-playbook -vvv docker.yml 

And now we can ssh into the host and launch a container:

sudo docker run --rm -i -t ubuntu /bin/bash

This to me is the ultimate way to automate our infrastructure:  We use Ansible to create our instances.  We use Ansible to set up the environment for docker, then we use Ansible to deploy our containers.

All the work for our specific application settings is done with the Dockerfile.

Provisioning AWS Machines

Up until now, all of our host information has been done with one host: instance1 that we configured in our hosts file.  Ansible is much more powerful than that.   We’re going to modify our ~/.ansible.cfg  file to point to a different place for hosts:
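Something along these lines (the paths and key name are placeholders):

[defaults]
hostfile = ~/Code/Ansible/inventory
private_key_file = ~/.ssh/my-aws-keypair.pem
remote_user = ubuntu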

This uses my AWS keypair for logging into the remote servers I’m going to create.  I now need to create the inventory directory:

mkdir ~/Code/Ansible/inventory

Inside this directory I’m going to put a script: ec2.py.  This script comes with Ansible but the one that came with my distribution didn’t work.


The ec2.py file also expects an accompanying ec2.ini file:
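A trimmed-down ec2.ini looks something like this (the full file that ships with the script has many more options; the region is just an example):

[ec2]
regions = us-west-2
regions_exclude = us-gov-west-1, cn-north-1
destination_variable = public_dns_name
vpc_destination_variable = ip_address
cache_path = ~/.ansible/tmp
cache_max_age = 300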

You can modify this to suit your environment.  I’m also assuming you have boto installed already and a ~/.boto file.  If not, see how I created mine here.

Let’s see if we can now talk to our hosts:

ansible all -a date
Hopefully you got something back that looked like a date and not an error.   The nodes returned from this list will all be in the ec2 group.  I think there is a way to use tags to make them more distinct, but I haven’t had a chance to do that yet.

We now need to lay our directory structure out for something a little bigger.  The best practices for this are listed here.  My project is a little simpler, as I only have my ec2 hosts and I’m just playing with them.  This stuff can get serious.  You can explore how I lay out my directories and files by viewing my github repository.

The most interesting file of the new layout is my ~/Code/Ansible/roles/ec2/tasks/main.yml file.  This file looks like the below:
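A sketch of that task file (the {{ }} variable names are stand-ins for whatever you define in the vars file):

---
- name: provision an ec2 instance
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    wait: yes
  register: ec2

- name: wait for SSH to come up
  wait_for: host={{ item.public_dns_name }} port=22 delay=30 timeout=320 state=started
  with_items: ec2.instances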

I use a variable file that defines those {{ }} variables.  Again, check out my  github repo.   This file provisions a machine (similar to the configuration from the python post I did) and then waits for SSH to come up.

In my root directory I have a file called site.yml that tells the instances to come up and then go configure the instances.  Can you see how magic this is?
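site.yml itself can be tiny (the role and group names here follow the layout above; ‘docker’ is whatever role holds the docker setup tasks):

---
- name: provision the ec2 instances
  hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - ec2

- name: configure the new instances
  hosts: ec2
  sudo: yes
  roles:
    - docker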

we run:

ansible-playbook site.yml

This makes Ansible go and deploy one ec2 host.  It waits for it to become available, and then it ssh’s into the instance and sets up docker.  Our next step would be to create a few docker playbooks to run our applications.  Then we can completely create our production environment.

One step closer to automating all the things!

If you found errors, corrections, or just enjoyed the article, I’d love to hear from you: @vallard.


Remove an EC2 host with Ansible

I spent forever trying to understand the built-in Ansible ec2 modules.  What I was trying to figure out seemed simple: how do you delete all your ec2 instances?  Nothing too clever.  Turns out it was easier to create the instances than to delete them (for me anyway).  So I’m writing this down here so I remember.

I’ve also created an Ansible playbook github repository where I’ll be putting all the stuff I use.   I also plan on doing a post where I show how my environment was set up.

For now, here is the playbook:
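A sketch reconstructed from the description below (the region is a placeholder, and ec2_id is the host variable set by the ec2.py inventory script):

---
- hosts: ec2
  tasks:
    - name: gather facts about this instance
      action: ec2_facts

    - name: show everything that got populated (useful when ec2_id is empty for you)
      debug: var=hostvars[inventory_hostname]

    - name: terminate the instance
      local_action:
        module: ec2
        state: absent
        region: us-west-2
        instance_ids:
          - "{{ ec2_id }}"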

The tricky part for me was trying to get the data from the ec2_facts module into the rest of the playbook.  It turns out that ec2_facts loads the data into the hostvars[inventory_hostname] variable.  I was looking for instance_id as the variable, but it didn’t populate this.  This may be because I have an older version of Ansible (1.7.2), which is what comes installed with homebrew on the mac.

If you find this ec2_id variable doesn’t work for getting the instance ID of the AWS instance, then take a look at the debug statements to see what is populated in hostvars[inventory_hostname].

I should also point out that the host group it uses above, ‘ec2’, comes from the ec2.py executable that ships with Ansible, which I put in my inventory directory.

More on that in a coming post.

Why Docker Changes everything

There are many shifts we talk about in IT.  Here are 2 recent examples that come to mind, which most people are familiar with:

  • The shift from physical to virtual
  • The shift from on-prem infrastructure to cloud based consumption IT.

Ultimately it is an organization with a clear vision that disrupts the status quo and creates a more efficient way of doing things.  Those 2 examples were the result of the brilliant people who started VMware and Amazon Web Services (AWS).

VMware was incredible and made it so easy.  How many vMotion demos did you see?  They nailed it and their execution was outstanding.  And so vCenter with ESX still stands as their greatest achievement.  But unfortunately, nothing they’ve done since has really mattered.  Certainly they are pounding the drum on NSX, but the implications and the payoffs are nothing like what an organization got when it went p2v (physical to virtual).   VCAC, or vRealize, or whatever they decided to rename it this quarter, is not catching on.  And vCheese, or vCHS, or vCloud Air (cause ‘Air’ worked for Apple) isn’t all that either.  I don’t have any customers using it.  I will admit, I’m pretty excited about VSAN, but until they get deduplication, it’s just not that hot.  And solutions from providers like SimpliVity have more capabilities.   But there can be no doubt that vSphere/ESX is their greatest success.

AWS disrupted everybody.  They basically killed the poor hardware salesman that worked in the SMB space.   AWS remains far in the leadership quadrant.

My customers, even the ones I could argue that are not on the forefront of technology, are migrating to AWS and Azure.  Lately, I’ve been reading more and more about Digital Ocean.  I remember at IBM back in 2003 hearing about Utility Computing.  We even opened an OnDemand Center, where customers could rent compute cycles.  If we had the clear vision and leadership that AWS had, we could have made it huge at that time.  (Did I just imply that IBM leadership missed a huge opportunity?  I sure did.)

Well today we have another company that has emerged on the forefront of technology that is showing all others the way and will soon cause massive disruption.  That company is Docker.  Docker will soon be found on every public and private cloud.  And the implications are huge.

What are Containers?

I am so excited about Docker I can barely contain myself.  Get it?  Contain.. Docker is a container?  No?  Ok, let me explain Docker.  I was working for IBM around 2005 and we were visiting UCS talking about xCAT and supercomputers, and the admins there started telling us about Linux containers, or LXCs.  It was based on the idea of a union file system.  Basically, you could overlay files on top of each other in layers.  So let’s say your base operating system had a file called /tmp/iamafile with the contents “Hi, I’m a file”.  You could create a container (which I have heard explained as a chroot environment on steroids, because it’s basically mapped over the same root).  In this container, you could open the file /tmp/iamafile and change the contents to “I am also a file, modified”.  Now that file gets a copy-on-write.  Meaning, only the container will see the change.  It will also save the change.  But the underlying file on the root operating system sees no change.  It’s still the same file that says “Hi, I’m a file”.  Only the instance in the container has changed.

It’s not just files that can be contained in the container.  It’s also processes.  So you can run a process in the container, in its own environment.

That technology, while super cool, seemed to be relegated to the cute-things-really-geeky-people-do category.  I dismissed it and never thought about it again until I saw Docker was using it.

Why Docker made Containers Cool

Here are the things Docker gave that made it so cool, and why it will disrupt many businesses:

1.  They created a build process to create containers.

How do most people manage VMs?  Well, they make this golden image of a server.  They put all the right things in it.  Some more advanced people will script it, but more often than not, I just see some blessed black box.

Our blessed black box

We don’t know how this image came to be, or what’s in it.  But it’s blessed.  It has all our security patches, configuration files, etc.  Hopefully that guy who knows how to build it all doesn’t quit.

This is not reproducible.  The cool kids then say: “We’ll use Puppet because it can script everything for us”  Then they talk about how they know this cool puppet file language that no one else knows and feel like puppet masters.  So they build configuration management around the image.

With Docker this is not necessary.  I just create a Dockerfile that has everything in it that a Puppet script or Chef recipe would have, and I’m done.  But not only that, I can see how the image was created.  I have a blueprint that is syntactically super easy; there are only about a dozen instructions.  Everything else is just how you would build the code.

We also don’t try to abstract things away like Puppet and Chef do.  We just say we want the container to install nginx, and it does, by doing:

RUN apt-get -y install nginx

That’s it.  Super easy to tell what’s going on.  Then you just build your image.
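A tiny but complete example, just to show the flavor (the image tag is arbitrary):

# Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Build it with docker build -t my-nginx . and run it with docker run -d -p 80:80 my-nginx, and you have a web server whose entire build history is those four lines.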

2. Containers are Modular

The build process with Docker is incredible.  When I first looked at it, I thought: great, my Dockerfile is going to look like a giant monolithic kickstart post-script like the ones I used to make when installing Red Hat servers.

But that’s not how it works.  Containers build off of each other.  For the application I’m working on now, I have an NGINX container, a Ruby container, an app server container, and my application container sitting on top of that.  It’s almost like having modules.  The modules just sit on top of each other.  Turtles all the way down.

This way I can mix and match different containers.  I might want to reuse my NGINX container with my python app.

3.  Containers are in the Cloud: Docker Hub

What github did for code, Docker has done for system management.  I can browse how other people build nginx servers.  I can see what they do when I look at Docker Hub.  I can even use their images.  But while that’s cool and all, the most important aspect?  I can download those containers and run them anywhere.

You have an AWS account?  You can just grab a docker image and run it there.  Want to run it on Digital Ocean?  OpenStack? VMware?  Just download the docker image.  It’s almost like we put our VM templates in the cloud and can pull them down anywhere.

What this gives us is app portability.  Amazing app portability.  All I need is a generic Ubuntu 14.04 server and I can get my application running on it faster than any way I’ve been able to do it before.

How many different kinds of instances would I need in AWS?  How many templates in my vCenter cluster?  Really, just one: one that is ready for docker to run.

Who Gets Disrupted?

So now that we have extreme portability, we start to see how this new model can disrupt all kinds of “value added” technologies that other companies have built to fill the void.

Configuration tools – Puppet, Chef, I don’t need to learn you anymore.  I don’t need your agents asking to update and checking in anymore.  Sure, there are some corner cases, but for the most part, I’ll probably just stick with a push-based tool like Ansible that needs just a few commands to get my server ready for Docker.  The rest of the configuration is done in the Dockerfile.

Deployment tools – I used Capistrano for deploying apps on a server.  I don’t need to deploy that way anymore.  Docker images do that for me.

VMware – Look at this blog post from VMware and then come back and tell me why Docker needs VMware.  Chris Wolf tries to explain how the vCloud Suite can extend the value of containers.  None of it seems like much value to me.  Just as VMware has tried to commoditize the infrastructure, Docker is now commoditizing the virtual infrastructure.

AWS – Basically any cloud provider is going to get commoditized.  The only way AWS or Google App Engine or Azure can get you to stick around is by offering low prices OR by getting you hooked on their APIs and services.  So if you are using DynamoDB, you can’t get that anywhere but AWS, so you are hooked.  But if I write my container so that it provides that capability itself, I can move it to any cloud I want.  This means it’s up to the cloud providers to innovate.  It will be interesting to hear what more AWS says about containers at Re:Invent next week.

Who Can Win?

Docker is already going to win.  But more than Docker, I think Cisco has potential here.  Cisco wants to create the Intercloud.  What better way to transport workloads through the Intercloud than with docker containers?  Here is where Cisco needs to execute:

1.  They have to figure out networking with Docker.   Check out what Cisco has done with the 1000v on KVM.  It works now, today, with containers.  But there’s more that needs to be done.  A more central, user-friendly way perhaps?  A GUI?

2.  They have to integrate it into Intercloud Director and somehow get the right hooks to make it even easier.   Intercloud Director is off to a pretty good start.  It can already transport VMs from your private data center to AWS, Azure, and Dimension Data clouds, but it has a ways to go.  If we had better visibility into the network utilization of our containers and VMs, both on-prem and off-prem, we would really have a great solution.

What about Windows?

Fine, you say.  All good, but our applications run on Microsoft Servers.  So this won’t work for us.  Well, you probably missed this announcement.  So yeah, coming soon, to a server near you.

Conclusion

It’s 2014.  Docker is about a year old.  I think we’re going to hear lots more about it.  Want another reason why it’s great?  It’s open source!  So start playing with Docker now.  You’ll be glad you got ahead of the curve.


On reading about choosing between NSX and ACI

I consider myself very fortunate to work in the IT industry.  Not only do I get to develop and deploy technologies that enhance the world we live in, but I also get more drama from the different companies than from a soap opera.  Take, for example, the story of how Jayshree left Cisco to help build Arista.  There’s also the story of how VMware bought Nicira and caused disruption in the EMC–Cisco partnership.  I don’t know the full extent of any of these stories.  I’m just a spectator; I focus day to day on my own activities and try to do things that matter to organizations.

But like a spectator watching the Golden Bears win or lose on any given week in college football, I’m entitled to my opinions as well.  In fact, everybody is.  I tell this to my kids all the time.  This quote from Steve Jobs nails it:

“Life can be much broader once you discover one simple fact: Everything around you that you call life was made up by people that were no smarter than you and you can change it, you can influence it, you can build your own things that other people can use.”

NSX and ACI were made by very smart people.  But the people who have opinions about them and have blogs like the one you’re reading now aren’t necessarily any smarter than you.  We try to influence opinions, and some have been more successful than others.  Brad has an excellent blog and I’ve learned a lot from it.  But like a U2 album, not every one of their songs is a hit.

My latest opinion, after reading his article On Choosing VMware NSX or Cisco ACI, is that someone is wrong on the Internet.

Duty Calls from xkcd

In a big part of the article, Brad compares a physical network switch to a TV stand, and what NSX does to the television.  He then compares ACI to an adjustable TV stand, complete with remote, and says:

“You’ll also need to convince people that it makes more sense to buy televisions from an electronics company; and television stands should be bought from a television stand company.”

Umm.  Not quite.  This overlooks all the value ACI brings.

Let’s liken NSX to a network overlay, which is what it is.  Let’s liken the Nexus 9000 in ACI mode to a network switch that has overlay technology built in, which is what it is.  It’s real simple:  With NSX you manage 2 networks.  With ACI you manage one integrated network.

And you manage both with software.   With ACI you put each server into an endpoint group.  They are either physical or virtual.  You can still use the same VMware DVS with ACI.  It then encapsulates that VLAN or VXLAN into an endpoint group and allows those groups to talk to each other in the fabric.

Here’s another analogy.  NSX is like a cute Christmas sweater on a nice day.  Sure, you’ll get a lot of people to look at it.  You’ll get some laughs and some comments that will make you feel good.  But what’s important is the programmability of the system.  And on warm days, you really don’t need or use that cute outer sweater.

the Joy of using NSX

I will concede the NSX GUI looks great!  VMware has always done a great job of making things look good, and there’s a reason VMware is the number one hypervisor in the industry.  But companies evolve.  VMware evolves into networking.  Cisco evolves into software.  So does your organization.  Your organization needs solid APIs if you want to program everything.  And if we’re doing it this way, we don’t need a sexy GUI to automate all of this; I need those solid APIs.  Since Cisco introduced UCS it has been serious about APIs.  In fact, what other x86 platform has a more solid API than UCS?  As Cisco continues to invest in software to drive its products, ACI has become that next big thing.  But it’s a whole new paradigm of networking.  Gone are VLANs.  All we care about now is how applications connect.  It’s all object-oriented now, and it’s simple.

A Software Company versus a Hardware Company

This part is great.  Brad then puts 2 quotes from VMware employees about why they think NSX is going to win in the marketplace.  This one from the CEO of Nicira: “Who do you think is going to make better software, a software company or a hardware company?”

Is Apple a hardware company or a software company?  Is Cisco a hardware company or a software company?  You see, only a Sith deals in absolutes.  Cisco is a solutions company.

This is what John Chambers, the Cisco CEO, keeps trying to tell everyone:  It’s the solution that matters.  It’s companies that see the whole vision of the architecture and can make all those pieces work together.  That is who wins.

I don’t think Cisco has that down perfect yet.  I don’t think VMware does either.  But we are working towards it.

The Network Effect

Both Cisco and VMware keep touting how many people are using their SDN technology.  There is a sense of urgency with both companies to make everyone believe that everyone else is jumping on board.  It reminds me of when I was hosting my 20 year high school reunion this past summer.  People would ask me:  “How many people are going?”  And I’d say something like: “Oh, man we have at least 50 tickets sold and tons more who said they’ll come”.  In reality, many of those tickets were given to people on the committee and I had about 2 other people that said they would go.  You see, the network effect is huge and both companies know it.  So they have to make it sound like everyone is doing it.  Then, you are in your IT shop and you’re saying:  How come I’m not doing this?  No one likes to feel like they are missing out.

And for the record:  The 20 year reunion was amazing.  We had well over 150 people there.

Security

Zero Trust micro-segmentation is a cool thing.  If you have 10 web servers in the same group, you’d like to keep those secure.  How do we do this with ACI?  We put all the servers in what we call an End Point Group (EPG), which controls which ports, IP addresses, or other EPGs can talk with it.  This is similar to how we create Security Groups in AWS and assign them to instances.  Some other cloud providers like Digital Ocean and SoftLayer don’t have these features, so on Linux instances we use things like iptables or ufw to secure our instances.
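For example, locking a Linux web server down with ufw takes only a few commands (the ports here are just the obvious SSH and web ones):

sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw enable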

Since we want to secure and automate the entire environment, I’ve been playing with things like Docker and Ansible to create these secure instances and lock them down.  Open source tools to solve problems.  So while it’s a nice feature, it’s not going to apply in every case.  And how long before ACI has it?  Probably before most people adopt ACI or NSX to begin with.

VMware and OpenStack

One last comparison:  VMware is to OpenStack as Microsoft is to Linux.  I’ll just leave it at that.

The Promised Land

The promised land is open.  It’s a place where I can take my applications from my own data centers and migrate them to any cloud provider I want.   This is the vision of Cisco’s Intercloud: use the best of the public cloud and marry it with the private cloud.  It’s fast, it’s agile, and it’s programmable.

I’ll end with this: keep in mind that both of these technologies are still pretty fresh.  If I look at my customer set, I have quite a few customers running Nexus 9000s but few ACI customers.  I also have lots of customers that are looking at NSX and ACI, but none of them have deployed either in test, let alone production, environments.  Now, my market here in the Pacific Northwest is a micro slice of the picture, and I’m sure Brad sees a lot more from his vantage point.  But if you haven’t jumped on any bandwagon yet (like I’d say 95% or more of IT have not), let me just say this:

You can buy Cisco Nexus 9000s.  They make a great 40Gb switch and have great features, including programmability, RESTful APIs, and Python extensions.  It outperforms its competition on Power, Performance, Programmability, and Price.  You can try running NSX over them, and you can try running them in ACI mode.  The choice is yours, but you lose nothing and gain so much in moving to the Nexus 9k environment.  It’s not just an adjustable TV stand.  It’s the whole solution: the remote, the TV, the stand, and the room you watch it in.  It’s the whole experience.

You see, the winner isn’t whoever comes up with the best software; it’s whoever can produce the best experience.