Cisco, Cluster on Die, and the E5-2600v3

I was in a meeting where someone noted that VMware does not support the Cluster on Die feature of the new Intel Haswell E5-2600v3 processors.  The question came: “What performance degradation do we get by not having this supported?  And would it be better to instead use the M3 servers?”

They got this information from Cisco’s whitepaper on recommended BIOS settings for the new B200 M4, C240 M4, and C220 M4 servers.

The snarky answer is: You’re asking the wrong question.  The nice answer is: “M4 will give better performance than M3.”

Let’s understand a few things.

1.  Cluster on Die (CoD) is a new feature.  It wasn’t in the previous v2 processors.  So, all things being equal, running the v3 without it just means that you don’t get whatever increased ‘goodness’ it provides.

2.  Understand what CoD does.  This article does a good job of explaining it.  Look at it this way: as each socket gets more cores with every iteration of Intel’s chips, you have a lot of cores looking into the same memory, and latency goes up as they contend to find the correct data.  CoD carves the socket into regions so that data stays closer to the cores using it, almost like giving each group of cores its own bank of cache and memory.  That keeps latency down as more cores are added.

To sum it up: if I have a C240 M3 with 12 cores per processor and I compare it to a C240 M4 with 12 cores per processor with CoD disabled, then I still have the same memory contention with the M4 cores that I had with the M3.  When VMware eventually supports CoD, you simply get a speed enhancement on top.

 

Unauthorized UCS Invicta usage

I had a UCS Invicta that I needed to change from a scaling appliance (where more than one Invicta node is managed by several others) to a stand-alone appliance.  I thought this could be done in the field, but at this point I don’t think it’s an option.  The info below is just some things I tried in a lab, and I’m pretty sure they are not supported by Cisco.

While the UCS Invicta appliance has had its share of negative press and setbacks, I still have pretty high hopes for this flash array, primarily because the operating system is so well designed for solid state media.  It’s different, full featured, and I think it offers a lot of value for anyone in need of fast storage. (everyone?)

First step was to take out the InfiniBand card and put in the fibre channel card.  I put it in the exact spot the IB card was in.

putting the FC card in the Invicta

CIMC

I had to change the IP address.  Booted up, pressed F8, changed the IP address to 10.1.1.36/24, and didn’t touch the password.

The default CIMC password to login was admin / d3m0n0w! (source)

Console

It takes forever to boot up because there is no InfiniBand card in the appliance anymore.  I took it out and replaced it with a fibre channel HBA that came in one of the SSRs that managed the nodes.

The default console user/password is  console / cisco1 (source)

This didn’t work for me; I figured whoever had the system before me had changed it.  Fortunately for me, this is Linux.  I rebooted the system and added ‘single’ to the kernel line in the grub menu.  Yes, I just compromised the system’s security.  Such is the power of the Linux user.


Once on the command line I ran the passwd command to change the root password to something I could log in with.  I took a look at the /etc/passwd table and noticed that there was no console user.  Hmm, guess that’s why I couldn’t log in.
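The whole recovery boils down to a few steps at the console (sketched roughly; the exact grub menu varies by release):

# at the grub menu: highlight the boot entry, press 'e', append 'single'
# to the kernel line, and boot into the single-user shell, then:
passwd        # set a root password you know
reboot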

I then rebooted.

Reformatting

There were three commands that looked pretty promising.  After digging around I found:

/usr/local/tools/invicta/factoryreset/

/usr/local/tools/invicta/menu  and the accompanying   menu.sh  command.

and

/usr/local/tools/invicta/support/secureWipe.sh 

I tried all these and then rebooted the system.  Nothing seemed to indicate any progress as to turning this node into a stand alone appliance.  I have a few emails out and if I figure it out, I’ll post an update. 

SSH tunnel for Adium Accounts

Sometimes you’ll be at a place where your IRC client can’t connect to something like irc.freenode.net.  This is where ‘behind corporate firewalls’ turns into ‘behind enemy lines’.  IRC is invaluable because there are people who can help you with your problems… but they are on the outside.

Here is how I solve that on a Mac running Adium as my IRC client.

First, you need a place on the internet you can ssh into.  If you have an AWS machine, that can work.  You run the following command:

ssh -D 127.0.0.1:3128 foo@some-server.com

This opens up a local SOCKS proxy on port 3128 that tunnels traffic through the remote server.  Next, in Adium, when you look at the accounts, you can input the following:

Adium proxy settings for SSH
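The screenshot boils down to pointing the account’s proxy settings at the local end of that tunnel, roughly:

Proxy type: SOCKS5
Server:     127.0.0.1
Port:       3128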

Doing this allows me to see things on the #docker, #xcat IRC channels.

Ansible: From OSX to AWS

My goal in this post is to go from 0 to Ansible installed on my Mac and then be able to provision AWS instances ready to run Docker containers.  The code for this post is public on my github account.

OSX Setup

I am running OS X Yosemite.  I use brew to make things easy.  Install homebrew first.  This makes it easy to install Ansible:

brew install ansible

I have one machine up at AWS right now.  So let’s test talking to it.  First, we create the hosts file:
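On a brew install, that’s the inventory file Ansible reads by default, so it’s just a matter of creating it:

mkdir -p /usr/local/etc/ansible
vi /usr/local/etc/ansible/hosts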

Now we put in our host:

instance1

I can do it this way because I have a file ~/.ssh/config  that looks like the following:
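The ssh config maps that short name to the real host.  The hostname and key file below are placeholders for your own:

Host instance1
    # the public DNS name or IP of the AWS instance
    HostName ec2-xx-xx-xx-xx.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/my-aws-keypair.pem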

Now we can test:
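A quick sanity check with the ping module will do:

ansible all -m ping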

Where to go next?  I downloaded the free PDF by @lhochstein that has 3 chapters going over the nuts and bolts of Ansible, so I was ready to bake an image.  But first let’s look at how Ansible is installed on OS X:

The default hosts file is, as we already saw, in /usr/local/etc/ansible/hosts .  We also have a config file we can create in ~/.ansible.cfg .  More on that later.

The other thing we have is the default modules that shipped with Ansible.  These are located in /usr/local/Cellar/ansible/1.7.2/share/ansible/  (if you’re running my same version)

If you look in this directory and its subdirectories you’ll see all the modules that Ansible comes with.  I think all of these modules have documentation in the code, but the easiest way to read the documentation is to run

ansible-doc <module-name>

Since we need to provision instances then we can look at the ec2 module:
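That would be:

ansible-doc ec2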

This gives us a lot of information on the options you can use to deploy ec2 instances.

An nginx Playbook

Let’s take a step back and do something simple like deploy nginx on our instance using an Ansible Playbook.

I create an ansible file called ~/Code/ansible/nginx.yml .  The contents are the following:
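The playbook looks something along these lines; this is a sketch rather than the exact file, with the host name matching the inventory above:

---
- hosts: instance1
  sudo: yes
  tasks:
    - name: install nginx
      apt: name=nginx state=present update_cache=yes

    - name: push our nginx.conf
      copy: src=files/nginx.conf dest=/etc/nginx/nginx.conf
      notify: restart nginx

    - name: push our index page
      copy: src=files/index.html dest=/usr/share/nginx/html/index.html

    - name: make sure nginx is running
      service: name=nginx state=started

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted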

I then created the file  ~/Code/ansible/files/nginx.conf

Finally, I created the ~/Code/ansible/files/index.html
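Those two files can be as small as you like.  A bare-bones version, again just a sketch:

# files/nginx.conf
events {}
http {
    server {
        listen 80;
        root /usr/share/nginx/html;
        index index.html;
    }
}

# files/index.html
<html><body><h1>Deployed by Ansible</h1></body></html>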

With this I run the command:

ansible-playbook nginx.yml

If you are lucky, you have cowsay installed.  If so, then you get the cow telling you what’s happening.  If not, then you can install it:

brew install cowsay

Now, navigate to the IP address of the instance, and magic!  You have a web server configured by Ansible.  You can already see how useful this is!  Now, configuring a web server on an AWS instance is not today’s hotness.  The real hotness is creating a docker container that runs a web server.  So we can just tell Ansible to install docker.  From there, we would just install our docker containers and run them.

A Docker Playbook

In one of my previous blog entries, I showed the steps I took to get docker running on an Ubuntu image.  Let’s take those steps and put them in an Ansible playbook:
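A minimal version of those steps looks roughly like this (package and service names vary a bit by release):

---
- hosts: instance1
  sudo: yes
  tasks:
    - name: install docker from the Ubuntu archive
      apt: name=docker.io state=present update_cache=yes

    - name: make sure the docker daemon is running
      # the service may be called docker.io on 14.04 depending on the package
      service: name=docker state=started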

Here we use some of the built-in modules from Ansible that deal with package management.  You can see the descriptions and what’s available by reading Ansible’s documentation.

We run this on our host:

ansible-playbook -vvv docker.yml 

And now we can ssh into the host and launch a container:

sudo docker run --rm -i -t ubuntu /bin/bash

This to me is the ultimate way to automate our infrastructure:  We use Ansible to create our instances.  We use Ansible to set up the environment for docker, then we use Ansible to deploy our containers.

All the work for our specific application settings is done with the Dockerfile.

Provisioning AWS Machines

Up until now, all of our host information has been done with one host: instance1 that we configured in our hosts file.  Ansible is much more powerful than that.   We’re going to modify our ~/.ansible.cfg  file to point to a different place for hosts:
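Something along these lines; the key file name and remote user are placeholders for your own:

[defaults]
# point the inventory at the directory we're about to create
hostfile = ~/Code/Ansible/inventory
# the AWS keypair used to log into the instances we create
private_key_file = ~/.ssh/my-aws-keypair.pem
remote_user = ubuntu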

This uses my AWS keypair for logging into the remote servers I’m going to create.  I now need to create the inventory directory:

mkdir ~/Code/Ansible/inventory

Inside this directory I’m going to put a script: ec2.py.  This script comes with Ansible but the one that came with my distribution didn’t work.


The ec2.py file also expects an accompanying ec2.ini file:
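A trimmed-down ec2.ini is enough to get going; the region below is a placeholder:

[ec2]
# set this to wherever your instances live
regions = us-west-2
regions_exclude =
destination_variable = public_dns_name
vpc_destination_variable = ip_address
route53 = False
cache_path = ~/.ansible/tmp
cache_max_age = 300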

You can modify this to suit your environment.  I’m also assuming you have boto installed already and a ~/.boto file.  If not, see how I created mine here.

Let’s see if we can now talk to our hosts:

ansible all -a date

Hopefully you got something back that looked like a date and not an error.  The nodes returned from this list will all be in the ec2 group.  I think there is a way to use tags to make them more distinct, but I haven’t had a chance to do that yet.

We now need to lay our directory structure out for something a little bigger.  The best practices for this are listed here.  My project is a little simpler as I only have my ec2 hosts and I’m just playing with them.  This stuff can get serious.  You can explore how I lay out my directories and files by viewing my github repository.

The most interesting file of the new layout is my ~/Code/Ansible/roles/ec2/tasks/main.yml file.  This file looks like the below:
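In outline it is something like this; the {{ }} names here are placeholders for whatever is in the vars file:

---
# roles/ec2/tasks/main.yml
- name: provision an instance
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    wait: yes
  register: ec2_info

- name: wait for SSH to come up
  wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
  with_items: ec2_info.instances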

I use a variable file that defines the {{ something }} style variables you see in the tasks.  Again, check out my github repo.  This file provisions a machine (similar to the configuration from the python post I did) and then waits for SSH to come up.

In my root directory I have a file called site.yml that tells the instances to come up and then go configure the instances.  Can you see how magic this is?
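The shape of site.yml is roughly this; the ec2 role matches the path above, while the docker role name is my own guess:

---
# bring up the instance(s) from the laptop
- hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - ec2

# then configure the new instance(s) for docker
# (the ec2 dynamic inventory has to see the new host before this play runs)
- hosts: ec2
  sudo: yes
  roles:
    - docker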

We run:

ansible-playbook site.yml

This makes Ansible go and deploy one ec2 host.  It waits for it to become available, and then it ssh’s into the instance and sets up docker.  Our next step would be to create a few docker playbooks to run our applications.  Then we can completely create our production environment.

One step closer to automating all the things!

If you found errors, corrections, or just enjoyed the article, I’d love to hear from you: @vallard.

 

Remove an EC2 host with Ansible

I spent forever trying to understand the built-in Ansible ec2 modules.  What I was trying to figure out seemed simple: how do you delete all your ec2 instances?  Nothing too clever.  Turns out it was easier to create the instances than to delete them (for me anyway).  So I’m writing this down here so I remember.

I’ve also created an Ansible playbook github repository where I’ll be putting all the stuff I use.   I also plan on doing a post where I show how my environment was set up.

For now, here is the playbook:
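A sketch of the idea; the region is a placeholder, and the exact fact name can differ between versions (see below):

---
- hosts: ec2
  tasks:
    - name: gather EC2 facts from the instance metadata
      ec2_facts:

    - name: show everything we know about this host
      debug: var=hostvars[inventory_hostname]

    - name: terminate the instance
      local_action:
        module: ec2
        # region is a placeholder
        region: us-west-2
        instance_ids:
          - "{{ hostvars[inventory_hostname]['ec2_id'] }}"
        state: absent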

The tricky part for me was trying to get the data from the ec2_facts module into the rest of the playbook.  It turns out that ec2_facts loads the data into the hostvars[inventory_hostname] variable.  I was looking for instance_id as the variable, but it didn’t populate this.  This may be because I have an older version of Ansible (1.7.2) that comes installed with homebrew on the mac.

If you find this ec2_id variable doesn’t work for getting the instance ID of the AWS instance, then take a look at the debug statements to see what is populated in hostvars[inventory_hostname].

I should also point out that the host group used above, ‘ec2’, comes from the ec2.py executable that ships with Ansible and that I put in my inventory directory.

More on that in a coming post.

Why Docker Changes everything

There are many shifts we talk about in IT.  Here are 2 recent examples that come to mind that most people are familiar with:

  • The shift from physical to virtual
  • The shift from on-prem infrastructure to cloud based consumption IT.

Ultimately it is an organization with a clear vision that disrupts the status quo and creates a more efficient way of doing things.  Those 2 examples were the result of the brilliant people who started VMware and Amazon Web Services (AWS).

VMware was incredible and made it so easy.  How many vMotion demos did you see?  They nailed it and their execution was outstanding.  And so vCenter with ESX still stands as their greatest achievement.  But unfortunately, nothing they’ve done since has really mattered.  Certainly they are pounding the drum on NSX, but the implications and the payoffs are nothing like what an organization could get when it went from p2v (physical to virtual).  VCAC or vRealize, or whatever they decided to rename it this quarter, is not catching on.  And vCheese, or vCHS, or vCloud Air (cause ‘Air’ worked for Apple) isn’t all that either.  I don’t have any customers using it.  I will admit, I’m pretty excited about VSAN, but until they get deduplication, it’s just not that hot.  And solutions from providers like SimpliVity have more capabilities.  But there can be no doubt that vSphere/ESX is their greatest success.

AWS disrupted everybody.  They basically killed the poor hardware salesman that worked in the SMB space.   AWS remains far in the leadership quadrant.

My customers, even the ones I could argue that are not on the forefront of technology, are migrating to AWS and Azure.  Lately, I’ve been reading more and more about Digital Ocean.  I remember at IBM back in 2003 hearing about Utility Computing.  We even opened an OnDemand Center, where customers could rent compute cycles.  If we had the clear vision and leadership that AWS had, we could have made it huge at that time.  (Did I just imply that IBM leadership missed a huge opportunity?  I sure did.)

Well today we have another company that has emerged on the forefront of technology that is showing all others the way and will soon cause massive disruption.  That company is Docker.  Docker will soon be found on every public and private cloud.  And the implications are huge.

What are Containers?

I am so excited about Docker I can barely contain myself.  Get it?  Contain.. Docker is a container?  No?  Ok, let me explain Docker.  I was working for IBM around 2005 and we were visiting UCS talking about xCAT and supercomputers and the admins there started telling us about Linux containers, or LXC.  It was based on the idea of a Union File System.  Basically, you could overlay files on top of each other in layers.  So let’s say your base operating system had a file called /tmp/iamafile with the contents “Hi, I’m a file”.  You could create a container (which I have heard explained as a chroot environment on steroids, cause it’s basically mapped over the same root).  In this container, you could open the file /tmp/iamafile and change the contents to “I am also a file modified”.  That change is a copy-on-write: only the container sees the change, and it also saves the change.  But the underlying file on the root operating system sees no change.  It’s still the same file that says “Hi, I’m a file”.  Only the instance in the container has changed.

It’s not just files that are contained in the container.  It’s also processes.  So you can run a process in the container, in its own environment.
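You can see the same copy-on-write behavior with docker in about a minute; the file path is just the toy example from above:

sudo docker run -i -t ubuntu /bin/bash
# inside the container:
echo "I am also a file modified" > /tmp/iamafile
exit
# the change only lives in that container's writable layer;
# the host's /tmp and the base ubuntu image are untouched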

That technology, while super cool, seemed to be relegated to the cute-things-really-geeky-people-do category.  I dismissed it and never thought about it again until I saw Docker was using it.

Why Docker made Containers Cool

Here are the things Docker gave that made it so cool, and why it will disrupt many businesses:

1.  They created a build process to create containers.

How do most people manage VMs?  Well, they make this golden image of a server.  They put all the right things in it.  Some more advanced people will script it, but more often than not, I just see some blessed black box.

Our blessed black box

We don’t know how this image came to be, or what’s in it.  But it’s blessed.  It has all our security patches, configuration files, etc.  Hopefully that guy who knows how to build it all doesn’t quit.

This is not reproducible.  The cool kids then say: “We’ll use Puppet because it can script everything for us.”  Then they talk about how they know this cool puppet file language that no one else knows and feel like puppet masters.  So they build configuration management around the image.

With Docker this is not necessary.  I just create a Dockerfile that has all the things in it that a puppet script or chef recipe had, and I’m done.  But not only that, I can see how the image was created.  I have a blueprint whose syntax is super easy; there are only about a dozen instructions.  Everything else is just how you would build the code.

We also don’t try to abstract things like Puppet and Chef do.  We just say we want the container to install nginx and it does by doing:

RUN apt-get -y install nginx

That’s it.  Super easy to tell what’s going on.  Then you just build your image.
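A complete, if minimal, Dockerfile isn’t much longer than that one line.  Purely as a sketch, not any particular image:

# Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

# then building the image is just:
# docker build -t my-nginx .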

2. Containers are Modular

The build process with Docker is incredible.  When I first looked at it, I thought: great, my Dockerfile is going to look like a giant monolithic kickstart %post script like the ones I used to write when installing Red Hat servers.

But that’s not how it works.  Containers build off of each other.  For the application I’m working on now, I have an NGINX container, a ruby container, an app server container, and my application container sitting on that.  It’s almost like having modules.  The modules just sit on top of each other.  Turtles all the way down.

This way I can mix and match different containers.  I might want to reuse my NGINX container with my python app.

3.  Containers are in the Cloud: Docker Hub

What github did for code, docker has done for system management.  I can browse how other people build nginx servers.  I can see what they do when I look at Dockerhub.  I can even use their images.  But while that’s cool and all, the most important aspect?  I can download those containers and run them anywhere.

You have an AWS account?  You can just grab a docker image and run it there.  Want to run it on Digital Ocean?  OpenStack? VMware?  Just download the docker image.  It’s almost like we put your VM templates in the cloud and can pull them anywhere.

What this gives us is app portability.  Amazing app portability.  All I need is a generic Ubuntu 14.04 server and I can get my application running on it faster than any way I’ve managed before.

How many different kinds of instances would I need in AWS?  How many templates in my vCenter cluster?  Really, just one: one that is ready for docker to run.

Who Gets Disrupted?

So now that we have extreme portability we start to see how this new model can disrupt all kinds of “value added” technologies that other companies have tried to fill the void on.

Configuration tools – Puppet, Chef, I don’t need to learn you anymore.  I don’t need your agents asking to update and checking in anymore.  Sure, there are some corner cases, but for the most part, I’ll probably just stick with a push-method tool like Ansible that can run the few commands needed to get my server ready for Docker.  The rest of the configuration is done in the Dockerfile.

Deployment tools  – I used Capistrano for deploying apps on a server.  I don’t need to deploy you any more.  Docker images do that for me.

VMware – Look at this blog post from VMware and then come back and tell me why Docker needs VMware?  Chris Wolf tries to explain how the vCloud Suite can extend the values of containers.  None of those seem to be much value to me.  Just as VMware has tried to commoditize the infrastructure, Docker is now commoditizing the virtual infrastructure.

AWS – Basically any cloud provider is going to get commoditized.  The only way AWS or Google App Engine or Azure can get you to stick around is by offering low prices OR getting you hooked on their APIs and services.  So if you are using DynamoDB, you can’t get that anywhere but AWS, so you are hooked.  But if I write my container with that capability, I can move it to any cloud I want.  This means it’s up to the cloud providers to innovate.  It will be interesting to hear what more AWS says about containers at Re:Invent next week.

Who Can Win?

Docker is already going to win.  But more than Docker, I think Cisco has potential here.  Cisco wants to create the Intercloud.  What better way to transport workloads through the Intercloud than with docker containers?  Here is where Cisco needs to execute:

1.  They have to figure out networking with Docker.   Check out what Cisco has done with the 1000v on KVM.  It works now, today with containers.  But there’s more that needs to be done.  A more central user friendly way perhaps?  A GUI?

2.  They have to integrate it into Intercloud Director and somehow get the right hooks to make it even easier.  Intercloud Director is off to a pretty good start.  It actually can transport VMs from your private data center to AWS, Azure, and Dimension Data clouds, but it has a ways to go.  If we had better visibility into the network utilization of our containers and VMs both on-prem and off-prem, we would really have a great solution.

What about Windows?

Fine, you say.  All good, but our applications run on Microsoft Servers.  So this won’t work for us.  Well, you probably missed this announcement.  So yeah, coming soon, to a server near you.

Conclusion

It’s 2014.  Docker is about a year old.  I think we’re going to hear lots more about it.  Want another reason why it’s great?  It’s open source!  So start playing with Docker now.  You’ll be glad you got ahead of the curve.

 

On reading about choosing between NSX and ACI

I consider myself  very fortunate to work in the IT industry.  Not only do I get to develop and deploy technologies that enhance the world we live in, but I also get more drama from the different companies than a soap opera.  Take for example the story of how Jayshree left Cisco to help build Arista.  There’s also the story of how VMware bought Nicira and caused disruption with the EMC Cisco partnership. None of these stories do I know the full extent of.  I’m just a spectator and focus day to day on my own activities and try to do things that matter to organizations.

But like a spectator watching the Golden Bears win or lose on any given week in college football, I’m entitled to my opinions as well.  In fact, everybody is.  I tell this to my kids all the time.  This quote from Steve Jobs nails it:

“Life can be much broader once you discover one simple fact: Everything around you that you call life was made up by people that were no smarter than you and you can change it, you can influence it, you can build your own things that other people can use.”

NSX and ACI were made by very smart people.  But people that have opinions about it and have blogs like the one you’re reading now aren’t necessarily any smarter than you.  We try to influence opinions, and some have been more successful than others.  Brad has an excellent blog and I’ve learned a lot from it.  But like a U2 album, not every one of their songs is a hit.

My latest opinion on his article about On Choosing VMware NSX or Cisco ACI is that someone is wrong on the Internet.

Duty Calls from xkcd

In a big part of the article, Brad compares a physical network switch to a TV stand and the television to what NSX does.  He then compares ACI to an adjustable TV stand, complete with remote.   He then says:

“You’ll also need to convince people that it makes more sense to buy televisions from an electronics company; and television stands should be bought from a television stand company.”

Umm.  Not quite.  This overlooks all the value ACI brings.

Let’s liken NSX to a network overlay, which is what it is.  Let’s liken the Nexus 9000 in ACI mode to a network switch that has overlay technology built in, which is what it is.  It’s real simple:  With NSX you manage 2 networks.  With ACI you manage one integrated network.

And you manage both with software.   With ACI you put each server into an endpoint group.  They are either physical or virtual.  You can still use the same VMware DVS with ACI.  It then encapsulates that VLAN or VXLAN into an endpoint group and allows those groups to talk to each other in the fabric.

Here’s another analogy.  NSX is like a cute Christmas sweater on a nice day.  Sure, you’ll get a lot of people to look at it.  You’ll get some laughs and some comments that will make you feel good.  But what’s important is the programmability of the system.  And on warm days, you really don’t need or use that cute outer sweater.

the Joy of using NSX

I will concede the NSX GUI looks great!  VMware has always done a great job of making things look good and there’s a reason that VMware is the number one hypervisor in the industry.  But companies evolve.  VMware evolves into networking.  Cisco evolves into software.  So does your organization.  Your organization needs solid APIs if you want to program everything.  So if we’re doing it this way, we don’t need a sexy GUI to automate all of this.  I need those solid APIs.  Since Cisco introduced UCS, it has been serious about APIs.  In fact, what other x86 platform has a more solid API than UCS?  As Cisco continues to invest in software to drive its products, ACI has become that next big thing.  But it’s a whole new paradigm of networking.  Gone are VLANs.  All we care about now is how applications connect.  It’s all object oriented now and it’s simple.

A Software Company versus a Hardware Company

This part is great.  Brad then puts 2 quotes from VMware employees about why they think NSX is going to win in the marketplace.  This one from the CEO of Nicira: “Who do you think is going to make better software, a software company or a hardware company?”

Is Apple a hardware company or a software company?  Is Cisco a hardware company or a software company?  You see, only a Sith deals in absolutes.  Cisco is a solutions company.

This is what John Chambers, the Cisco CEO, keeps trying to tell everyone:  It’s the solution that matters.  It’s companies that see the whole vision of the architecture and can make all those pieces work together.  That is who wins.

I don’t think Cisco has that down perfect yet.  I don’t think VMware does either.  But we are working towards it.

The Network Effect

Both Cisco and VMware keep touting how many people are using their SDN technology.  There is a sense of urgency with both companies to make everyone believe that everyone else is jumping on board.  It reminds me of when I was hosting my 20 year high school reunion this past summer.  People would ask me:  “How many people are going?”  And I’d say something like: “Oh, man we have at least 50 tickets sold and tons more who said they’ll come”.  In reality, many of those tickets were given to people on the committee and I had about 2 other people that said they would go.  You see, the network effect is huge and both companies know it.  So they have to make it sound like everyone is doing it.  Then, you are in your IT shop and you’re saying:  How come I’m not doing this?  No one likes to feel like they are missing out.

And for the record:  The 20 year reunion was amazing.  We had well over 150 people there.

Security

Zero Trust micro-segmentation seems like a cool thing.  If you have 10 web servers in the same group then you’d like to keep those secure.  How do we do this with ACI?  We put all the servers in what we call an End Point Group (EPG), which allows ports or IP addresses or other EPGs to talk with it.  This is similar to how with AWS we create Security Groups and can assign them to instances.  Some other cloud providers like Digital Ocean and Softlayer don’t have these features, so on Linux instances we use things like iptables or ufw to secure them.
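As an example, locking a web server down by hand with ufw is only a few commands:

sudo ufw default deny incoming
sudo ufw allow 22/tcp      # ssh
sudo ufw allow 80/tcp      # the web tier
sudo ufw enable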

Since we want to secure and automate the entire environment, I’ve been playing with things like Docker and Ansible to create these secure instances and lock them down.  Open source tools to solve problems.  So while it’s a nice feature, it’s not going to apply in every case.  And how long before ACI has it?  Probably before most people adopt ACI or NSX to begin with.

VMware and OpenStack

One last comparison:  VMware is to OpenStack as Microsoft is to Linux.  I’ll just leave it at that.

The Promised Land

The promised land is open.  It’s a place where I can take my applications from my own data centers and migrate them to any cloud provider I want.  This is the vision of Cisco’s Intercloud.  Use the best of public cloud and marry it with the private cloud.  It’s fast and it’s agile and it’s programmable.

I’ll end with this: keep in mind that both of these technologies are still pretty fresh.  If I look at my customer set, I have quite a few Nexus 9000s but few ACI customers.  I also have lots of customers that are looking at NSX and ACI, but none of them have deployed either in test, let alone production, environments.  Now, my market here in the Pacific Northwest is a micro slice of the picture, and I’m sure Brad sees a lot more from his vantage point.  But if you haven’t jumped on any bandwagon yet (like I’d say 95% or more of IT have not), let me just say this:

You can buy Cisco Nexus 9000s.  They make a great 40Gb switch and have great features including programmability, RESTful APIs, and Python extensions.  It outperforms its competition on Power, Performance, Programmability, and Price.  You can try running NSX over them and you can try running them in ACI mode.  The choice is yours, but you lose nothing and gain so much in moving to the Nexus 9k environment.  It’s not just an adjustable TV stand.  It’s the whole solution: the remote, the TV, the stand, and the room you watch it in.  It’s the whole experience.

You see, the winner isn’t whoever comes up with the best software; it’s whoever can produce the best experience.

Docker

I’m finally jumping on the Docker bandwagon and it is pretty exciting.  Here’s how I did a quick trial to get it working.

Install OS

I could do this in my AWS account, or I can do it with my local private cloud.  So many options these days.  I installed the latest Ubuntu 14.04 Trusty server on my local cloud.  It probably would have been just as easy to spin it up on AWS.

I had to get my proxy set up correctly before I could get out to the Internet.  This was done by editing /etc/apt/apt.conf.  I just added the one line:
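That line is the standard apt proxy setting; the proxy host here is a placeholder for my real one:

Acquire::http::Proxy "http://proxy.example.com:80/";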

Configure for Docker

I followed the docker documentation on this.  Everything was pretty flawless.  I ran:
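I won’t repeat the doc word for word, but on Ubuntu 14.04 it boils down to something like this, with the final command pulling an image to prove it works:

sudo apt-get update
sudo apt-get install -y docker.io
sudo docker run -i -t ubuntu /bin/bash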

That last command had problems. The error I got said:

This is because I need to put a proxy in my docker configuration.  A quick Google search led me to the following:

I added my proxy host
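On Ubuntu that generally means exporting the proxy in the docker defaults file and restarting the daemon; the host below is a placeholder:

# /etc/default/docker  (or /etc/default/docker.io, depending on the package)
export http_proxy="http://proxy.example.com:80/"

# then restart the daemon:
# sudo service docker restart     (or: sudo service docker.io restart)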

Rerunning

And I see a ton of stuff get downloaded. Looks like it works!

Now I have docker on my VM.  But what can we do with it?  Containers are used for applications. Creating a python application would probably be a good start. I searched around and found a good one on Digital Ocean’s site.

I like how it showed how to detach: CTRL-P and CTRL-Q. To reattach:

Get the container ID, then reattach:
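Which is just:

sudo docker ps                    # note the container ID
sudo docker attach <container-id>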

Docker is one of many interesting projects I’ve been looking at lately. For my own projects, I’ll be using Docker more for easing deployment. If you saw from my article yesterday, I’ve been working on public clouds as well as internal clouds. Connecting those clouds together and letting applications migrate between them is where I’ll be spending a few more hours on this week and next.

IP Masquerading (NAT in Red Hat)

In my lab I have a server that is dual homed.  It is connected to the outside network on one interface (br0) and the internal network (br1) is connected to the rest of my VM cluster.

I want the VMs to be able to get outside.  So the way I did that (on Red Hat) was to create a few iptables rules.  I’ve been doing this for 10+ years now, but I keep forgetting the syntax.

So here it is:
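These are the standard masquerade rules, using the interface names from above (br0 outside, br1 inside):

iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
iptables -A FORWARD -i br1 -o br0 -j ACCEPT
iptables -A FORWARD -i br0 -o br1 -m state --state RELATED,ESTABLISHED -j ACCEPT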

Then, of course, you have to enable forwarding in /etc/sysctl.conf.
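That means making sure this line is present:

net.ipv4.ip_forward = 1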

Finally, run the command below for those changes to take effect.
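That command is simply:

sysctl -p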

AWS with python

In my previous post, I set up my Mac to do python development.  Now I’m ready to launch an EC2 image using boto.

Create Virtual Python Environment

I have already logged into AWS and created an account.  I gave it a credit card and now I get a free server for a year that I can do all sorts of fun things with.

First, I’m going to create a virtual environment for this:
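Roughly, with the environment name being whatever you like:

virtualenv boto-env
source boto-env/bin/activate
pip install boto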

Connect to AWS

So now we have the base setup.  I checked the documentation for AWS here: http://boto.readthedocs.org/en/latest/ec2_tut.html

It said I needed some security credentials, but I noticed that I didn’t have any.  I logged onto AWS and created a user for my account named tester.  From there, I saw the access key ID and secret access key.  After that, I had to give it administrator permissions.

I ran the command as shown in the document:
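From the tutorial, that’s roughly the following; the region is my guess and the keys come from the tester user above:

import boto.ec2

conn = boto.ec2.connect_to_region(
    "us-west-2",
    aws_access_key_id="<access key id>",
    aws_secret_access_key="<secret access key>")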

We are in!

Launch an instance

From the AWS management console, when we look at the image store, I found that Ubuntu 14.04 LTS was available in the free tier.  Next to the name, I found the AMI ID: ami-3d50120d.  So that’s the one I’m going to try to launch.  We also need to set the size.  From the console, I found that the t2.micro instance was also free.  So I gave that a whirl.  Let’s get the image going and then check how it’s doing.
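In boto that looks something like:

reservation = conn.run_instances("ami-3d50120d", instance_type="t2.micro")
instance = reservation.instances[0]
statuses = conn.get_all_instance_status()
print(instance.state)     # probably 'pending' right after launch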

I found that I could then see what the status was again by looking a little later:
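Something like:

statuses = conn.get_all_instance_status()      # refresh it
for s in statuses:
    print(s.id, s.state_name)                  # 'running' once the instance is up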

Notice, I had to refresh the statuses variable.  The next step is to figure out how we can log into the server now that it’s up.  We could go to the EC2 management portal and create a keypair.  Or perhaps we could try to do this programmatically?

Create a Boto Configuration File

http://boto.readthedocs.org/en/latest/boto_config_tut.html

To start off with, we’d like to put our environment variables in a config file so we don’t have to enter them in the script.  To do this we create the ~/.boto file:

This file will be referenced by the APIs so we don’t have to provide the information next time.  The format of the file is:
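The standard boto credentials format is:

[Credentials]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>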

That looks ok. Let’s log in again and terminate the instance:
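With the credentials file in place, reconnecting and terminating looks roughly like this (region is still a guess):

import boto.ec2

conn = boto.ec2.connect_to_region("us-west-2")    # no keys needed now
reservations = conn.get_all_instances()
instance = reservations[0].instances[0]
conn.terminate_instances(instance_ids=[instance.id])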

Create Key Pair for future instances
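A sketch with boto; the keypair name matches the ssh command further down, and saving it is the step that hit the python3 bug mentioned in the caveats:

keypair = conn.create_key_pair("tester-keypair")
keypair.save("~/Desktop")     # writes ~/Desktop/tester-keypair.pem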

 

Create the New Instance with our Keypair
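Again as a sketch, launching with the keypair and waiting for an address:

import time

reservation = conn.run_instances(
    "ami-3d50120d", instance_type="t2.micro", key_name="tester-keypair")
instance = reservation.instances[0]

# poll until the instance is running, then print its public IP
while instance.update() != "running":
    time.sleep(5)
print(instance.ip_address)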

That last command gives us the IP address of the server.  So let’s log into it now.

Login to new instance

If we try to ping, we realize that we can’t.  The problem is, we didn’t give it a security group.  The default security group has locked it down from the outside world.  What we need to do is enable some ports on it.  Primarily, we need to enable SSH, port 22.  We’ll do this from the console at this point and leave the API calls to you, gentle reader, to figure out.  Once that is done, we could do a simple

ssh  -i ~/Desktop/tester-keypair 54.186.218.36

But that’s kind of a pain.  What we could do instead is create a config file so we only have to run a simpler command.  To do this, we copy the keypair file into the ~/.ssh directory.

Then we create a file called ~/.ssh/config.  We then make it look like the following:
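Mine looks something like this; the Host alias is whatever you want to type, and the user matches the Ubuntu AMI:

Host tester
    HostName 54.186.218.36
    User ubuntu
    IdentityFile ~/.ssh/tester-keypair

# logging in is then just: ssh tester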

Now we can log right in!

Caveats

A few things I need to go back and check on:

  1. Creating the default Security group.  This shouldn’t be too hard.
  2. Making the pem file save.  In this article I think there is a bug using python3 and the keypair.save() command, as I wasn’t able to write out the file.  I kept getting the exception: TypeError: 'str' does not support the buffer interface.