UCS Invicta iSCSI to VMware ESX

I have a 12TB UCS Invicta appliance in my lab that I thought I'd try out.  The interface wasn't as intuitive as I would have preferred, but the nice thing about it is that it's simple and pretty easy to use once you get a feel for it.

Invicta Configuration

Create a LUN

Navigate to the LUN configuration and click Create LUN at the top.  I'm just going to do a 100GB LUN for fun.  I called mine lun3.

Initiator Group Configuration

I already have an initiator group I call esx.  This contains all my ESX servers that share the LUNs.  When I first saw this interface I didn't know what to do.  It turns out that you can right click on some of the links to get details.

Here I select Add Initiator and plug in my ESX initiator.  The problem is, I haven't defined one on my ESX server yet.  So let's do that and then come back.

ESX iSCSI configuration

Clicking on the host inside the Configuration tab, we first click on Storage Adapters.  Under Add in the top right we can add a new Software iSCSI initiator.

Now that we have an iSCSI adapter, we need to connect it to a physical interface.  Usually with iSCSI we have a separate vmkernel interface that we can use.

After you do this you typically create another one for iSCSI-B to give it network redundancy.  We'll omit this here as we're just showing the basic idea.

Now, go back to the Storage Adapters menu and we’ll attach this interface to our software initiator.

Click on the iSCSI initiator and select Properties in the detail screen at the bottom.  On the Network Configuration tab, select the vmkernel adapter we just created.

Screen Shot 2015-01-05 at 4.15.07 PM

In the dynamic discovery tab, we add the UCS Invicta:

When you close this window, it will rescan the devices, and you'll be disappointed to see that the Invicta LUN we created is not shown.  This is because we haven't added the iSCSI LUN to our initiator group yet.

As a quick sanity check, I usually ssh into the box at this point and make sure I can ping the UCS Invicta appliance.  If that doesn't work, you're not going to get much farther.
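For example, from the ESXi shell, using the vmkernel interface (the target address below is just a placeholder for your Invicta's iSCSI IP):

vmkping 10.1.1.100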

Finishing off the Connection

Now we go back to the Invicta appliance and add our LUN to the initiator group.  Right click on the initiator group you have (or create a new one) and then select 'Add Initiator'.

On the next screen you fill in the initiator name that you see on the vCenter screen under Storage Adapters.  Mine was iqn.1998-01.com.vmware:esx04-1e63ab9d.

After adding it, you should see it in the list of initiators.

Right clicking on the Initiator group again will allow us to add the LUN to this group.

On this screen we drag and drop the LUN from the bottom to the top.  This, to me, is why the interface isn't that intuitive: sometimes you right click, sometimes you drag and drop.  Once you drop it in place you can pick the LUN ID.

Going back to the vCenter console, we can now rescan the interfaces.

Right click the iSCSI software adapter and do a rescan.  Your LUNs should be up now!
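If you prefer the command line, the same rescan can be kicked off from the ESXi shell (a sketch; you can also target a single vmhba with --adapter):

esxcli storage core adapter rescan --all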

2014 year in review

2014 was huge for me.  I hope it was a great year for you too.  A few highlights:

  • In July, I achieved the CCIE Datacenter certification.  This came after 4 failed attempts on the written exam and one previous failure on the lab exam.  More information on that is here.
  • In November, I finally got a working OpenStack implementation, based on Ubuntu 14.04 with the Juno release, running in my lab.  I had previously tried to install OpenStack by hand several times and failed.  I'd been successful with RDO and packstack, but that doesn't really count because that's just scripting magic.  I presented at a Utah summit on OpenStack and I am convinced of its viability in the datacenter.  I hope to have a lot more to do with OpenStack in 2015.
  • I finally got around to figuring out AWS.  I had played a little with it before, but this year I totally immersed myself in it.  I scripted, designed, and even took a full week class on it.  I'm amazed by its simplicity and how far ahead it is of every near competitor.  Wow.  I also took a look at Digital Ocean and became pretty fluent in creating droplets, scripting, and automating all the things.
  • Docker was a huge wake-up call for me to get back into this business.  I saw the benefits of Docker immediately and was hooked.  I started deploying on my MacBook and have since worked on migrating my apps to build on Docker.
  • Application development was a huge goal of mine this year.  Several apps were updated, including UCS TechSpecs after a big redesign that improved application performance and disk space usage.  The one I'm most excited about is an app I've really been working on called Transparent Diet (for now).  It's like an Instagram-of-food app that helps people make good decisions about what they eat.  The things I've learned by developing this app have been incredible:  a fully functioning API, setting up a scalable backend on AWS with ELB, containers, database migration strategies, beta testers, business cases, etc.  Seriously, my favorite pastime of 2014.

Predictions for 2015 (please note, these are my own opinions)

  • I did nothing to increase my knowledge of VMware this year.  I actually tried to stay away from it.  It's not that I don't think it has a future; I think it's actually a great company and is still easier to use than anything else.  Here's the thing: long term, apps will be SaaS based.  That's the end game, maybe 50 years from now.  We're already seeing it in how people can just buy apps as a service (Netflix, Salesforce, etc.).  As those migrations are made, apps move toward distributed cattle models instead of the pets that VMware is so good at supporting.  As those apps migrate, there won't be as much use case for the features VMware ESXi provides.  So 2015 will see Hyper-V catch up to ESXi in terms of adoption.  But it's not all bad for VMware: NSX will probably get more traction, but so will ACI, and so will the basic SDN provided by Neutron in the OpenStack project.  Back to the bad news: VCAC, or vRealize, will be renamed into another service that people don't want.  vCloud Air will also fail to gain any traction.  More good: Horizon will gain more traction because 2015 is the year of the virtual desktop.
  • Dinosaur companies that sell hardware will finally wake up and understand how much AWS has disrupted their business.  They're talking about it, but in many meetings I'm in, people (not just at my company but others) have no clue what AWS can do for a startup.  They know it's cheap (in some cases), but they don't know what's compelling about it (think application services like RDS and DynamoDB).  AWS is pushing hard to get into the enterprise.  That's where they want to get the real money.  But it will be more difficult for them.
  • Backup as a service goes more mainstream, and more people start to use DRaaS.  Many already are, but this is the low-hanging fruit and a cheap and easy thing to offload.
  • Container wars get serious.  Docker and Rocket from CoreOS are the tip of the iceberg.  We'll see more orchestration tools (challengers to Kubernetes) and perhaps more packaging APIs.  Container networking solutions will become more mature, and there will be a battle in that space as well.
  • Bitcoin doubles in value.  Today it's sitting at $316.  In 2015 it will get back to $600.

My Goals for 2015

  • I’ll release my application Transparent Diet to the world in March.  It will be free and I hope to get that out to at least 500 people.
  • I will be blogging more about the Transparent Diet architecture as I blog more on cloud services and how to architect applications on AWS.  I also will show how to do parts on another platform.  This other platform will be something like Digital Ocean, OpenStack, or some other public cloud provider.
  • I’ll be working with my kids to develop game applications.  I’d like to teach them how to write real code.  They’ve done code.org and some others, but its time to get serious.  We’re going to build several games with the swift programming language.
  • I hope to contribute more to open source projects.  I helped this past week on an Xcode library I’ve been using.  I’ll be filling my github account with more good things.
  • I look forward to architecting more private cloud solutions.

What are your goals?  Predictions?  The nice thing about tech predictions is that no one remembers if you were wrong or if you made any predictions at all.  It's pretty safe to say I will most likely be wildly wrong.

Here’s to a great 2015!

Blocking IP addresses from your server

My friend Shadd gave me a list of IP addresses that I should try to block so that I could allow comments back on this blog.  Back in November, my site was down because I was getting spammed like crazy.  I'm not sure this is the best approach, because I don't want to alienate half the world from my site.  But it's worth a shot.  Also, with all this talk about North Korean hackers and stuff, we could all revisit our security settings to see how we're doing.

These commands work on CentOS.

iptables

First, copy the list into a text file called bad_ips.  Then run this script:
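A minimal sketch of that script (assuming we simply drop all traffic from each address on the INPUT chain):

for ip in $(grep -v "#" bad_ips | egrep -v "^$"); do
    iptables -A INPUT -s $ip -j DROP
done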

The first grep in that command gets rid of lines with comments while the egrep gets rid of blank lines.

Then you can do

service iptables save

Looking in the /etc/sysconfig/iptables file you’ll see all those IP addresses are now blocked.
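You can also spot-check the running chain directly:

iptables -nL INPUT | head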

This isn’t the end all solution.  There’s no reason a spammer couldn’t spin up an AWS instance on sovereign Oregon soil and hit me even closer.  But this should be a good start.

Rails API Testing or “You’ve been doing it wrong”

For the last several years I've been developing Ruby on Rails applications.  Nothing fancy, and most of them were just failed projects that didn't go anywhere.  During all this time I've trawled documentation on testing apps, saying "That would be nice to do one day", but I've finally had enough and I want to change my evil ways: I'm going to start writing test cases with my applications.

To start with, I'm testing my API.  I didn't even know where to start.  I watched a Railscasts episode where the legendary Ryan Bates talks about how he tests.  There were some great comments in there about all these additional plugins.  It seemed pretty overwhelming to me at first and there was quite a learning curve.  So I wanted to write some of this down in case there is someone out there like me who is going to start down the path of correcting the error of their ways before 2015 hits.

Basic Rails Testing

The first thing to check was the guide.  Rails has great documentation!

I ran rake test just to see if I would get any output.  After all, Rails generates these tests for you automatically, right?  Lo and behold, I got a failure!

Well, first I was missing the test environment, so I added one to config/database.yml.  Easy problem to solve!

Next, my welcome controller gave me errors:

Ok, I already like this.  Looks like I need to initialize my test database?

Turns out that Devise is the culprit.  A quick trip to StackOverflow solves the problem.  I fixed it like they said.  Then I ran into another issue with Devise, which I fixed with an addition to my test/test_helper.rb file.  It's so nice to go over roads that other people have traveled!

Running my test:

And I finally got tests to work!  Yay!  It looks like I’m doing it right.

From here, I looked to test other API calls, but all the documentation said that I should probably start looking at rspec.  Apparently, that's how the cool kids are doing it.  (Or were doing it at some point when they wrote about how to do it.)  So after running rake test, that was the last testing I did with the built-in Rails testing framework.

RSpec

This is the latest hotness in testing that I could find in my research.  Pretty much everybody seems to be using it.  I edited my Gemfile and added rspec-rails.  I also finally started grouping gems so I wouldn't install these unnecessary ones on my production servers.  Spoiler alert: my completed Gemfile looks like the below:

As you can see, I added a few more gems after rspec-rails, but I’ll get into those in a second.

After doing bundle install I ran:
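That would be the rspec-rails install generator, which sets up the spec directory and helper files:

rails generate rspec:install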

Now to test we run

bundle exec rspec

Ok, no tests to do yet!  Now to get to work!

Factory Girl & FFaker and other setup

The next step was to add Factory Girl.  Once again, Ryan Bates explains why Factory Girl is preferred over fixtures.  I went back and added that to my Gemfile along with ffaker because I saw some cool things in that gem.  (The one thing not cool about ffaker was the documentation, but the code was easy enough to read.)

Next, I modified config/application.rb as specified in this blog entry.

I also had to add these modules into the rest of the environment.  I changed the spec/rails_helper.rb to have the below.  Everything else stayed the same:

Then I added the directory:

mkdir spec/support

I added the file  spec/support/devise.rb

as well as the file  spec/support/factory_girl.rb

Those files pull in all the extra libraries used by my tests.

Lastly, I setup the test database

rake db:test:prepare

A Basic Test

Now to set up some tests.  I thought it best to start off simple with a static page:

rails g rspec:controller welcome

This is the root of the homepage.  Following the documentation, I added some simple tests for the welcome page:

I ran bundle exec rspec and it worked.  (Though not at first, as I had to figure out how to configure everything like I set up above.)

Testing User Model

rails generate rspec:model user

I ran this since I already have a user model.  The list of spec modules to add is listed here.

Since we have a model we are testing, we need to generate the factory for it.  Here's how I made it work with my Devise implementation:

spec/factories/user.rb

The part that stumped me for a while was not putting { } around Faker::Internet.email.  Since my test checks for unique emails, it kept failing: without the braces the email is generated once when the factory is defined, but writing it as email { Faker::Internet.email } makes FactoryGirl evaluate the block on each build, so every user gets a fresh, unique address.

There’s a lot of documentation on cleaning up the database by using the database_cleaner gem.  I’m not using it right now.  ffaker generates all kinds of new things for me, so I don’t worry about it.  I suppose that the database would need to be initialized though from time to time.

This could be accomplished with:
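rake db:test:prepare

(the same task we used above to set up the test database)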

Next I added the user model test

spec/models/user_spec.rb

This uses the shoulda-matchers gem quite heavily and seems to be a good start to testing my user model.  Unit test: check!

Testing the API Controller

Next, I wanted to check the API for when users authenticate.  The way my API works (and the way I assume most work) is that the user sends a username (or email) and password, and from that the application sends back an API token.  This makes subsequent calls stateless.  So I'll test my session login controller:

rails g rspec:controller api/v1/sessions

I was happy to see it created spec/controllers/api/v1/sessions_controller_spec.rb, just like how I have my API laid out!

Here’s the first version of my working sessions_controller_spec.rb file:

Running this test:

Wow!  I feel like a real hipster programmer now!  Testing!

More Tests

This is just the beginning.  I am now a convert and subscribe to the theory that you should spend about half of your time writing test cases.  It pays off in the long run.  I have more to go, but from now on with each line of code I write I’ll be writing test cases.  It seems that I still have a lot of catching up to do with the current system.

Boot2Docker with Cisco AnyConnect

Boot2Docker is an OS X app used to create a virtual environment for Docker.  Docker only runs on Linux, so Boot2Docker installs a VM on your Mac (using VirtualBox) and a client that runs locally to communicate with the VM.

I downloaded this and followed the instructions.  You basically just install it with a few clicks.  Once installed, Boot2Docker will be in your Applications folder.  You click on it and you are ready to go.  It kicks off its own terminal window.  Since I use iTerm2, I just start it like so:

boot2docker up

This will give you a few environment variables to export:
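For a recent Boot2Docker they typically look like this (the IP is Boot2Docker's usual default and may differ on your machine):

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=$HOME/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1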

This starts up a VM and Docker daemon that can be used to work with docker.

Once this was up, I ran docker run hello-world.  This gave me a friendly message that everything was up.  So, following its suggestion, I ran docker run -it --rm ubuntu bash.  This took a bit longer to finish as it had to download the ubuntu image.  Subsequent launches take less than a second.

There is another project called Kitematic I dabbled with, but I was happy enough with Boot2Docker that I didn't bother pursuing it.

Cisco AnyConnect VPN problem:

There is an issue with using Boot2Docker and the Cisco AnyConnect VPN.  Basically it's this: you can't run any docker commands, because AnyConnect doesn't allow split tunneling.

What’s worse, is that after terminating a VPC session with AnyConnect (disconnecting), I have to reestablish a static route so that I can talk to boot2docker again:

To get around this the fix is to route your docker calls through your localhost.  That way, regardless of whether you are connected to the VPN or on an island somewhere (or both) you can still connect.

1. Start from scratch

boot2docker delete

2.  Create new boot2docker image

boot2docker init

3.  In VirtualBox, edit the boot2docker-vm settings for the NAT network adapter.

Select ‘Port Forwarding’

4.  Add the Docker port forwarding.

Click OK and exit VirtualBox.

5. Start up the Docker VM
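Same command as before:

boot2docker up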

6.  Export localhost:
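This points the local docker client at the forwarded port on localhost (2375 here is whatever port you mapped in VirtualBox above):

export DOCKER_HOST=tcp://localhost:2375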

7.  Drawbacks and Caveats

Now you have exposed Docker to the world.  Also, for any service that you put on there, like when you launch docker run -p 80:80, you'll have to go into VirtualBox and map port 80 so that it shows up.  Not the greatest solution, but at least it works!

Credits: boot2docker github tracker @jchauncey and @nickmarden. Thanks guys!!!

iOS icon size generator

One of the things you run into while developing iOS applications is that you need several different icon sizes for the various devices.  For an iOS 8 iPhone application, you need at least 4 different sizes: 58pt, 80pt, 120pt, and 180pt.  Not to mention the main icon for the App Store.  If you develop a Universal app for the iPad, there are even more!

I’m sure there are tons of things people use to do this but I thought I’d throw my own solution in the mix as well.

I use ImageMagick with a Python script I wrote.  I like it because it's quick and easy.  Plus, if you are developing in Ruby on Rails and need something like CarrierWave to upload images, you'll already have ImageMagick installed.

Here’s how it works:

1.  Install ImageMagick

brew install imagemagick

2.  Use my Python script

Run the Python Script

With your original sized icon in the same directory as this script, run the script.  The file I put in the directory is called icon2@1024.png.
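The script boils down to one ImageMagick convert call per size.  Here's the same idea as a shell sketch (sizes from the list above; the output filenames are placeholders):

for size in 58 80 120 180; do
    convert icon2@1024.png -resize ${size}x${size} icon-${size}.png
done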

As you can see, it generated all the image sizes you would need for your asset images.

Cisco, Cluster on Die, and the E5-2600v3

I was in a meeting where someone noted that VMware does not support the Cluster on Die feature of the new Intel Haswell E5-2600v3 processors.  The question came: "What performance degradation do we get by not having this supported?  And would it be better to instead use the M3 servers?"

They got this information from Cisco's whitepaper on recommended BIOS settings for the new B200 M4, C240 M4, and C220 M4 servers.

The snarky answer is: you're asking the wrong question.  The nice answer is: "The M4 will give better performance than the M3."

Let’s understand a few things.

1.  Cluster on Die (CoD) is a new feature.  It wasn't in the previous v2 processors.  So, all things being equal, running the v3 without it just means that you don't get whatever increased 'goodness' it provides.

2.  Understand what CoD does.  This article does a good job pointing out what it does.  Look at it this way: as each socket gets more and more cores with each iteration of Intel's new chips, you have a lot of cores looking into the same memory bank, and latency goes up as they contend to find the correct data.  CoD carves the socket up into regions so that data stays closer to the cores using it.  It's almost like giving each group of cores its own bank of cache and memory.  This helps latency go down as more cores are added.

To sum it up: if I have a C240 M3 with 12 cores per processor and I compare it to a C240 M4 with 12 cores per processor with CoD disabled, I still have the same memory contention problem on the M4 cores that I did on the M3s.  When VMware eventually supports CoD, you just get a speed enhancement on top.

Unauthorized UCS Invicta usage

I had a UCS Invicta that I needed to change from a scaling appliance (where multiple Invicta nodes are managed by several others) to a stand-alone appliance.  I thought this could be done in the field, but at this point I don't think it's an option.  The info below is just some things I tried in a lab, and I'm pretty sure they are not supported by Cisco.

While the UCS Invicta appliance has had its share of negative press and setbacks, I still have pretty high hopes for this flash array, primarily because the operating system is so well designed for solid state media.  It's different, full featured, and I think it offers a lot of value for anyone in need of fast storage.  (Everyone?)

The first step was to take out the InfiniBand card and put in the fibre channel card.  I put it in the exact spot the IB card was in.

putting the FC card in the Invicta

CIMC

I had to change the IP address.  I booted up, pressed F8, changed the IP address to 10.1.1.36/24, and didn't touch the password.

The default CIMC password to login was admin / d3m0n0w! (source)

Console

It takes forever to boot up because there is no longer an InfiniBand card in the appliance.  I took it out and replaced it with a fibre channel HBA that came in one of the SSRs that managed the nodes.

The default console user/password is  console / cisco1 (source)

This didn’t work for me I thought because whoever had the system before me changed it.  Fortunately for me, this is Linux.  Rebooted the system and added ‘single’ to the grub menu.   Yes, I securely compromised the system.  Such is the power of the Linux user.

Once on the command line, I entered the passwd command to change the root password to something I could log in with.  I took a look at the /etc/passwd table and noticed that there was no console user.  Hmm, guess that's why I couldn't log in.

I then rebooted.

Reformatting

There were three commands that looked pretty promising.  After digging around I found:

/usr/local/tools/invicta/factoryreset/

/usr/local/tools/invicta/menu  and the accompanying   menu.sh  command.

and

/usr/local/tools/invicta/support/secureWipe.sh 

I tried all of these and then rebooted the system.  Nothing seemed to indicate any progress toward turning this node into a stand-alone appliance.  I have a few emails out, and if I figure it out, I'll post an update.

SSH tunnel for Adium Accounts

Sometimes you’ll be at a place where your IRC client can’t connect to something like irc.freenode.net.  This is where ‘behind corporate firewalls’ turns into ‘behind enemy lines’.  IRC is invaluable because there are people who can help you with your problems… but they are on the outside.

Here is how I solve that for a Mac OS running Adium as my IRC client.

First, you need a place on the internet you can ssh into.  If you have an AWS machine, that can work.  You run the following command:

ssh -D 127.0.0.1:3128 foo@some-server.com

This opens a SOCKS proxy on your local machine at port 3128.  Next, in Adium, when you look at the account settings, you can input the following (SOCKS5, server 127.0.0.1, port 3128):

Adium proxy settings for SSH

Doing this allows me to see things on the #docker, #xcat IRC channels.

Ansible: From OSX to AWS

My goal in this post is to go from 0 to Ansible installed on my Mac and then be able to provision AWS instances ready to run Docker containers.  The code for this post is public on my github account.

OSX Setup

I am running OS X Yosemite.  I use brew to make things easy, so install Homebrew first.  This makes it easy to install Ansible:

brew install ansible

I have one machine up at AWS right now.  So let’s test talking to it.  First, we create the hosts file:
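With a brew install, the default hosts file lives at /usr/local/etc/ansible/hosts, so:

mkdir -p /usr/local/etc/ansible
vi /usr/local/etc/ansible/hosts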

Now we put in our host:

instance1

I can do it this way because I have a file ~/.ssh/config  that looks like the following:
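Mine looks roughly like this (the hostname, user, and key file are placeholders for my actual AWS instance; the user depends on your AMI):

Host instance1
    HostName ec2-54-0-0-1.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/aws-key.pem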

Now we can test:
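The simplest smoke test is Ansible's ping module against the host we just added:

ansible all -m ping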

Where to go next?  I downloaded the free PDF by @lhochstein that has 3 chapters going over the nuts and bolts of Ansible, so I was ready to bake an image.  But first let's look at how Ansible is installed on OS X:

The default hosts file is, as we already saw, in /usr/local/etc/ansible/hosts.  We also have a config file we can create in ~/.ansible.cfg.  More on that later.

The other thing we have is the default modules that shipped with Ansible.  These are located in /usr/local/Cellar/ansible/1.7.2/share/ansible/  (if you’re running my same version)

If you look in this directory and its subdirectories you'll see all the modules that Ansible comes with.  I think all of these modules have documentation in the code, but the easiest way to read it is to run

ansible-doc <module-name>

Since we need to provision instances, we can look at the ec2 module:
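ansible-doc ec2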

This gives us a lot of information on modules you can use to deploy ec2 instances.

An nginx Playbook

Let’s take a step back and do something simple like deploy nginx on our instance using an Ansible Playbook.

I create an Ansible file called ~/Code/ansible/nginx.yml.  The contents are the following:

I then created the file  ~/Code/ansible/files/nginx.conf

Finally, I created the ~/Code/ansible/files/index.html

With this I run the command:

ansible-playbook nginx.yml

If you are lucky, you have cowsay installed.  If so, then you get the cow telling you what's happening.  If not, you can install it:

brew install cowsay

Now, navigate to the IP address of the instance, and magic!  You have a web server configured by Ansible.  You can already see how useful this is!  Now, configuring a web server on an AWS instance is not today's hotness.  The real hotness is creating a Docker container that runs a web server.  So we can just tell Ansible to install Docker.  From there, we would just install our Docker containers and run them.

A Docker Playbook

In one of my previous blog entries, I showed the steps I took to get docker running on an Ubuntu image.  Let’s take those steps and put them in an Ansible playbook:

Here we use some of the built-in modules from Ansible that deal with package management.  You can see the descriptions and what's available by reading Ansible's documentation.

We run this on our host:

ansible-playbook -vvv docker.yml 

And now we can ssh into the host and launch a container:

sudo docker run --rm -i -t ubuntu /bin/bash

This to me is the ultimate way to automate our infrastructure:  We use Ansible to create our instances.  We use Ansible to set up the environment for docker, then we use Ansible to deploy our containers.

All the work for our specific application settings is done with the Dockerfile.

Provisioning AWS Machines

Up until now, all of our host information has been in one host: instance1, which we configured in our hosts file.  Ansible is much more powerful than that.  We're going to modify our ~/.ansible.cfg file to point to a different place for hosts:
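Something like the below (hostfile was the setting name in Ansible 1.x, which newer versions call inventory; the key path is a placeholder):

[defaults]
hostfile = ~/Code/Ansible/inventory
private_key_file = ~/.ssh/my-aws-key.pem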

This uses my AWS keypair for logging into the remote servers I’m going to create.  I now need to create the inventory directory:

mkdir ~/Code/Ansible/inventory

Inside this directory I’m going to put a script: ec2.py.  This script comes with Ansible but the one that came with my distribution didn’t work.
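A fresh copy can be pulled from the Ansible repo and made executable (the script's path in the repo has moved around over time, so treat this URL as an approximation):

curl -O https://raw.githubusercontent.com/ansible/ansible/devel/plugins/inventory/ec2.py
chmod +x ec2.py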


The ec2.py file also expects an accompanying ec2.ini file:

You can modify this to suit your environment.  I’m also assuming you have boto installed already and a ~/.boto file.  If not, see how I created mine here.

Let’s see if we can now talk to our hosts:

ansible all -a date

Hopefully you got something back that looked like a date and not an error.  The nodes returned from this list will all be in the ec2 group.  I think there is a way to use tags to make them further distinct, but I haven't had a chance to do that yet.

We now need to lay out our directory structure for something a little bigger.  The best practices for this are listed here.  My project is a little simpler, as I only have my ec2 hosts and I'm just playing with them.  This stuff can get serious.  You can explore how I lay out my directories and files by viewing my github repository.

The most interesting file of the new layout is my ~/Code/Ansible/roles/ec2/tasks/main.yml file.  This file looks like the below:

I use a variable file that defines the variables referenced in the {{ something }} placeholders.  Again, check out my github repo.  This file provisions a machine (similar to the configuration from my Python post) and then waits for SSH to come up.

In my root directory I have a file called site.yml that tells the instances to come up and then go configure the instances.  Can you see how magic this is?

We run:

ansible-playbook site.yml

This makes Ansible go and deploy one ec2 host.  It waits for it to become available, and then it ssh’s into the instance and sets up docker.  Our next step would be to create a few docker playbooks to run our applications.  Then we can completely create our production environment.

One step closer to automating all the things!

If you found errors, corrections, or just enjoyed the article, I’d love to hear from you: @vallard.