Category Archives: Linux

Boot2Docker with Cisco AnyConnect

Boot2Docker is an OS X app used to create a virtual environment for docker.  Docker only runs on Linux, so Boot2Docker installs a VM on your Mac (using VirtualBox) and a client that runs locally to communicate with the VM.

I downloaded this and followed the instructions.  You basically just install it with a few clicks.  Once installed, boot2docker will be in your Applications folder.  You click on it there and you are ready to go.  It kicks off its own terminal window.  Since I use iTerm2, I just start it like so:

boot2docker up

This will give you a few environment variables to export:
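Typically they look something like this (the IP, port, and cert path depend on your VM and boot2docker version):

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/<you>/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1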

This starts up a VM and Docker daemon that can be used to work with docker.

Once this was up, I ran docker run hello-world and got a friendly message that everything was up.  So, following its suggestion, I ran docker run -it --rm ubuntu bash.  This took a bit longer to finish as it had to download the ubuntu image.  Subsequent launches take less than a second.

There is another project called Kitematic I dabbled with, but I was happy enough with Boot2Docker that I didn't bother pursuing it.

Cisco AnyConnect VPN problem:

There is an issue with using boot2docker and Cisco AnyConnect VPN.  Basically it's this: you can't run any docker commands while connected, because AnyConnect doesn't allow any split tunneling.

What's worse is that after terminating a VPN session with AnyConnect (disconnecting), I have to reestablish a static route so that I can talk to boot2docker again:
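The command is typically something along these lines (the subnet and vboxnet interface number can differ on your machine):

sudo route -n add -net 192.168.59.0/24 -interface vboxnet0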

To get around this, the fix is to route your docker calls through localhost.  That way, regardless of whether you are connected to the VPN or on an island somewhere (or both), you can still connect.

1. Start from scratch

boot2docker delete

2.  Create new boot2docker image

boot2docker init

3.  Open VirtualBox and edit the boot2docker VM's settings for the NAT network adapter.

(Screenshot: the boot2docker VM's NAT adapter settings in VirtualBox)

Select ‘Port Forwarding’

4.  Add the Docker port forwarding.

(Screenshot: adding the port forwarding rule for the Docker port)

Click ok and exit VirtualBox.

5. Start up the Docker VM
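boot2docker up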

6.  Export DOCKER_HOST pointing at localhost:
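Assuming you forwarded the Docker port (2375) to localhost in step 4, that is just:

export DOCKER_HOST=tcp://localhost:2375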

 7.  Drawbacks and Caveats

Now you have exposed Docker to the world.  For any service that you put on there, like when you launch docker run -p 80:80, you'll have to go into VirtualBox and map 80 to 80 so that it shows up.  Not the greatest solution, but at least it works!

Credits: boot2docker github tracker @jchauncey and @nickmarden. Thanks guys!!!

 

Ansible: From OSX to AWS

My goal in this post is to go from 0 to Ansible installed on my Mac and then be able to provision AWS instances ready to run Docker containers.  The code for this post is public on my github account.

OSX Setup

I am running OS X Yosemite.  I use brew to make things easy.  Install homebrew first.  This makes it easy to install Ansible:

brew install ansible

I have one machine up at AWS right now.  So let’s test talking to it.  First, we create the hosts file:
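The default location for a brew install (as we'll see below) is /usr/local/etc/ansible/hosts, so:

mkdir -p /usr/local/etc/ansible
touch /usr/local/etc/ansible/hosts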

Now we put in our host:

instance1

I can do it this way because I have a file ~/.ssh/config  that looks like the following:
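Roughly like this, with the hostname, user, and key as placeholders for your own instance:

Host instance1
    HostName ec2-xx-xx-xx-xx.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/my-aws-keypair.pem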

Now we can test:
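A ping of everything in the inventory is the usual smoke test:

ansible all -m ping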

Where to go next?  I downloaded the free PDF by @lhochstein that has 3 chapters going over the nuts and bolts of Ansible, so I was ready to bake an image.  But first let's look at how Ansible is installed on OS X:

The default hosts file is, as we already saw, in /usr/local/etc/ansible/hosts.  We also have a config file we can create in ~/.ansible.cfg.  More on that later.

The other thing we have is the default modules that shipped with Ansible.  These are located in /usr/local/Cellar/ansible/1.7.2/share/ansible/ (if you're running the same version as me).

If you look in this directory and its subdirectories you'll see all the modules that Ansible comes with.  I think all of these modules have documentation in the code, but the easiest way to read the documentation is to run

ansible-doc <module-name>

Since we need to provision instances, we can look at the ec2 module:
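ansible-doc ec2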

This gives us a lot of information on how to use the module to deploy EC2 instances.

An nginx Playbook

Let’s take a step back and do something simple like deploy nginx on our instance using an Ansible Playbook.

I create an ansible file called ~/Code/ansible/nginx.yml.  The contents are the following:
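A minimal sketch of that playbook (the destination paths and handler names here are just illustrative):

---
- hosts: instance1
  sudo: True
  tasks:
    - name: install nginx
      apt: name=nginx state=present update_cache=yes
    - name: copy the nginx config
      copy: src=files/nginx.conf dest=/etc/nginx/sites-available/default
      notify: restart nginx
    - name: copy the index page
      copy: src=files/index.html dest=/usr/share/nginx/html/index.html mode=0644
    - name: make sure nginx is running
      service: name=nginx state=started
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted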

I then created the file  ~/Code/ansible/files/nginx.conf

Finally, I created the ~/Code/ansible/files/index.html

With this I run the command:

ansible-playbook nginx.yml
If you are lucky, you have cowsay installed.  If so, then you get the cow telling you what’s happening.  If not, then you can install it:
brew install cowsay
Now, navigate to the IP address of the instance, and magic!  You have a web server configured by Ansible.  You can already see how useful this is!  Now, configuring a web server on an AWS instance is not today's hotness.  The real hotness is creating a docker container that runs a web server.  So we can just tell Ansible to install docker.  From there, we would just install our docker containers and run them.

A Docker Playbook

In one of my previous blog entries, I showed the steps I took to get docker running on an Ubuntu image.  Let’s take those steps and put them in an Ansible playbook:
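A minimal sketch of the idea (the real playbook mirrors the apt-key and apt-repository steps from that post; here I just pull docker from the Ubuntu archive to keep it short):

---
- hosts: instance1
  sudo: True
  tasks:
    - name: install docker
      apt: name=docker.io state=present update_cache=yes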

Here we use some of the built-in modules from Ansible that deal with package management.  You can see the descriptions and what's available by reading Ansible's documentation.

We run this on our host:

ansible-playbook -vvv docker.yml 

And now we can ssh into the host and launch a container:

sudo docker run --rm -i -t ubuntu /bin/bash

This to me is the ultimate way to automate our infrastructure:  We use Ansible to create our instances.  We use Ansible to set up the environment for docker, then we use Ansible to deploy our containers.

All the work for our specific application settings is done with the Dockerfile.

Provisioning AWS Machines

Up until now, all of our host information has been done with one host: instance1 that we configured in our hosts file.  Ansible is much more powerful than that.   We’re going to modify our ~/.ansible.cfg  file to point to a different place for hosts:
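Something along these lines (the key path and remote user are placeholders for your own):

[defaults]
hostfile = ~/Code/Ansible/inventory
private_key_file = ~/.ssh/my-aws-keypair.pem
remote_user = ubuntu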

This uses my AWS keypair for logging into the remote servers I’m going to create.  I now need to create the inventory directory:

mkdir ~/Code/Ansible/inventory

Inside this directory I’m going to put a script: ec2.py.  This script comes with Ansible but the one that came with my distribution didn’t work.


The ec2.py file also expects an accompanying ec2.ini file:
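The interesting knobs in that file are along these lines (the region shown is just an example):

[ec2]
regions = us-west-2
regions_exclude =
destination_variable = public_dns_name
vpc_destination_variable = ip_address
cache_path = ~/.ansible/tmp
cache_max_age = 300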

You can modify this to suit your environment.  I’m also assuming you have boto installed already and a ~/.boto file.  If not, see how I created mine here.

Let’s see if we can now talk to our hosts:

ansible all -a date
Hopefully you got something back that looked like a date and not an error.   The nodes returned from this list will all be in the ec2 group.  I think there is a way to use tags to further make them distinct, but I haven’t had a chance to do that yet.
We now need to lay our directory structure out for something a little bigger.  The best practices for this are listed here.  My project is a little simpler as I only have my ec2 hosts and I'm just playing with them.  This stuff can get serious.  You can explore how I lay out my directories and files by viewing my github repository.
The most interesting file of the new layout is my ~/Code/Ansible/roles/ec2/tasks/main.yml file.  This file looks like the below:
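A sketch of the idea (the {{ }} names are whatever you define in your variable file):

---
- name: provision an ec2 instance
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami }}"
    group: "{{ security_group }}"
    region: "{{ region }}"
    wait: yes
  register: ec2_info

- name: wait for SSH to come up
  wait_for: host={{ item.public_dns_name }} port=22 delay=30 timeout=320 state=started
  with_items: ec2_info.instances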

I use a variable file that defines the {{ something }} variables you see here.  Again, check out my github repo.  This file provisions a machine (similar to the configuration from the Python post I did) and then waits for SSH to come up.

In my root directory I have a file called site.yml that tells the instances to come up and then go configure the instances.  Can you see how magic this is?
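Roughly, site.yml has two plays: one runs the ec2 role locally to provision, and the other configures the new instances (the role names here are illustrative):

---
- name: provision instances
  hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - ec2

- name: configure the instances
  hosts: ec2
  sudo: True
  roles:
    - docker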

We run:

ansible-playbook site.yml

This makes Ansible go and deploy one ec2 host.  It waits for it to become available, and then it ssh’s into the instance and sets up docker.  Our next step would be to create a few docker playbooks to run our applications.  Then we can completely create our production environment.

One step closer to automating all the things!

If you found errors, corrections, or just enjoyed the article, I’d love to hear from you: @vallard.

 

Why Docker Changes Everything

There are many shifts we talk about in IT.  Here are 2 recent examples that come to mind which most people are familiar with:

  • The shift from physical to virtual
  • The shift from on-prem infrastructure to cloud based consumption IT.

Ultimately it is an organization with a clear vision that disrupts the status quo and creates a more efficient way of doing things.  Those 2 examples were the result of the brilliant people who started VMware and Amazon Web Services (AWS).

VMware was incredible and made it so easy.  How many vMotion demos did you see?  They nailed it and their execution was outstanding.  And so vCenter with ESX still stands as their greatest achievement.  But unfortunately, nothing they've done since has really mattered.  Certainly they are pounding the drum on NSX, but the implications and the payoffs are nothing like what an organization could get when it went from p2v (physical to virtual).  VCAC, or vRealize, or whatever they decided to rename it this quarter, is not catching on.  And vCheese, or vCHS, or vCloud Air (cause 'Air' worked for Apple) isn't all that either.  I don't have any customers using it.  I will admit, I'm pretty excited about VSAN, but until they get deduplication, it's just not that hot.  And solutions from providers like SimpliVity have more capabilities.  But there can be no doubt that vSphere/ESX is their greatest success.

AWS disrupted everybody.  They basically killed the poor hardware salesman that worked in the SMB space.   AWS remains far in the leadership quadrant.

My customers, even the ones I could argue that are not on the forefront of technology, are migrating to AWS and Azure.  Lately, I’ve been reading more and more about Digital Ocean.  I remember at IBM back in 2003 hearing about Utility Computing.  We even opened an OnDemand Center, where customers could rent compute cycles.  If we had the clear vision and leadership that AWS had, we could have made it huge at that time.  (Did I just imply that IBM leadership missed a huge opportunity?  I sure did.)

Well today we have another company that has emerged on the forefront of technology that is showing all others the way and will soon cause massive disruption.  That company is Docker.  Docker will soon be found on every public and private cloud.  And the implications are huge.

What are Containers?

I am so excited about Docker I can barely contain myself.  Get it?  Contain.. Docker is a container?  No?  Ok, let me explain Docker.  I was working for IBM around 2005 and we were visiting UCS talking about xCAT and supercomputers, and the admins there started telling us about Linux containers, or LXCs.  It was based on the idea of a Union File System.  Basically, you could overlay files on top of each other in layers.  So let's say your base operating system had a file called /tmp/iamafile with the contents "Hi, I'm a file".  You could create a container (which I have heard explained as a chroot environment on steroids, cause it's basically mapped over the same root).  In this container, you could open the file /tmp/iamafile and change the contents to "I am also a file modified".  Now that file will get a Copy-on-Write.  Meaning, only the container will see the change.  It will also save the change.  But the basic underlying file on the root operating system sees no change.  It's still the same file that says "Hi, I'm a file".  Only the instance in the container has changed.

It's not just files that can be contained in the container.  It's also processes.  So you can run a process in the container, in its own environment.

That technology, while super cool, seemed to be relegated to the cute-things-really-geeky-people-do category.  I dismissed it and never thought about it again until I saw Docker was using it.

Why Docker made Containers Cool

Here are the things Docker gave that made it so cool, and why it will disrupt many businesses:

1.  They created a build process to create containers.

How do most people manage VMs?  Well, they make this golden image of a server.  They put all the right things in it.  Some more advanced people will script it, but more often than not, I just see some blessed black box.

Our blessed black box

We don’t know how this image came to be, or what’s in it.  But it’s blessed.  It has all our security patches, configuration files, etc.  Hopefully that guy who knows how to build it all doesn’t quit.

This is not reproducible.  The cool kids then say: "We'll use Puppet because it can script everything for us."  Then they talk about how they know this cool puppet file language that no one else knows and feel like puppet masters.  So they build configuration management around the image.

With Docker this is not necessary.  I just create a Dockerfile that has all the things in it that a puppet script or chef recipe had, and I'm done.  But not only that, I can see how the image was created.  I have a blueprint that syntactically is super easy.  There are only about a dozen instructions in the syntax.  Everything else is just how you would build the code.

We also don’t try to abstract things like Puppet and Chef do.  We just say we want the container to install nginx and it does by doing:

RUN apt-get -y install nginx

That’s it.  Super easy to tell what’s going on.  Then you just build your image.
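For example, a tiny Dockerfile for an nginx image might look like this (the paths are illustrative):

FROM ubuntu:14.04
RUN apt-get update && apt-get -y install nginx
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Then docker build -t mynginx . gives you an image anyone can read, rebuild, and run.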

2. Containers are Modular

The build process with Docker is incredible.  When I first looked at it, I thought: great, my Dockerfile is going to look like a giant monolithic kickstart %post script like the ones I used to make when installing Red Hat servers.

But that's not how it works.  Containers build off of each other.  For the application I'm working on now, I have an NGINX container, a ruby container, an app server container, and my application container sitting on that.  It's almost like having modules.  The modules just sit on top of each other.  Turtles all the way down.

This way I can mix and match different containers.  I might want to reuse my NGINX container with my python app.

3.  Containers are in the Cloud: Docker Hub

What github did for code, docker has done for system management.  I can browse how other people build nginx servers.  I can see what they do when I look at Docker Hub.  I can even use their images.  But while that's cool and all, the most important aspect?  I can download those containers and run them anywhere.

You have an AWS account?  You can just grab a docker image and run it there.  Want to run it on Digital Ocean?  OpenStack?  VMware?  Just download the docker image.  It's almost like putting your VM templates in the cloud and being able to pull them anywhere.

What this gives us is app portability.  Amazing app portability.  All I need is a generic Ubuntu 14.04 server and I can get my application running on it faster than any way I've been able to do it before.

How many different kinds of instances would I need in AWS?  How many templates in my vCenter cluster?  Really, just one: one that is ready for docker to run.

Who Gets Disrupted?

So now that we have extreme portability we start to see how this new model can disrupt all kinds of “value added” technologies that other companies have tried to fill the void on.

Configuration tools – Puppet, Chef, I don't need to learn you anymore.  I don't need your agents asking to update and checking in anymore.  Sure, there are some corner cases, but for the most part, I'll probably just stick with a push-based tool like Ansible that only needs a few commands to get my server ready for Docker.  The rest of the configuration is done in the Dockerfile.

Deployment tools  – I used Capistrano for deploying apps on a server.  I don’t need to deploy you any more.  Docker images do that for me.

VMware – Look at this blog post from VMware and then come back and tell me why Docker needs VMware?  Chris Wolf tries to explain how the vCloud Suite can extend the values of containers.  None of those seem to be much value to me.  Just as VMware has tried to commoditize the infrastructure, Docker is now commoditizing the virtual infrastructure.

AWS – Basically any cloud provider is going to get commoditized.  The only way AWS or Google App Engine or Azure can get you to stick around is by offering low prices OR getting you hooked on their APIs and services.  So if you are using DynamoDB, you can't get that anywhere but AWS, so you are hooked.  But if I write my container with that capability, I can move it to any cloud I want.  This means it's up to the cloud providers to innovate.  It will be interesting to hear what more AWS says about containers at re:Invent next week.

Who Can Win?

Docker is already going to win.  But more than Docker, I think Cisco has potential here.  Cisco wants to create the Intercloud.  What better way to transport workloads through the Intercloud than with Docker containers?  Here is where Cisco needs to execute:

1.  They have to figure out networking with Docker.   Check out what Cisco has done with the 1000v on KVM.  It works now, today with containers.  But there’s more that needs to be done.  A more central user friendly way perhaps?  A GUI?

2.  They have to integrate it into Intercloud Director and somehow get the right hooks to make it even easier.  Intercloud Director is off to a pretty good start.  It actually can transport VMs from your private data center to AWS, Azure, and Dimension Data clouds, but it has a ways more to go.  If we had better visibility into the network utilization of our containers and VMs, both on-prem and off-prem, we would really have a great solution.

What about Windows?

Fine, you say.  All good, but our applications run on Microsoft Servers.  So this won’t work for us.  Well, you probably missed this announcement.  So yeah, coming soon, to a server near you.

Conclusion

It's 2014.  Docker is about a year old.  I think we're going to hear lots more about it.  Want another reason why it's great?  It's open source!  So start playing with Docker now.  You'll be glad you got ahead of the curve.

 

Docker

I’m finally jumping in on the Docker bandwagon and it is pretty exciting.  Here’s how I did a quick trial of it to make it work.

Install OS

I could do this in my AWS account, or I can do it with my local private cloud.  So many options these days.  I installed the latest Ubuntu 14.04 Trusty server on my local cloud.  It probably would have been just as easy to spin it up on AWS.

I had to get my proxy set up correctly before I could get out to the Internet.  This was done by editing /etc/apt/apt.conf.  I just added the one line:
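Something like this, with your own proxy host and port:

Acquire::http::Proxy "http://proxy.example.com:80";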

Configure for Docker

I followed the docker documentation on this.  Everything was pretty flawless.  I ran:
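Roughly, it was the usual Ubuntu steps from the docs plus a test run (the test run at the end is the command that had problems):

# add Docker's apt repository per the docs, then:
sudo apt-get update
sudo apt-get install -y lxc-docker
sudo docker run -i -t ubuntu /bin/bash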

That last command had problems: the error showed that docker couldn't get out to the Internet to pull the image.

This is because I needed to put a proxy in my docker configuration.  A quick Google search pointed me at the docker defaults file.

I added my proxy host and restarted the docker service:
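Something like this (the proxy host and port are placeholders):

# /etc/default/docker
export http_proxy="http://proxy.example.com:80/"

sudo service docker restart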

Rerunning the same docker run command:

And I see a ton of stuff get downloaded. Looks like it works!

Now I have docker on my VM.  But what can we do with it?  Containers are used for applications. Creating a python application would probably be a good start. I searched around and found a good one on Digital Ocean’s site.

I like how it showed how to detach: CTRL-P and CTRL-Q. To reattach:

Get the container ID, then reattach:
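sudo docker ps
sudo docker attach <container-id>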

Docker is one of many interesting projects I’ve been looking at lately. For my own projects, I’ll be using Docker more for easing deployment. If you saw from my article yesterday, I’ve been working on public clouds as well as internal clouds. Connecting those clouds together and letting applications migrate between them is where I’ll be spending a few more hours on this week and next.

IP Masquerading (NAT in Red Hat)

In my lab I have a server that is dual homed.  It is connected to the outside network on one interface (br0) and the internal network (br1) is connected to the rest of my VM cluster.

I want the VMs to be able to get outside.  The way I did that (on RedHat) was to create a few iptables rules.  I've been doing this for 10+ years now, but I keep forgetting the syntax.

So here it is:
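Something like this, assuming br0 is the outside-facing interface:

iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
service iptables save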

Then, of course, you have to enable forwarding in /etc/sysctl.conf:
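net.ipv4.ip_forward = 1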

Finally, run
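sysctl -p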

for those changes to take effect.

Installing Cisco DCNM on Red Hat Linux

DCNM is Cisco’s GUI for managing MDS and Nexus products.  It’s pretty great for getting a visual of how things are configured and performing.

I thought I would go into a little more detail than I’ve seen posted online about installing DCNM on RedHat Linux.  In this example we’ll be installing two servers.  One server will be our app server and the other one will be our Postgres database server.  You can do it all in just one server, but where is the fun in that?

1. Download binaries

From Cisco’s homepage, click support.  In the ‘Downloads’ section start typing in ‘Data Center Network’.  (DCNM showed no results when I tried it) You’ll see the first entry is Cisco Prime DCNM as shown below.

(Screenshot: Cisco Prime DCNM in the download search results)

We will be using DCNM 6.3.2 since it's the latest and works great.  We need to download 2 files.

(Screenshots: the two DCNM 6.3.2 downloads, the installer and the silent installer properties file)

The installer is really all you need, but it's kind of nice to use the silent installer to script the installation process.

2.  Initial VM installation

Using the release notes as our guide as well as other installation instructions we will be creating two VMs with the following characteristics:

Processors 2 x 2GHz cores
Memory 8GB (8096MB)
Storage 64-100GB

 

For this installation we're just doing a test; for real use you may need more space.  Also, notice that the release notes state that when doing both LAN and SAN monitoring with DCNM you need to use an Oracle database.  A Postgres database is supported on just SAN for up to 2000 ports or just LAN for up to 1000 ports.

Create these VMs.  I’m using KVM but you can use vSphere or Hyper-V.

3.  Operating System Installation 

The installation guides show that RHEL 5.4/5.5/5.6/5.7/6.4 (32-bit and 64-bit) are supported.  I’m using RHEL 6.5 x86_64.  It comes by default with PostgreSQL 8.4.  So I might be living on the edge a little bit, but I had 0 problems with the OS.

I installed two machines:

dcnm-app 10.1.1.17
dcnm-db 10.1.1.18

During the installation, I changed 2 things, but other than setting up the network I accepted the defaults with nearly everything.

3.1 dcnm-app

I set up as a Desktop as shown below.

(Screenshot: the 'Desktop' software selection during the RHEL install)

 

3.2 dcnm-db

Set up as a Database server as shown below

(Screenshot: the 'Database Server' software selection during the RHEL install)

4. Operating System Configuration

There are several quick things to do to get this up and running.  You probably have OS hardening procedures at your organization, but this is how I did it to get up and running.  Do the following on both servers.

4.1 Disable SELinux

Does anybody besides Federal agencies use this?  Edit /etc/sysconfig/selinux.

Change the line to be:
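SELINUX=disabled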

This then requires a reboot.

4.2 Disable iptables

Yeah, I'm just turning the firewall off.  There are some ports pointed out in the installation guide you can use to create custom firewall rules, but I'm just leaving things wide open.
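That's just:

service iptables stop
chkconfig iptables off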

4.3 Enable YUM

If you set your server up with the RedHat network then you are ready to go.  I’m just going to keep it local bro!  I do this by mounting an expanded RedHat installation media  via NFS.  Here’s how I do it:
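Roughly (the NFS server and export path are placeholders):

mkdir -p /mnt/rhel
mount -t nfs nfs-server:/export/rhel6.5 /mnt/rhel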

If you are cool then you can put it in /etc/fstab so it persists.

I then created the file /etc/yum.repos.d/local.repo.  I edited it to look like the below:
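Something like this, assuming the repodata sits at the top of the mount:

[local]
name=Local RHEL 6.5
baseurl=file:///mnt/rhel
enabled=1
gpgcheck=0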

4.4 Install additional RPMs as needed

One that you will need on dcnm-app is glibc.i686:
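yum -y install glibc.i686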

5. Database Installation on dcnm-db

This step is only needed on dcnm-db.  Using the info from the database installation guide we are using Postgres.  If you followed above like I did then you should just be able to see all the postgres RPMs installed.

If not, then you can install them all with
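yum -y install postgresql postgresql-server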

Next, start up the database:
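service postgresql initdb
service postgresql start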

With the default installation of Postgres on RedHat, a user named postgres is created who pretty much does everything. We use him to configure the database.

5.1 Postgres Config

Postgres on RHEL6.5 doesn’t accept network connections by default.  That makes it more secure.  To enable our App server to connect to it, we need to change two files.

/var/lib/pgsql/data/postgresql.conf

Modify this file by adding the IP address for it to listen on.  By default it's set to only listen for connections on 'localhost'.
Change this line:
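#listen_addresses = 'localhost'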

To look like this:
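listen_addresses = 'localhost, 10.1.1.18'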

Or you can just make it '*' (that says: listen on every interface).  In my case this works because my database server's IP address is 10.1.1.18, so I'm listening on eth0 and the local interface.

/var/lib/pgsql/data/pg_hba.conf

Modify this file by adding in a line for our DCNM user.  At the bottom of the file I added this line:
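Something like this, where the database name and user must match whatever you give the DCNM installer (dcmdb and dcnmuser here are just examples) and 10.1.1.17 is the app server:

host    dcmdb    dcnmuser    10.1.1.17/32    md5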

Once those two files are changed, restart postgres.

Now you should be ready to rock the database server.  We'll check it in a minute.  Now let's go over to the app server.

6.  Configure the App Server

You need to log in via either VNC or the console to get X Windows.  VNC is probably the easiest way to see it remotely.

Start the VNC server and then you can VNC into it.
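If the VNC server isn't installed yet, it's a quick yum install; then start a session:

yum -y install tigervnc-server
vncserver :1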

You'll then need to copy the dcnm installer that you downloaded from Cisco in step 1 as well as the properties file that you downloaded.  I put mine in the /tmp directory.  Change the installer to be executable by running:
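chmod +x /tmp/<dcnm-installer>.bin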

6.1 Modify the installer.properties

The dcnm-silent-installer-properties file is a zip file.  When expanded it has a directory called Postgres+Linux.  In this directory is the file we will use for our installation.  For the most part, I left it alone.  I just changed a few of the entries:

 

With that, we are ready to run!

7. Install DCNM

On the App server, we finally run:
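Roughly the following; check the install guide for the exact silent-install flags and use your actual installer filename:

./<dcnm-installer>.bin -i silent -f /tmp/installer.properties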

If all goes well, you should be able to open a browser to dcnm-app and see the Cisco login screen.

Hurray!

Configure VMware from scratch without Windows

One of the things that bugs me about vCenter (still) is that it is still very tied to the Windows operating system.  You have to have Windows to set it up, and trying to go without Windows is still somewhat difficult.  In my lab I'm trying to get away from Windows.  I have xCAT installed to PXE boot UCS blades to do what I want.  It's great, and it's automated.  But when I installed 8 nodes to be ESXi hosts I quickly realized I needed vCenter to demonstrate this and use it as others would.

That requires vCenter.  VMware has had the vCenter appliance out for a few years now.  It runs on SLES and comes preconfigured.  The only problem is installing it when you have no vCenter client, because today those clients are only made for the Windows operating system.  How to get around this?

ovftool was the thing I found that did the job for me.  I found the link by reading the ever prolific Virtual Ghetto post on deploying ovftool on ESXi.  Since I had Linux, installing ovftool on the ESXi host wasn’t necessary for me.  Instead I just installed it on my Linux server (with some trouble since it deploys this stub and you have to make sure you don’t modify the file).

I ran the command:

ovftool -ds=NFS1 VMware-vCenter-Server-Appliance-5.0.5201-1476389_OVF10.ova vi://root:password@node01

After that, I watched my DHCP server and saw that it gave the vCenter appliance the IP address of 172.20.200.1.  Hopefully you have DHCP or you might be hosed.

Then after finding the docs, I intuitively opened my web browser to https://172.20.200.1:5480 (everyone knows that port number, right?).  I then logged in with user 'root' and password 'vmware' and started the auto setup.  After changing the IP address and restarting the appliance I was pretty golden.

Once configured, log into the appliance at https://172.20.1.101:9443/vsphere-client/ and then be stoked that you have Flash Player already installed and that it works.  Oh, you didn't have Flash Player installed on your Linux server?  That sucks, I didn't either.  Guess that's another hoop we have to jump through.  But wait, then you find that Flash 11.2.0 is the last Flash that has been released for Linux.  Guess what?  VMware requires Flash version 11.5.  Nice.

https://communities.vmware.com/message/2319263

At this point I just copied a Windows VM that I had laying around and started managing it from there.  The moral of the story is that you can’t do a Windows free VMware environment.  Sure, I could have done fancy scripting and managed it all remotely with some of their tools, but if I’m going to be doing all that, why should I pay for VMware?  I’d be better off just doing straight native KVM.  YMMV.

FCoE with UCS C-Series

I have in my lab a C210 that I want to turn into an FCoE target storage.  I'll write more on that in another post.  The first challenge was to get it up with FCoE.  It's attached to a pair of Nexus 5548s.  I installed RedHat Linux 6.5 on the C210 and booted up.  The big issue I had was that even though RedHat Linux 6.5 comes with the fnic and enic drivers, the FCoE never happened.  It wasn't until I installed the updated drivers from Cisco that I finally saw a flogi.  But there were other tricks you had to do to make the C210 actually work with FCoE.

C210 CIMC

The first place to start is the CIMC (with the machine powered on) to configure the vHBAs.  From the GUI go to:

Server -> Inventory

Then on the work pane, select the 'Network Adapters' tab, then down below select vHBAs.  Here you will see two vHBAs by default.  From here you have to set the VLAN that the vHBA will go over.  Clicking 'Properties' on the interface, you have to select the VLAN.  I set the MAC address to 'AUTO' based on a TAC case I looked at, but this never persisted.  From there I entered the VLAN: VLAN 10 for the first interface and VLAN 20 for the second interface.  VLAN 10 matches the FCoE VLAN and VSAN that I created on the first Nexus 5548.  On the other Nexus I created VLAN 20 to match FCoE VLAN 20 and VSAN 20.

This then seemed to require a reboot of the Linux Server for the VLANs to take effect.  In hindsight this is something I probably should have done first.

RedHat Linux 6.5

This needs to have the Cisco drivers for the fnic.  You might want to install the enic drivers as well.  I got these from cisco.com.  I used the B-Series drivers, which was a 1.2GB file I had to download, all to get a 656KB driver package.  I installed the kmod-fnic-1.6.0.6-1 RPM.  I had a customer who had updated to a later kernel and he had to install the kernel-devel RPM and recompile the driver.  After it came up, it worked for him.

With the C210 I wanted to bond the 10Gb NICs into a vPC.  So I did an LACP bond with Linux.  This was done as follows:

Created file: /etc/modprobe.d/bond.conf

alias bond0 bonding
options bonding mode=4 miimon=100 lacp_rate=1

Created file: /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=172.20.1.1
ONBOOT=yes
NETMASK=255.255.0.0
STARTMODE=onboot
MTU=9000

Edited the /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
MASTER=bond0
SLAVE=yes
HWADDR=58:8D:09:0F:14:BE
TYPE=Ethernet
UUID=8bde8c1f-926f-4960-87ff-c0973f5ef921
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Edited the /etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE=eth3
MASTER=bond0
SLAVE=yes
HWADDR=58:8D:09:0F:14:BF
TYPE=Ethernet
UUID=6e2e7493-c1a1-4164-9215-04f0584b338c
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Next restart the network and you should have a bond. You may need to restart this after you configure the Nexus 5548 side.

service network restart

Nexus 5548 Top
Log in and create VPCs and stuff.  Also don’t forget to do the MTU 9000 system class.  I use this for jumbo frames in the data center.

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
    multicast-optimize
system qos
  service-policy type network-qos jumbo

One thing that drives me crazy is that you can’t do sh int po 4 to see that the MTU is 9000. From the documents, you have to do

sh queuing int po 4

to see that your jumbo frames are enabled.

The C210 is attached to ethernet port 1 on each of the switches.  Here’s the Ethernet configuration:

The ethernet:

interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,10
  spanning-tree port type edge trunk
  channel-group 4

The port channel:

interface port-channel4
  switchport mode trunk
  switchport trunk allowed vlan 1,10
  speed 10000
  vpc 4

As you can see VLAN 10 is the VSAN. We need to create the VSAN info for that.

feature fcoe
vsan database
  vsan 10
vlan 10
  fcoe vsan 10

Finally, we need to create the vfc for the interface:

interface vfc1
  bind interface Ethernet1/1
  switchport description Connection to NFS server FCoE
  no shutdown
vsan database
  vsan 10 interface vfc1

Nexus 5548 Bottom
The other Nexus has a similar configuration.  The difference is that instead of VSAN 10, VLAN 10, we use VSAN 20, VLAN 20 and bind the FCoE to VSAN 20.  In the SAN world, we don't cross the streams.  You'll see that the VLANs are not the same in the two switches.

Notice that in the below configuration, neither VLAN 20 nor VLAN 10 is allowed through the peer link, so you'll only see VLAN 1 enabled on the vPC:

N5k-bottom# sh vpc consistency-parameters interface po 4

Legend:
Type 1 : vPC will be suspended in case of mismatch

Name                        Type  Local Value             Peer Value
--------------------------  ----  ----------------------  ----------------------
Shut Lan                    1     No                      No
STP Port Type               1     Default                 Default
STP Port Guard              1     None                    None
STP MST Simulate PVST       1     Default                 Default
mode                        1     on                      on
Speed                       1     10 Gb/s                 10 Gb/s
Duplex                      1     full                    full
Port Mode                   1     trunk                   trunk
Native Vlan                 1     1                       1
MTU                         1     1500                    1500
Admin port mode             1
lag-id                      1
vPC card type               1     Empty                   Empty
Allowed VLANs               -     1                       1
Local suspended VLANs       -     -                       -

But on the individual nodes you’ll see that the VLAN is enabled in the VPC. VLAN 10 is carrying storage traffic.

# sh vpc 4

vPC status
------------------------------------------------------------------------------
id   Port   Status   Consistency   Reason     Active vlans
---  -----  -------  ------------  ---------  ------------
4    Po4    up       success       success    1,10

Success?

How do you know you succeeded?

N5k-bottom# sh flogi database
--------------------------------------------------------------------------------
INTERFACE    VSAN    FCID        PORT NAME                  NODE NAME
--------------------------------------------------------------------------------
vfc1         10      0x2d0000    20:00:58:8d:09:0f:14:c1    10:00:58:8d:09:0f:14:c1

Total number of flogi = 1.

You'll see the login.  If not, then try restarting the interface on the Linux side.  You should see a different WWPN in each Nexus.  Another issue you might have is that the VLANs may be mismatched, so make sure you have the right VLAN on the right port.

Let me know how it worked for you!

MediaWiki Installation on RedHat 5.5

In the modern data center, things like IPs, user accounts, passwords, and the rest of what you used to keep in Excel spreadsheets should be rolled into the management tools.  That way, you always have the most current information.  Static Word and Excel files and the like are old news.  Today you can see those things start to get rolled up into vCloud Director, OpenStack and others.  But for now, most people are still doing Excel spreadsheets.

This is stupid.  Please, at least use a wiki.  Catch up to 2005.

MediaWiki is one that I've used for years.  It's easy to install and do stuff with, and the syntax doesn't take too long to learn.

Here’s how I set it up:

1.  Download Media Wiki on your Linux Server

Go to Media Wiki and download the latest stable.

cd /var/www/html
rm -rf *
wget http://download.wikimedia.org/mediawiki/1.21/mediawiki-1.21.1.tar.gz
tar zxvf media*
mv mediawiki-1.21.1/* .
rm -rf mediawiki-1.21.1

2.  Installing the Linux Environment

Get PHP and MySQL installed on your server.  My server is a Red Hat 5.5 (yes, old) virtual machine that I've had for about 2 years.  I haven't updated to 6.x.  The easiest thing to do would be to install a new server.  CentOS 6.4 might be good, but a challenge every now and then is fun, yeah?  So to get it working, you have to have at least PHP 5.3.x.  To update, I had to update my OS.  Since I didn't get my subscription set up right with Red Hat, I just figured I'd use CentOS to update.  That was pretty easy.  I just did this:

wget http://mirror.centos.org/centos/5/os/x86_64/CentOS/centos-release-5-9.el5.centos.1.x86_64.rpm
wget http://mirror.centos.org/centos/5/os/x86_64/CentOS/centos-release-notes-5.9-0.x86_64.rpm
rpm -ql -p centos-release-5-9.el5.centos.1.x86_64.rpm # just to see what was in it, yep, its got the repo!
rpm -Uvh centos-release-5-9.el5.centos.1.x86_64.rpm centos-release-notes-5.9-0.x86_64.rpm # install repos

From here, I removed my older versions of php. This is just:

rpm -qa | grep mysql
rpm -qa | grep php

Then I used some:

yum -y remove

Then I updated everything:

yum -y update

This took a while. Finished, came back. Everything updated. Now I installed the right packages:

yum -y install php53 php53-mysql mysql-server php53-xml

There may have been several other RPMs that you’ll need as dependencies, but that should get you started. That’s how we got up. Don’t forget to now enable mysql and restart apache:

service httpd restart
service mysqld restart
chkconfig --level 345 httpd on
chkconfig --level 345 mysqld on

3.  Configuring via the Web Interface

Once there, go to http://<yourserver>/

You should see the MediaWiki setup screen.

4.  Creating Content

Going to the next page it'll start asking you questions and eventually you'll have yourself a wiki setup.  The first thing I started looking at doing was adding a table for IP addresses, which ended up as a simple table of IPs, hostnames, and notes.
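The wiki markup for that kind of table is along these lines (the hosts here are just examples):

{| class="wikitable"
! IP Address !! Hostname !! Notes
|-
| 10.1.1.17 || dcnm-app || DCNM application server
|-
| 10.1.1.18 || dcnm-db || DCNM database server
|}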

This is good and helps us to know where things are.  I started to create several pages for different VLANs.  It can be updated by hand, but I wish it updated itself in place.  Not the best, but ok for now.

 

5.  Editing Help

Go here: http://www.mediawiki.org/wiki/Help:Editing to see all the syntax to use to do cool formatting.

Finally, now you have yourself a wiki to keep things in. Welcome to 2005.  You are awesome.  No shared Excel spreadsheet with multiple outdated copies.  Now you just have to get everyone to buy into using it.  To do that: Be the example.  Use it, refer people to it.  Pretty soon they’ll catch on.

But there is a better way right?  What could that be?  The truth is, to manage effectively, you really need to integrate the information into your management toolset.  Much in the way UCS keeps track of BIOS versions, settings, VLANs, etc, you need some kind of tool that does that.  Today you can do that with OpenStack, vCloud Director, and some others.  I’m still not sold on any of them at this point but as I start to play with OpenStack more, I hope to give more guidance and thoughts.

SSH through proxy

The problem of the day: I have a computer on a local intranet that cannot SSH out into the real world.  There is, however, a proxy server on my network that I can configure in my browser to get outside internet access…

But I want ssh.  So… after a bit of internet searching and then finally some nagging to a friend who knows this stuff better than I do, we came up with the following:

1.  Download connect.c

2.  Compile connect.c on your Linux server:
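Dropping the binary somewhere in your path, /usr/local/bin in this sketch:

gcc -o connect connect.c
sudo cp connect /usr/local/bin/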

3.  Edit /etc/ssh/ssh_config by appending this last line:
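The proxy host and port are placeholders; the path matches where connect was installed above:

ProxyCommand /usr/local/bin/connect -H proxy.example.com:8080 %h %p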

4.  SSH normally to where you need to go.

That's it!  Once you get SSH through, then anything can happen.  It's the ultimate firewall poker.  Back doors, etc.  You just opened up Pandora's box.