UCS: Rainbows, 6500s, and the Soul of a Server

My day job is to be an advocate for Cisco UCS in my customers’ data centers.  It’s a great gig.  It’s much easier to back a product when you actually believe in it.  I thought I’d use this blog to write down some of the ideas I talk about with my customers.

Rainbows In the Data Center

Rack-mount servers are still all the rage in many organizations.  And so is Gigabit Ethernet.  VMware best practices suggest separate networks for management, vMotion, I/O, and VM traffic.  Using NIC teaming, you get something that looks like this beautiful picture:

[Image: a rainbow of color-coded network cables flowing from rack-mount servers]

(source:  Not sure, some Cisco person’s PowerPoint I stole)

You’ll notice that people color-code these cables so they can tell which network goes to what.  The result is a beautiful rainbow flowing out of each server.  Then Rainbow Brite, her sprite buddy Twink, and her stallion Starlite can aid you in managing this big mess.

[Image: Rainbow Brite with Twink and Starlite]

The account team at Cisco responsible for selling you switches loves this, because every cable you buy needs a switch port to plug into.  This is why we tell everybody that going with UCS is a strategic decision.  Do you want to continue investing in lots of Gigabit Ethernet switches, or consolidate with 10GbE?  UCS gets rid of the rainbows.  Yes, rainbows are pretty, but you don’t want them to get out of hand.  Rainbows do strange things to people (especially if you have more than one).  Sometimes they can be just too much, and too intense.

UCS gets you instant 10GbE as well as consolidation.  I ran the numbers for a customer who was looking to buy a “pod” of rack-mount servers and compared it to the equivalent UCS build.  Each “pod” consisted of 44 two-socket servers.  Each server required six Gigabit Ethernet ports as well as two HBAs to connect to the SAN.  The comparison was pretty eye-opening.  Going with a UCS strategy delivered the following benefits:

  • 10% reduction in acquisition cost (some of this had to do with not having to buy new network switches)
  • 57% reduction in physical rack space
  • A dramatic difference in Ethernet cables required:  28 compared to 308
  • A dramatic difference in Fibre Channel ports required:  4 compared to 88

Think this can happen with legacy non-UCS blades?  Not so much.  There are still savings, but nothing as dramatic.  So if you like dealing with more infrastructure, stringing cables, and configuring port policies on your network switches for every server in your environment, UCS may not be for you.
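
For the curious, here’s a back-of-envelope sketch of how those cable counts can fall out.  The bill of materials wasn’t itemized here, so the one out-of-band management port per rack server and the UCS uplink counts below are my assumptions, used only to show the arithmetic:

```python
# Back-of-envelope cable math for the 44-server "pod" comparison.
# Assumptions (mine, not from the original bill of materials):
#   - each rack server adds 1 out-of-band management port to its 6 GbE ports
#   - UCS: 8 blades per chassis, 4 x 10GbE uplinks per chassis,
#     4 fabric interconnect uplinks, and 2 FC uplinks per fabric interconnect
servers = 44

rack_eth = servers * 6 + servers * 1      # 264 data + 44 mgmt = 308 cables
rack_fc = servers * 2                     # 2 HBA ports each = 88 FC ports

chassis = -(-servers // 8)                # ceil(44 / 8) = 6 chassis
ucs_eth = chassis * 4 + 4                 # 24 chassis uplinks + 4 = 28 cables
ucs_fc = 2 * 2                            # 2 per fabric interconnect = 4 ports

print(f"Ethernet cables: {ucs_eth} (UCS) vs {rack_eth} (rack)")   # 28 vs 308
print(f"FC ports:        {ucs_fc} (UCS) vs {rack_fc} (rack)")     # 4 vs 88
```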

UCS is the new Catalyst 6000

I get the chance to walk into the belly of many data centers.  One common feature that we see there is the venerable Cisco Catalyst 6000 switch. Cisco has been milking this baby since 1999.

What makes this thing so successful?  Removable line cards and supervisor modules.  As people have migrated from Fast Ethernet to Gigabit Ethernet to 10 Gigabit Ethernet, they’ve just upgraded the line cards or supervisor modules.  They made the strategic decision years ago to go with this platform, and it’s working out great.

UCS Fabric Interconnects have a similar value proposition in the compute space.  You buy them as part of your strategy, and then adding blades is just like adding remote line cards instead of the fixed line cards in a 6509.  Going with this strategy provides several benefits:

  • A cost-effective way of getting servers online.  These Cisco servers are extremely price-competitive and very attractive.  Once you have the infrastructure, adding blades is so cost-compelling that it’s hard to see the rationale for not going with another blade when you have the chance.  This isn’t to say you’re necessarily locked into Cisco.  You still have options.  It’s just that the other options aren’t as attractive anymore.
  • Let’s suppose that 5 years from now Cisco decides it wants to start selling some esoteric micro servers in a new chassis or something.  (Full disclosure:  I have no idea if they are planning this and have seen nothing on the roadmap.)  Let’s suppose that this new chassis has 100 slots for these servers.  If you had bought any other blade system, you’d need to throw out the old architecture and buy into a new one.  With UCS, you just buy the new chassis and add these fun esoteric servers.  The Fabric Interconnects and the Fabric Extenders on the back will still work the same way.  In essence:  you’ve future-proofed your architecture.

So what if everything goes 100GbE?  Fine, swap out the Fabric Interconnects just like you swap out supervisor modules on the 6500.  The architecture is brilliant.  What do you do with competitor solutions?  You throw them out and start over.  The Fabric Interconnect architecture just keeps building on itself as time goes on.

The Soul of a Server

In addition to the nice architecture of UCS, another compelling feature is how we manage UCS servers.  The idea is a bit different from how you did things in the past.  Back in the day, when you wanted to set up a new server, you would plug it in, hook it up to a crash cart, turn it on, and then press F2 and do cool things like:

  • Tune BIOS settings
  • Set boot order
  • Program iSCSI interfaces
  • Configure RAID

Then you might go through and do several updates.  Of course you wrote it all down, right?  It’s not hard for these things to get out of sync.  And guess what causes problems in application performance?  When the infrastructure you thought was homogeneous isn’t homogeneous.  This is a pain and takes a lot more time than people readily admit.
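
To make “out of sync” concrete, here’s a toy sketch of the drift audit you inevitably end up doing when servers are configured by hand.  The setting names and values are invented for illustration:

```python
# Toy drift check: compare hand-configured servers against the intended baseline.
baseline = {"hyperthreading": True, "boot_order": "pxe,disk",
            "firmware": "2.0(2)", "raid": "RAID1"}

# What you actually find after a year of crash-cart setup (invented data).
fleet = {
    "server-01": {"hyperthreading": True,  "boot_order": "pxe,disk",
                  "firmware": "2.0(2)", "raid": "RAID1"},
    "server-02": {"hyperthreading": False, "boot_order": "pxe,disk",
                  "firmware": "2.0(2)", "raid": "RAID1"},
    "server-03": {"hyperthreading": True,  "boot_order": "disk,pxe",
                  "firmware": "1.4(3)", "raid": "RAID1"},
}

for name, actual in fleet.items():
    drift = {k: (expected, actual[k])
             for k, expected in baseline.items() if actual[k] != expected}
    if drift:
        print(f"{name} drifted: {drift}")  # shows (expected, actual) per setting
```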

With UCS we do it differently.  Gone are the days of pressing F2.  I’ve never pressed F2 while a machine is booting on UCS blades.  Here’s the new way:  there is a place in UCS Manager where all your wildest dreams and fantasies can come true.  This is where we logically define what we want our servers to look like.  We go through and say to ourselves:  in my fantasy world, if I could have a server, I’d want it to PXE boot, then boot to its hard drive.  I’d want its BIOS settings to have hyperthreading enabled.  I’d also like its firmware level to be 2.0(2), and I’d like its RAID to be set to RAID1 mirroring.

That’s exactly how you do it.  You define a template that has all the characteristics of the server you want, and then spawn instances of it (called service profiles).  Those spawns, or service profiles, then possess the hardware of the physical blades you assign them to.  It’s like you create the soul of the server and then give that soul a body.
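
If you’d rather script it than click through UCS Manager, the same idea can be expressed through Cisco’s ucsmsdk Python SDK.  This is a minimal sketch under my own assumptions: the hostname, credentials, and policy names (pxe-then-disk, ht-enabled, and so on) are hypothetical placeholders for boot, BIOS, host firmware, and local disk policies you’d define separately:

```python
# Minimal sketch with Cisco's ucsmsdk: create a service profile template
# (the "soul") and spawn a profile from it.  Names below are placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsServer import LsServer

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical UCSM
handle.login()

# Updating template: later edits propagate to every profile spawned from it.
template = LsServer(
    parent_mo_or_dn="org-root",
    name="esxi-template",
    type="updating-template",
    boot_policy_name="pxe-then-disk",       # PXE first, then local disk
    bios_profile_name="ht-enabled",         # BIOS policy with hyperthreading on
    host_fw_policy_name="fw-2-0-2",         # pin firmware at 2.0(2)
    local_disk_policy_name="raid1-mirror",  # RAID1 mirroring
)
handle.add_mo(template)
handle.commit()

# Spawn a service profile from the template; associating it with a
# physical blade is what gives the soul a body.
profile = LsServer(
    parent_mo_or_dn="org-root",
    name="esxi-host-01",
    type="instance",
    src_templ_name="esxi-template",
)
handle.add_mo(profile)
handle.commit()
handle.logout()
```

Because it’s an updating template, a tweak to the template later ripples out to every profile spawned from it, which is exactly the point of the next paragraph.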

This is pretty cool because now, if you want to change something, you change it at the template, and it can in turn update all of its spawns.  You can create multiple templates for different types of servers, each optimized for the application that the server is supporting.  So you might have a service profile template for ESXi, a template for Oracle, a template for Windows bare metal, or a template for RHEV.  You are in the business of managing server souls.  It’s more noble.

These are just a few of the many benefits of UCS that I thought I’d write down.  There are always situations where other products may be more applicable, but UCS is definitely one to check out.  And just in case you missed it:  UCS is the third-best-selling x86 blade server worldwide, after HP (1) and IBM (2).  In the US, UCS is ranked #2 behind HP.  Not bad for a server that’s only been on the scene since 2009.
