Category Archives: BL460c G6

ESXi 4.1 and HP BL460c G6 with Mezzanine card

Had an issue where I would install CentOS on these HP blades and be able to see 16 NICs, but when I installed ESXi 4.1 I only saw 8. 16 is the right number because each 10GbE port carves out 4 FlexNICs, and with 4 of those ports I wanted to see some serious bandwidth. After fumbling around, we finally came to the conclusion that the be2net driver was not loaded on the hypervisor.

My Mezzanine card is a HP NC550m Dual Port Flex-10 10GbE BL-c Adapter.  My HP rep said that these were not going to be supported by HP on ESXi 4.1 until November and that I could drop back to 4.0 or he could try to get me some beta code.

I found that you can just download the driver here.  I tried a similar route by installing the hp-esxi4.1uX-bundle from HP’s website, but that just gave me stuff I didn’t need (like iLO drivers).

The link above is an ISO image.  The easiest way for me to install it on a running machine was to open the ISO on a Linux machine and then copy the files to the ESXi hosts:
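A sketch of that copy step, with a placeholder ISO filename, bundle path, and hostname:

```shell
# Loop-mount the driver ISO and copy the offline bundle over to the host.
# The ISO name, bundle path, and hostname are placeholders, not the real ones.
mkdir -p /mnt/iso
mount -o loop be2net-esx41.iso /mnt/iso
scp /mnt/iso/offline-bundle/*.zip root@esxhost01:/tmp/
umount /mnt/iso
```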

Then you just need to install it.  The only problem is that it involves entering maintenance mode and then a reboot.  Is this Windows XP or something?  We’re just talking about a driver here…

Anyway, SSH to the ESXi 4.1 host (or use VUM if you want to pay $500 instead).  Since I use xCAT, I have passwordless SSH set up:
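On ESXi 4.1, an offline bundle installs with esxupdate; a sketch, with a placeholder hostname and bundle filename:

```shell
# Enter maintenance mode, install the be2net bundle, and reboot.
# Hostname and bundle filename are examples only.
ssh esxhost01 'vim-cmd hostsvc/maintenance_mode_enter'
ssh esxhost01 'esxupdate --bundle=/tmp/be2net-offline-bundle.zip update'
ssh esxhost01 'reboot'
```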

After the node reboots you can run:
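On ESXi that’s the standard NIC listing:

```shell
# List the physical NICs the hypervisor sees; with be2net loaded,
# all 16 vmnics should show up.
esxcfg-nics -l
```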

and you’ll be able to see all 16 NICs.

Hope that saves you time as it took me a while to figure this out…

My next post will talk about how to integrate this into the kickstart file so you don’t have to do any after-the-install junk.

Expect and HP Virtual Connect configuration

Perhaps you are more patient than I am or enjoy the HP web interface for Virtual Connect.  I can’t stand it.  It makes me wait too long, and when I have multiple chassis chained together, creating server profiles takes forever.  In fact, I had to create them all, then wait… then find out it was wrong, blow it all away, and start over.

So here’s a little expect script that I wrote that does it all for me.  This was done after I created the networks and profiles… but you can probably add that in as well:
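A minimal sketch of such a script, assuming the Virtual Connect CLI is reachable over SSH; the hostname, password, enclosure name, profile names, and bay count are all made up:

```expect
#!/usr/bin/expect -f
# Sketch: assign VC server profiles to device bays without the web UI.
# Hostname, credentials, profile names, and bay numbering are placeholders.
set timeout 30
spawn ssh Administrator@vcmgr01
expect "password:"
send "secret\r"
expect "->"
# One profile per device bay, named esx_bay1..esx_bay16 here.
for {set bay 1} {$bay <= 16} {incr bay} {
    send "assign profile esx_bay$bay enc0:$bay\r"
    expect "->"
}
send "exit\r"
expect eof
```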

After doing that I get all my blades configured without having to wait and type it all out.  I’m hopeful that HP adds ranges or something similar in the future so profiles can be assigned to more than one blade at a time.

Using the HP Array Configuration Utility CLI for Linux

This week I took part in an installation where we got a large number of HP BL460c G6 blades sent to us.  One of the daunting tasks was configuring the RAID.  The normal thing I see is people waiting for the BIOS to pop up, pressing F8 or some other trickery of keystrokes to finally get to the RAID menu and configure it.  I’m cool doing this one time.  I might even do it two times.  But at some point a man has got to define a limit to doing mundane repetitive tasks that are better done by computers.

A good guy I know is a dude named Johnny.  He pointed me to this link for the hpacucli.  I still don’t know how he found it.  His google-fu is better than mine, I suppose.

This program can be installed on a Linux machine and then the RAID can be configured.  But you’re telling me: ‘Chicken and egg problem!  How do you run a program on the OS to configure the RAID when you need an OS installed on the RAID to run the program?’  Simple: you netboot the machines with a stateless image so that the OS runs in memory and doesn’t require hard drives.  Too bad for you that you probably don’t have xCAT.  ’Cause I do, and I use it without reservation.  And since I have it, it took me 5 minutes to create a stateless image that booted up on the servers.  (I’ll tell you how to do that at the end of this little writeup.)

Once the machine booted up I ran the command to get the status:
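With hpacucli installed, the status check looks like this:

```shell
# Show the configuration of every detected Smart Array controller.
hpacucli ctrl all show config
```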

Probably nothing happening, since I hadn’t done anything yet.  So I took a look at the physical drives:
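Something like the following, assuming the controller sits in slot 0:

```shell
# List the physical drives attached to the controller in slot 0.
hpacucli ctrl slot=0 pd all show
```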

So then I just made a RAID1 on those disks:
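A sketch of that; the drive IDs (1I:1:1 and 1I:1:2) are typical for a BL460c’s two internal disks but may differ on your hardware:

```shell
# Create a RAID1 logical drive from the two internal disks.
hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1
```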

I rebooted the blade into the ESXi 4.1 kickstart and all was bliss.  But then I got even more gnarly: I didn’t want to log into each blade and run that command.  So I used xCAT’s psh to update them all:
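Roughly like this, where “blades” is a hypothetical xCAT node group and the drive IDs are the same illustrative ones as above:

```shell
# Run the same RAID1 create on every blade in the "blades" node group.
psh blades hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1
```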

Boom!  Instant RAID.  Now check them:
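Again via psh, with the same hypothetical node group:

```shell
# Verify the new logical drive shows up on every blade.
psh blades hpacucli ctrl all show config
```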

I’ve used this technique with IBM blades in the past as well.  Now all my blades are installed with ESXi 4.1 and I didn’t have to wait through any nasty BIOS boot up menus.  I’ve also automated this in the past by sticking this script in the

xCAT image creation for HP BL460c G6

This is fairly easy.  First, create or modify the /opt/xcat/share/xcat/netboot/centos/compute.pkglist so that it looks like this:
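A minimal CentOS pkglist for a stateless compute image looks roughly like this (the exact package set here is illustrative, not the original list):

```text
# /opt/xcat/share/xcat/netboot/centos/compute.pkglist
bash
kernel
nfs-utils
dhclient
openssh-server
openssh-clients
wget
vim-minimal
ntp
```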

Next, run ‘genimage’.  The trick with the HP blades is to add the ‘bnx2x’ driver.  Once you’re done with this, install the hpacucli RPM in the stateless image:
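That can be done with rpm’s --root flag pointed at the image’s root filesystem; the image path and RPM version below are examples:

```shell
# Install hpacucli into the stateless image's root filesystem.
rpm --root /install/netboot/centos5.4/x86_64/compute/rootimg \
    -ivh hpacucli-8.50-6.0.noarch.rpm
```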

Once this is done, run:
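That would be packimage, which bundles the modified root image for netboot (the osver and profile arguments are examples):

```shell
# Pack the root image so nodes can netboot it.
packimage -o centos5.4 -p compute -a x86_64
```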

Then a simple:
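Presumably nodeset, again using a hypothetical “blades” node group:

```shell
# Flag the nodes to boot the stateless image on their next network boot.
nodeset blades netboot
```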

will install the nodes to this image.  That’s it, then you can run the commands above to get the RAID set up.

Bonus points:  Then install ESXi 4.1 with xCAT.

State of xCAT on HP Blades

I had the opportunity this week to test drive xCAT on HP blades. I had a c7000 chassis with some spiffy BL460c G6s. The configuration is very straight forward. We’ve updated the xCAT Install Guide to include how to configure the blades and I think we’ll be doing a lot more.
Currently on these blades the following seems to work well:

  • getmacs
  • rinv

rpower works, but there are some glitches where it doesn’t return status correctly.  We’ll be fixing that to make sure it does.  rpower <noderange> boot (which we rely on a lot) is non-functional.  (Mostly, I think, because rpower off and on don’t work all the time as expected.)

rvitals is not set up either.

It’s been good to see how xCAT is able to function on many vendors’ platforms.  I think one of the things that uniquely positions it among data center management solutions is its ability to excel in heterogeneous environments.  I hope this also dispels any myth that xCAT is an IBM product.  While its legacy is with IBM, it has evolved into an open source project that can be used by any organization that wants data center management without vendor hardware lock-in.