
vSphere 5 Licensing fun

UPDATE (Aug 3, 2011): VMware changed the licensing model addressing many of the disadvantages laid out in this post. Among them: the monster 1TB VM will not be super expensive and the Enterprise+ license went up to 96GB of vRAM.

The new vSphere 5 licensing information has taken the VMware blogger world by storm.  So I thought I might join in the fun.

What’s happened?  VMware announced vSphere 5 on July 12, 2011.  Along with all its cool new features comes a new licensing model that is different from the old one.  So let’s examine how this affects your organization.  Most of the material comes from the vSphere Pricing PDF.

What the licensing change is

Before, the license was per CPU socket, at around $3,495 for the Enterprise Plus license.  Now the cost is based on CPU sockets plus a variable amount of vRAM.  (VMware argues that vRAM is not the same as physical RAM.)  One common example: if you buy 2 Enterprise Plus licenses for a 2-socket system, you get 96GB of vRAM available to use.
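To make the math concrete, here is a quick sketch of the license count for a host, assuming the announced 48GB vRAM entitlement per Enterprise Plus license (the variable names are mine, not VMware's):

```shell
# Licenses needed = max(sockets, ceil(RAM / vRAM-per-license)),
# assuming 48GB of vRAM per Enterprise Plus license.
sockets=2
ram_gb=96
per_license=48
by_vram=$(( (ram_gb + per_license - 1) / per_license ))   # ceiling division
licenses=$(( by_vram > sockets ? by_vram : sockets ))
echo "$licenses licenses needed"   # 2 for a 2-socket, 96GB host
```

Run the same arithmetic with ram_gb=384 and you get the 8 licenses the B250 example below requires.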

Why Licensing had to change

VMware’s white paper spins it as a good thing (no mystery why).  But here is what I think the real reason is: we’ve seen a decade-long trend of adding more CPU cores per socket.  What started as a dual-core movement is, less than 10 years later, at 12 cores per socket.  Imagine the next few years: 24 cores per socket?  64?  128?  Not at all hard to imagine.  The same goes for memory DIMMs: 16GB DIMMs, 32GB DIMMs (those are already here!), 64GB DIMMs… you get the idea.

So what does this mean for a software company like VMware that licenses per CPU socket?  It means that in several years, people won’t need as many CPU socket licenses because they’ll be able to put more VMs on a single physical host.  That means less money for VMware.

VMware, like any business, is in business to make money, and having profits eroded is not a good thing.  So they decided to act now and make sure that innovations in physical hardware don’t erode their earnings.

Who is this change bad for?

This is bad right now for customers who have 2-socket systems with more than 96GB of memory.  Hardware vendors like IBM and Cisco have been showing customers how to save licensing costs by adding more memory to existing 2-socket servers.  For example:

A system like the Cisco B250, which allows up to 384GB of RAM, will now require you to pay for 8 licenses instead of 2.  This is a huge penalty!

A system like the IBM HX5, which allows up to 256GB of RAM in a single-wide blade with two CPU sockets, will require you to pay for 6 licenses instead of 2.

This is also bad for people who over-allocate virtual memory.  It’s common practice to overcommit memory, and since you’re paying for vRAM, you may potentially be paying more.  This is funny because VMware has said in the past that overcommitting memory is one of the good reasons to move to virtualization.

It’s also bad for people with large-memory virtual machines.  VMware announced that you can now create a VM with 1TB of memory!  That’s great news, but consider the cost in vRAM: that’s 21 Enterprise Plus licenses at $3,495 = $73,395!  Probably cheaper to use a physical machine for that one?
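The arithmetic behind that figure, as a sketch (using the post’s count of 21 licenses at 48GB of vRAM each):

```shell
# Cost of the vRAM entitlement needed to run a single 1TB VM
licenses=21     # Enterprise Plus licenses covering ~1TB at 48GB vRAM apiece
price=3495      # dollars per license
total=$(( licenses * price ))
echo "\$$total"   # $73395
```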

This is bad for people who use Citrix XenDesktop with VMware vSphere on the backend.  It’s interesting that there is a completely different pricing model for people using VMware View with vSphere.  I think this licensing issue will force people to make a choice: go with VMware View, or go XenDesktop with the Xen hypervisor on the backend.

The above may not be entirely true.  There is also a new product SKU called vSphere 5 Desktop that licenses 100 desktop VMs for $6,500.  This license supersedes the vSphere license, so if you run desktop VMs, your cost for higher-memory systems like the Cisco B250 would not go up.

Who is this change good for?

Well, for now it’s good for standard 2-socket systems with 96GB of RAM or less.  The HP ProLiant BL2x220c G7 blade allows up to 96GB of RAM per server instance, so each instance needs 2 licenses, or 4 per blade; no change here.  Same with a Cisco B200 that tops out at 96GB using 16GB DIMMs.  The problem is this won’t last long.  As I mentioned before, RAM and CPU core density will only increase, meaning you’ll have to pay more for licenses: 16GB DIMMs will get cheaper, and x86 processors will allow more DIMM slots in the future.  (The Cisco B250 already allows 48 DIMM slots on a 2-socket system.)

The VMware marketing literature states that vRAM can be pooled.  So if one 2-socket system in your datacenter has only 16GB of RAM, its unused vRAM entitlement can be applied to other systems in the pool that have more memory.

A proposed better solution

I’m not opposed to VMware making more money.  As physical server capabilities increase, VMware wants to capture more of that value.  Even though the prices of physical servers will most likely stay flat, it appears VMware’s software cost will now scale with how much you deploy on them.

I would propose vRAM entitlements, decoupled from the CPU sockets, that cost less money.  For example, instead of paying for 8 vSphere licenses for a Cisco B250, why not pay for 2 vSphere 5 licenses plus 6 additional 48GB vRAM entitlements priced at a different rate?  Make these vRents (vRAM entitlements) a quarter of the cost of the socket license.  Here’s the analysis for the B250:

vSphere 4.1 license:  2 licenses: $6,990 (2x$3495)

vSphere 5.0 license: 8 licenses: $27,960 (8x$3495)

my proposal:  vSphere 5.0 licenses + vRents: $12,233 = $6,990 (2x$3,495) + $5,243 (6x$873.75)

I don’t like that it’s double the cost of vSphere 4.1, but at least it’s better than the 4x increase vSphere 5 pricing imposes.
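The comparison above works out as follows (a sketch in whole dollars and cents; the quarter-price vRent rate is my own proposal, not a VMware SKU):

```shell
# Cost comparison for the Cisco B250 (2 sockets, 384GB RAM)
v41=$(( 2 * 3495 ))             # vSphere 4.1: 2 socket licenses
v50=$(( 8 * 3495 ))             # vSphere 5.0: 8 licenses to cover 384GB of vRAM
# Proposal: 2 socket licenses + 6 vRents at 1/4 price (work in cents to avoid rounding)
vrent_cents=$(( 349500 / 4 ))   # $873.75 per vRent
proposal_cents=$(( 2 * 349500 + 6 * vrent_cents ))
echo "v4.1=\$$v41  v5.0=\$$v50  proposal=~\$$(( proposal_cents / 100 ))"
```

That lands the proposal at $12,232.50, which rounds to the $12,233 quoted above.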

The Future?

VMware has changed pricing models before; just ask my friends in the data center space who now have to pay for hosting solutions.  But VMware is so far ahead of everyone else right now, it’s hard to know what kind of backlash they’ll get.  Sure, people will grumble when they have to pay more, but most likely they’ll just pay it.  So Hyper-V, Xen, and KVM: what do you have to say to all this?  It’s your move!

ESXi 4.1 and HP BL460c G6 with Mezzanine card

Had an issue where installing CentOS on these HP blades showed 16 NICs, but installing ESXi 4.1 showed only 8.  16 is the right number, because each Flex-10 port presents 4 FlexNICs.  With 4 of these, I wanted to see some serious bandwidth.  After fumbling around, we finally came to the conclusion that the be2net driver was not loaded on the hypervisor.

My mezzanine card is an HP NC550m Dual Port Flex-10 10GbE BL-c Adapter.  My HP rep said these would not be supported by HP on ESXi 4.1 until November, and that I could drop back to 4.0 or he could try to get me some beta code.

I found that you can just download the driver here.  I tried a similar route by installing the hp-esxi4.1uX-bundle from HP’s website, but that just gave me stuff I didn’t need (like iLO drivers).

The link above is an ISO image.  The easiest way for me to install it on a running machine was to open the ISO on a Linux machine and then copy the files to the ESXi hosts:

# mkdir foo
# mount vmware-esx-drivers-net-be2net_400.2.102.440.0-1vm* foo -o loop
# cd foo/offline-bundle
# scp <offline-bundle zip> vhost001:/

Then you just need to install it.  The only problem is that it involves entering maintenance mode and then a reboot.  Is this Windows XP or something?  We’re just talking about a driver here…

Anyway, SSH to the ESXi 4.1 host (or use VUM if you want to pay $500 instead).  Since I use xCAT, I have passwordless SSH set up:

# vim-cmd hostsvc/maintenance_mode_enter
# esxupdate update --bundle /<offline-bundle zip>
# vim-cmd hostsvc/maintenance_mode_exit
# reboot; exit

After the node reboots, run:

esxcfg-nics -l

and you’ll be able to see all 16 NICs.

Hope that saves you time as it took me a while to figure this out…

My next post will talk about how to integrate this into the kickstart file so you don’t have to do any after-the-install junk.

Expect and HP Virtual Connect configuration

Perhaps you are more patient than I am, or enjoy the HP web interface for Virtual Connect.  I can’t stand it.  It makes me wait too long, and when I have multiple chassis chained together, creating server profiles takes forever.  In fact, I had to create them all, then wait… then find out it was wrong, blow it all away, and start over.

So here’s a little expect script I wrote that does it all for me.  I ran this after I created the networks and profiles, but you could probably script that part as well:

#!/usr/bin/expect -f
set timeout -1  ;# the HP stuff takes too long to return!
spawn ssh myname@<virtual-connect-address>
expect "*?assword:*"
send -- "secretpassword\r"
send -- "\r"

for { set i 1 } { $i < 17 } { incr i 1 } {
  expect "*->"
  send "set enet-connection oa3blade$i 1 network=oa3flexup1\r"
  expect "*->"
  send "set enet-connection oa3blade$i 2 network=oa3flexup2\r"
  expect "*->"
  send "add enet-connection oa3blade$i network=oa3flexup3\r"
  expect "*->"
  send "add enet-connection oa3blade$i network=oa3flexup4\r"
  expect "*->"
  send "assign profile oa3blade$i enc0:$i\r"
}

After doing that, I get all my blades configured without having to wait and type it all out.  I’m hopeful that HP adds ranges or something so that profiles can be assigned to more than one blade at a time.
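If you’d rather review the commands before sending them, the same loop can be generated in plain shell and pasted into the Virtual Connect CLI by hand (the blade, network, and profile names here are from my setup; substitute your own):

```shell
# Emit the Virtual Connect commands for blades 1-16 instead of typing them out
for i in $(seq 1 16); do
  echo "set enet-connection oa3blade$i 1 network=oa3flexup1"
  echo "set enet-connection oa3blade$i 2 network=oa3flexup2"
  echo "add enet-connection oa3blade$i network=oa3flexup3"
  echo "add enet-connection oa3blade$i network=oa3flexup4"
  echo "assign profile oa3blade$i enc0:$i"
done > vc-commands.txt
wc -l < vc-commands.txt   # 80 commands, 5 per blade
```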

Using the HP Array Configuration Utility CLI for Linux

This week I took part in an installation where we got a large number of HP BL460c G6 blades sent to us.  One of the daunting tasks was configuring the RAID.  The normal thing I see is people waiting for the BIOS to pop up, pressing F8 or some other trickery of keystrokes, to finally get to the RAID menu and configure it.  I’m cool doing this one time.  I might even do it twice.  But at some point a man has got to define a limit to doing mundane repetitive tasks that are better done by computers.

A good guy I know, a dude named Johnny, pointed me to this link for hpacucli.  I still don’t know how he found it.  His google-fu is better than mine, I suppose.

This program can be installed on a Linux machine, and then the RAID can be configured.  But you’re telling me: ‘Chicken and egg problem!  How do you run a program on the OS to configure the RAID, when you need an OS installed on the RAID to run the program?’  Simple: you netboot the machines with a stateless image so that the OS lives in memory and doesn’t require hard drives.  Too bad for you that you probably don’t have xCAT.  Because I do, and I use it without reservation.  And since I have it, it took me 5 minutes to create a stateless image that booted on the servers.  (I’ll tell you how to do that at the end of this little writeup.)

Once the machine booted up I ran the command to get the status:

# hpacucli ctrl slot=0 logicaldrive all show status

Nothing there yet, since I hadn’t done anything.  So I took a look at the physical drives:

# hpacucli ctrl slot=0 pd all show              

Smart Array P410i in Slot 0 (Embedded)


 physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
 physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)

So then I just made a RAID1 on those disks:

# hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1

I rebooted the blade into the ESXi 4.1 kickstart and all was bliss.  But then I got even more gnarly.  I didn’t want to log into each blade and run that command, so I used xCAT’s psh to update them all:

# psh vhost004-vhost048 'hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1'
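If you don’t have xCAT’s psh, a plain loop over the same host range works.  Shown here as a dry run that only prints the command; replace echo with ssh "$h" to actually execute it (hostnames follow my vhostNNN scheme, which is an assumption about your environment):

```shell
# Dry run: generate the per-host hpacucli command for vhost004..vhost048
hosts=$(seq -f "vhost%03g" 4 48)
for h in $hosts; do
  echo "$h: hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1"
done
```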

Boom!  Instant RAID.  Now check them:

# psh vhost003-vhost048 'hpacucli ctrl slot=0 logicaldrive all show status'
vhost003:    logicaldrive 1 (136.7 GB, RAID 1): OK
vhost005:    logicaldrive 1 (136.7 GB, RAID 1): OK
vhost004:    logicaldrive 1 (136.7 GB, RAID 1): OK

I’ve used this technique with IBM blades in the past as well.  Now all my blades are installed with ESXi 4.1, and I didn’t have to wait through any nasty BIOS boot menus.  I’ve also automated this in the past by sticking this script in the image’s boot scripts.

xCAT image creation for HP 460c G6

This is fairly easy.  First, create or modify the /opt/xcat/share/xcat/netboot/centos/compute.pkglist so that it looks like this:
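(The original listing didn’t survive here; a minimal pkglist along these lines is the idea — the exact package set is a sketch, adjust for your image:)

```
bash
openssh-server
openssh-clients
dhclient
pciutils
wget
```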


Next, run ‘genimage’.  The trick with the HP blades is to add the ‘bnx2x’ driver.  Once you’re done with this, install the hpacucli RPM into the stateless image:

# rpm -ivh hpacucli-8.60-8.0.noarch.rpm -r /install/netboot/centos5.5/x86_64/compute/rootimg

Once this is done, run:

# packimage -p compute -a x86_64 -o centos5.5

Then a simple:

# nodeset vhost001 netboot=centos5.5-x86_64-compute
# rpower vhost001 boot

will boot the nodes into this image.  That’s it; then you can run the commands above to get the RAID set up.

Bonus points:  Then install ESXi 4.1 with xCAT.

State of xCAT on HP Blades

I had the opportunity this week to test drive xCAT on HP blades.  I had a c7000 chassis with some spiffy BL460c G6s.  The configuration is very straightforward.  We’ve updated the xCAT Install Guide to include how to configure the blades, and I think we’ll be doing a lot more.
Currently on these blades the following seems to work well:

  • getmacs
  • rinv

rpower works, but there are some glitches where it doesn’t return status correctly; we’ll be fixing that.  rpower <noderange> boot (which we rely on a lot) is non-functional, mostly, I think, because rpower off and on don’t always work as expected.

rvitals is not set up either.

It’s been good to see how xCAT functions on many vendors’ platforms.  One of the things that uniquely positions it among data center management solutions is its ability to excel in heterogeneous environments.  I hope this also dispels any myth that xCAT is an IBM product.  While its legacy is IBM, it has evolved into an open source project that can be used by any organization desiring data center management without vendor hardware lock-in.