Category Archives: VMware

Cisco, Cluster on Die, and the E5-2600v3

I was in a meeting where someone noted that VMware does not support the Cluster on Die feature of the new Intel Haswell E5-2600v3 processors.  The question came up: “What performance degradation do we get by not having this supported?  And would it be better to instead use the M3 servers?”

They got this information from Cisco’s whitepaper on recommended BIOS settings for the new B200 M4, C240 M4, and C220 M4 servers.

The snarky answer is: you’re asking the wrong question.  The nice answer is: “The M4 will give better performance than the M3.”

Let’s understand a few things.

1.  Cluster on Die (CoD) is a new feature.  It wasn’t in the previous v2 processors.  So, all things being equal, running the v3 without it just means you don’t get whatever increased ‘goodness’ it provides.

2.  Understand what CoD does.  This article does a good job of explaining it.  Look at it this way: as each socket gets more and more cores with each iteration of Intel’s chips, you have a lot of cores contending for the same memory.  Latency goes up as the cores hunt for the correct data.  CoD carves the socket into regions so that data stays closer to the cores that use it, almost like giving each group of cores its own bank of cache and memory.  This keeps latency down as more cores are added.
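You can see the effect directly if you boot bare-metal Linux on one of these hosts and flip CoD in the BIOS: the socket split shows up in the NUMA topology.  A sketch of what to expect (exact node counts depend on the CPU SKU):

numactl --hardware
# with CoD enabled on a 2-socket host:  available: 4 nodes (0-3)
# with CoD disabled:                    available: 2 nodes (0-1)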

To sum it up: if I take a C240 M3 with 12 cores per processor and compare it to a C240 M4 with 12 cores per processor and CoD disabled, I still have the same memory contention on the M4 cores that I had on the M3.  When VMware eventually supports CoD, you simply get a speed enhancement on top.

 

Beyond Virtual Desktop Infrastructure

I wrote a blog post a few days ago that I wanted to revise because I didn’t get it right.  First of all, please note that everything I write here represents my own thoughts and not those of my employer.

This article is about Virtual Desktop Infrastructure (VDI), end user computing, Desktop as a Service (DaaS), or whatever you want to call it.  It’s very relevant to many organizations today, and there are a lot of great solutions and people very invested in it.  Is this the year of the virtual desktop?  It is to some people!  To other people, it was 4 years ago, and what’s the big deal?  But for some organizations it’s never going to happen, because there’s no use case.

What problems VDI solves

Let’s think about the problems VDI solves.  It gives us our enterprise environment remotely and allows desktop support to control the image that workers get.  But that’s what it does, not the problem it solves.  The problem it solves is giving us our enterprise applications anywhere.  You see, many of us couldn’t care less about having our mandated enterprise environment.  When I worked for my former employer, the first thing I did when I got my corporate-issued laptop was promptly erase their blessed image and install the whole thing from scratch.  Wipe it out, get rid of the employer stuff, and put Linux on it.  Then I was in control.  Then I’d worry about getting the apps on it that I needed and used, and not everything else that I didn’t need.

Desktop support probably didn’t like that, but I never called or used them and they never called or used me.  I got the apps I wanted and whenever I gave a presentation, I never had a little window at the bottom prompt me that my computer needed to reboot in 10 minutes to install some extremely important updates.  We lived separate happy lives.

Desktop support is not evil.  They need to control the operating system image to ensure the applications can run, and run securely.  Plus, they aren’t catering to people like me.  They’re catering to people who just want to get things done and not mess with things like I do.   So when you look at what VDI is today, it’s extending desktop support’s control into a virtual image.  I think this is great!  Then I can run my own image, and whenever I need my corporate apps, I can log into a VDI image.  Perfect.

Why VDI is temporary for most Enterprises

But VDI in most cases is a patch: a temporary solution for getting today’s legacy applications to enterprise users.  Here’s where it works very well: if you have an application that was written for Windows XP or Windows 7, then creating a virtual desktop to serve that app can be very effective.  But applications have changed.  Most of the applications I use are web-based.  I still use Excel and PowerPoint, but I now store those files in Box, which my company provides as a secure place to put them.  (Think: Dropbox for corporations.)

My desktop support is now application support.  They make applications available to me, and I can use whatever device I want to access them.   They now have even greater control: when my corporate support team updates the configuration tool that I use to create bills of materials (BoMs) for my customers, they control the upgrades and revisions.  I never have to do it on my laptop.  Even if I liked the old way better, I have no control.  Application support now has more control than ever and ensures no one is running older apps.  It’s great!  (You may have seen people complain about a new Facebook layout in the past.  There’s nothing they can do, because it’s not an app they run on their desktop.)

Applications continue to migrate this way.  No one is building the next great desktop application.  They’re looking to make applications that run anywhere, on anything.  Even Microsoft Office runs on my iPad!

In fact, if you look at it, desktop support (newly rechristened as application support) is actually getting more control while I feel like I’m getting more control!  What a great arrangement for two Type-A personalities.

Skipping VDI 

I thought about this a little over the last few years, but it wasn’t made super clear to me until about 2 weeks ago, when I happened upon a visit to a little-known school district in the mountains of Utah.  Davis County School District is the most advanced public school district I have ever seen.  I was blown away.  We started out talking about their applications and data center plans.  Mark Reid, the IT director, and several of his coworkers have been at the school district for the last 30 years.  It’s a testament to what the power of vision and long-standing partnerships can achieve.  From the very beginning they’ve been writing their own applications to deliver IT services to the district.

Unlike many of the IT shops that I work with, Davis County employs a staff of developers who churn out their own applications for the school district.  From payroll to financials to grades, they are doing it.  In fact, they even have an application, myDSD, that lets you log in from the web or even from your iPhone or Android to check grades, report that your child will be absent from school, and do pretty much anything else you might need from a school district.  Wow.   I bet your school district doesn’t have anything like that.

The applications speak to each other through different software layers and protocols but they all come back to an Oracle RAC cluster.  This is where all the data is consolidated and backed up. They’ve already got Office 365 for the students out there.

During our meeting, one of the people in the room asked if Davis County School District was thinking about VDI.  Before Mark could answer, I already knew:  They didn’t have legacy apps.  There was no reason to deliver a virtual desktop.  All the applications could be accessed from the web or iOS/Android clients.  You see, if you already have apps that can live anywhere, you don’t need to serve a special desktop image.

The real problem they need to solve is a way to stitch together distributed data centers and develop a plan to source workloads to different clouds.

Where VDI will always be important

VDI is still important, and will remain important, to many organizations.  After all, it sure beats installing and managing a bunch of desktops in a computer lab.  People still have legacy apps, and there are license restrictions that may force you to run them on a blessed desktop image.

But one area where VDI use is really growing is sharing powerful GPUs for heavy graphics applications.  As data continues to explode, visualizing it will be ever more important, and I don’t see an obvious replacement or better way to do this.

Implications 

How do you think this transition of applications, from being centered on a desktop to being cloud-enabled, will affect the future?  Back around 2006, when I was a remote worker at IBM, they announced we could no longer expense our Internet service.  They reasoned that most homes had it anyway, and besides, it was a great way for IBM to cut costs.   HP followed, and so did others.   Soon all the tech companies were doing it.  Most companies don’t pay for remote access even though a significant number of employees work from home.  (Source: my friends.)

Today most companies issue laptops to their knowledge workers.  It’s great, and they refresh every few years.  But could there come a time when employers say: you already have a device (computer, iPad, etc.), so we don’t need to pay for that anymore; just use our VPN service to get your applications and you’re good?   Perhaps instead they’d give us an allowance to spend on a machine.

I don’t think this will happen at my employer soon, because a nice laptop is a nice perk that makes employees happy.  But what about universities and schools?  Would they eventually just shut down the computer labs and mandate that all students bring their own?  Probably not for the engineering/art programs I discussed above, where they need GPUs.  But my friend’s kid went to a private school 4 years ago where it was mandated that every student get a MacBook.    The days may not be far off.

So next time you are evaluating whether or not this is the year of the virtual desktop, first look at your strategy for delivering applications anywhere.  Perhaps resources should be diverted towards new applications or BYOD initiatives to get off the legacy applications that are tying you down.  Remember:  Nobody wants or cares about your enterprise desktop image (nobody in their right mind).  They just want applications that work and allow them to get things done.

 

Configure VMware from scratch without Windows

One of the things that (still) bugs me about vCenter is that it is very tied to the Windows operating system.  You have to have Windows to set it up, and trying to get by without Windows is still somewhat difficult.  In my lab I’m trying to get away from Windows.  I have xCAT installed to PXE-boot UCS blades to do what I want.  It’s great, and it’s automated.  But when I installed 8 nodes as ESXi hosts, I quickly realized I needed vCenter to demonstrate this and use it the way others would.

That requires vCenter.  VMware has had the vCenter appliance out for a few years now.  It runs on SLES and comes preconfigured.  The only problem is installing it when you have no vSphere client, because today those clients are only made for the Windows operating system.  How do you get around this?

ovftool was the thing I found that did the job for me.  I found the link by reading the ever-prolific virtuallyGhetto post on deploying ovftool on ESXi.  Since I had Linux, installing ovftool on the ESXi host wasn’t necessary for me.  Instead I just installed it on my Linux server (with some trouble, since the installer deploys a stub and you have to make sure you don’t modify the file).

I ran the command:

ovftool -ds=NFS1 VMware-vCenter-Server-Appliance-5.0.5201-1476389_OVF10.ova vi://root:password@node01
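If you want to be more deliberate about the deployment, ovftool has a few more knobs.  Something like this (a sketch; the VM name and thin-disk mode here are my own choices, not requirements):

ovftool --acceptAllEulas --name=vcsa01 -dm=thin -ds=NFS1 --powerOn VMware-vCenter-Server-Appliance-5.0.5201-1476389_OVF10.ova vi://root:password@node01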

After that, I watched my DHCP server and saw that it gave the vCenter appliance the IP address of 172.20.200.1.  Hopefully you have DHCP or you might be hosed.

Then, after finding the docs, I intuitively opened my web browser to https://172.20.200.1:5480 (everyone knows that port number, right?).  I logged in with user ‘root’ and password ‘vmware’ and started the auto setup.  After changing the IP address and restarting the appliance, I was pretty golden.

Once it’s configured, log into the appliance at https://172.20.1.101:9443/vsphere-client/ and be stoked that you have Flash Player already installed and working.  Oh, you didn’t have Flash Player installed on your Linux server?  That sucks, I didn’t either.  Guess that’s another hoop we have to jump through.  But wait: it turns out Flash 11.2.0 is the last Flash released for Linux, and VMware requires Flash 11.5.  Nice.

https://communities.vmware.com/message/2319263

At this point I just copied over a Windows VM that I had lying around and started managing it from there.  The moral of the story is that you can’t do a Windows-free VMware environment.  Sure, I could have done fancy scripting and managed it all remotely with some of their tools, but if I’m going to do all that, why should I pay for VMware?  I’d be better off just doing straight native KVM.  YMMV.

Nexus 1000v – A kinder gentler approach

One of the issues skeptical server administrators have with the 1000v is that they don’t like the management interface being dependent on a virtual machine.  True, the 1000v can be configured so that even if the VSM gets disconnected/powered off/blown up, the system ports still forward traffic.  But to them, that is voodoo.  Most say: give me a simple access port so I can do my business.

I’m totally on board with this level of thinking.  After all, we don’t want any Jr. Woodchuck network engineer to be taking down our virtual management layer.  So let’s keep it simple.

In fact!  You may not want a Jr. Woodchuck networking engineer to be able to touch the production VLANs for your production VMs.  Well, here’s a solution for you: you don’t want to do the networking, but you don’t want the networking guy to do the networking either.  So how can we make things right?  Why not ease into it?  The diagram below presents, at the NIC level, how you can configure your ESXi hosts:

Here is what’s so great about this configuration: the VMware administrator can keep things “business as usual” with the first 6 NICs.

Management A/B teams up with vmknic0, which has IP address 192.168.40.101.  This is the management interface and is used to talk to vCenter.  It is not controlled by the Nexus 1000v.  Business as usual here.

IP Storage A/B teams up with vmknic1 with IP address 192.168.30.101. This is to communicate with storage devices (NFS, iSCSI).  Not controlled by Nexus 1000v.  Business as usual.

VM Traffic A/B team up.  This is a trunking interface, and all kinds of VLANs pass through here.  It is controlled either by a virtual standard switch or by VMware’s distributed virtual switch.  Business as usual.  You as the VMware administrator don’t have to worry about anything a Jr. Woodchuck Nexus 1000v administrator might do.

Now, here’s where it’s all good.  With UCS you can create another vmknic2 with IP address 192.168.10.101.  This is our link that is managed by the Nexus 1000v.  In UCS we would configure it as a trunk port with all kinds of VLANs enabled over it.  It can use the same vNIC template that the standard VM-A and VM-B used.  Same VLANs, etc.

(Aside: some people would be more comfortable with 8 vNICs; then you can do vMotion over its own native VMware interface as well.  In my lab this is 192.168.20.101.)

The difference is that this IP address, 192.168.10.101, belongs on our Control & Packet VLAN.  This is a back-end network that the VSM communicates with the VEM over.  Now, the only VMkernel interface that needs to be controlled by the Nexus 1000v is the 192.168.10.101 one, and it is isolated from the rest of the virtualization stack.  So if we want to move a machine over to the other virtual switch, we can do that with little problem.  A simple edit of the VM’s configuration can change it back.

Now testing can coexist with a production environment, because the VMs being tested run over the 1000v.  You can install the VSG, DCNM, the ASA 1000v, and all that good vPath stuff, and test it out.

From the 1000v, I created a port profile called “uplink” that I assign to these two interfaces:

port-profile type ethernet uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,501-512
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 505
  state enabled

By making 505 a system VLAN, I ensure that this control/packet VLAN stays up even if the VSM is unavailable.  For the vmknic (192.168.10.101) I also created a port profile for control:

port-profile type vethernet L3-control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 505
  no shutdown
  system vlan 505
  state enabled

This allows me to migrate the vmknic over from being managed by VMware to being managed by the Nexus 1000v.  My VSM has an IP address on the same subnet as vCenter (even though it’s Layer 3):

n1kv221# sh interface mgmt 0 brief

--------------------------------------------------------------------------------
Port    VRF    Status    IP Address       Speed    MTU
--------------------------------------------------------------------------------
mgmt0   --     up        192.168.40.31    1000     1500
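For completeness, the Layer 3 control piece on the VSM side lives under svs-domain.  It looks roughly like this (a sketch from memory; the domain ID is just an example, and the exact syntax varies a bit by 1000v release):

svs-domain
  domain id 100
  svs mode L3 interface mgmt0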

Interestingly enough, when I do the sh module vem command, it shows up with the management interface:

Mod  Server-IP       Server-UUID                           Server-Name
---  --------------  ------------------------------------  --------------------
3    192.168.40.102  00000000-0000-0000-cafe-00000000000e  192.168.40.102
4    192.168.40.101  00000000-0000-0000-cafe-00000000000f  192.168.40.101

On the VMware side, too, it shows up with the management interface (192.168.40.101), even though I only migrated the 192.168.10.101 vmknic over.

This configuration works great.  It provides a nice opportunity for the networking team to get with it and start taking back control of the access layer.  And it gives the VMware/server team a clear path to move VMs back to a network they’re more familiar with if they’re not yet comfortable with the 1000v.

Let me know what you think about this set up.

Storage: The new sexy

I was fortunate enough to attend VMworld 2012 in San Francisco this year!  It was a great privilege, and I can’t thank Cisco enough for sending me.  There were lots of pretty cool announcements from a UCS standpoint, like UCS Central and the vCenter plugin (both demoed in the Cisco booth), and cool announcements from VMware about Horizon, the Cloud Suite, etc.   The sessions were great, and the after-hours events were always entertaining (although the food at the VMware party sucked compared to 2010).  But among the madness of the myriad messages, one stood out to me more than anything else: flash storage.

Remember when storage was boring?  EMC, NetApp, blah blah blah.  Well, those days are gone.  It’s not so simple anymore.  The sheer number of storage vendors on the showroom floor was a clear illustration that storage is still an open frontier, like the Wild West, or even like a new season of American Idol… (OK, maybe that last one was a bad analogy.)

Sure, EMC still leads in market share and NetApp is the fastest growing, but there is plenty of room for disruption.  There were several really good sessions on best practices for storage.  One of my favorite quotes came from Duncan Epping (of Yellow Bricks and Clustering Deepdive author fame):

We always blame the network for our problems, but it’s usually the storage that is at fault.

Indeed, we see this in practice quite a lot.  People can’t buy more UCS because they’re constrained by their storage.  The network seems easy enough to blame, but as server administrators get more comfortable with networking (mostly because they have to, since the network access layer now lives inside the server at the vSwitch), they’re starting to get that right much more often than the storage.

One of my favorite sessions was a Gartner Storage Best Practices session.  They had the following great messages:

“IOPS resolution is a multi-dimensional problem that may not be best addressed in the storage array”

This is why putting Fusion-io cards in a server can help with a tiered approach.  (BTW, these were announced to be inside UCS blades now and should be available before the end of 2012.)  It also explains why companies like Atlantis Computing, with their ILIO product, can offer big performance gains by offloading some of the work the storage array has to do, as well as saving space.

This brings up another point from the Gartner session when talking about VDI (aka SHVD: server-hosted virtual desktop):

“Storage costs present the #1 barrier to entry”

If you’re wondering when the year of the virtual desktop will come, it’s when the organization has to buy new storage.  Over 40% of data center budget for new equipment goes into storage; it pretty much pushes server and network equipment to the fringes.  That’s why I don’t think there will ever be a ‘year’ of the virtual desktop.  Instead, we’re at the beginning of the ‘decade of the virtual desktop’.

The tiered approach works as follows: you have fast disks (SSDs), slower disks (FC, SAS), and then the slowest disks: SATA spinning at 7200 RPM.  Ideally you want the data you use the most (the master copy of a Windows VM) sitting on the fast storage, swap files on the mid tier, and lesser-used workloads sitting down at the bottom.

The issue with the tiered approach (which NetApp doesn’t really have, except maybe Flash Cache, which is read-only) is that you have to place the workloads where you think they will be.  And there’s a good chance you’ll get it wrong.

And that’s one reason there’s such a huge market for storage products that solve the problem of how to store and manage virtual machine files.  It used to be that the OS sat on the server, and local disk was the best you could do.  Now, with all of those VMs contending for IOPS, the storage is the bottleneck.  The new sexy bottleneck.  Sexy because there’s a lot of money to be made if you can convince people your solution is the best.

Using flash storage seems to be the sexy way to entice customers.  SSD arrays were all the rage at VMworld.  Tiered solutions that mix SSDs with FC/SAS/SATA were also quite popular.   I thought I’d go through some of the storage solutions I had a chance to visit with at VMworld.  Most of these are lesser known, so it will be interesting to watch how the space changes in the next year:

Violin Memory

The Violin 6000 Series flash memory array can do 1 million IOPS with 4GB/s of bandwidth in a 3RU space.  The secret sauce is that they build their own flash memory controllers instead of using the standard SSDs that most flash array vendors use.  It probably isn’t the cheapest, but it’s hard to beat in terms of speed.  This is the storage you buy when money is no object and you just want fast.  Just imagining this connected to some Nexus 5000s and a UCS full of B200 M3s makes me all tingly inside.

Whiptail

I didn’t see Whiptail on the floor, but I have helped 2 UCS customers configure it and get it running.  It’s cheaper and wicked fast, but offers little intelligence as to what is happening.  It just crams a bunch of SSD drives into the array and lets you go to town.  For some people, speed is all you need, and they don’t care about fancy dashboards.

Tintri

I had a great conversation with a great guy from Tintri at the vFlipCup event on Monday night… wish I could remember his name.  (BTW, I represented Team JBOD with @CiscoServerGeek, and we apparently wrote checks that our team could not cache.)   …But I digress… Tintri seems to be a mix of Atlantis Computing and Whiptail.  Instead of having a VM do the block caching like ILIO does, that intelligence takes place on the controller.  Couple that with an array of SSDs and life looks pretty good.  This seems better than Atlantis in that the chance of a VM going down is greater than the chance of the storage array going down.

Pure Storage

Two of the bigger booths from storage companies I hadn’t heard of belonged to Pure Storage and Tegile.

Pure Storage is a flash array vendor that seemed to have the most sophisticated protocol support, including iSCSI.  (BTW, for the record, I really don’t like iSCSI.  When you have UCS, Fibre Channel is so easy, and with the UCS Del Mar release later this year you have all the components you need for FCoE without buying any separate equipment.)

Tegile

Tegile is a hybrid array that supports tiered storage.  The value is the deep integration with SSDs.  I would look at this as a less confusing EMC offering.  It has SAN and NAS capabilities as well as data deduplication.  Pretty sweet system.

Summary

It seems like all the providers have some great niches, and I think most people would be happy with any of these storage solutions.  I’d hate to leave this post without tipping my hat to EMC and NetApp, who I work with and who do a tremendous job.  There’s a reason they both have so many customers: they build great products.  I should also call out Hitachi storage.  Their own team admits they suck at marketing, but in terms of performance and reliability for mission-critical apps, it’s hard to beat their rock-solid solutions.  It’s truly a company built and run by engineers.  That’s one reason their customers and I like them so much.

So if this post makes you feel all warm inside, that’s because storage is the new sexy and it is good for you to look at on company time.

An adventure in Powershell and the vSphere PowerCLI

I’ve always found a good way to get started with a new language is to dive head-first into some problem you’re working on, then ask the Internet via a search engine how to do each thing you want to do.

Today I started for the first time with PowerShell. Here’s the problem I was trying to solve:

I have a lot of VMs that I want to bring up for a class that I’m giving.  There could be between 10 and 50 people at any given class.  What I need to do is clone a master VM from a template, change its MAC address to something I have reserved in my DHCP server, and then power it up.

All of this can be done via vCenter.  But it’s a slow and painful point-and-click-yourself-to-death process.  Automation is the way to get this done.

All of this can be done via xCAT.  But I figured why not give it a try.  Other people can live without xCAT.  Maybe I can too.

I have a history with Perl.  Yes, I have lots of skeletons in the closet with that language, so I was going to do all of this with Perl.  But I figured that since I was going to be writing PowerShell scripts anyway for the UCS emulator portion, why not just do it all with the same thing?  So I gave it a shot.

First off, the Windows editors suck.  I had to stick with Vim because there’s nothing better for me.  Let’s not even argue about that.  +1 for me for retaining my dignity.

Next, after installing the VMware PowerCLI tool, it was pretty easy.

CreateVMs.ps1

Here’s the script to clone 9 VMs from a template called UCSPEMaster.  I just change it depending on how many I need.
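It boils down to something like this (a sketch of the approach, assuming you’ve already run Connect-VIServer against your vCenter):

$template  = Get-Template -Name "UCSPEMaster"
$vmhost    = Get-VMHost -Name "192.168.1.4"
$datastore = Get-Datastore -Name "datastore1"
$folder    = Get-Folder -Name "UCS Emulators"

foreach ($i in 1..9) {
    $name = "UCSPE{0:D2}" -f $i    # UCSPE01, UCSPE02, ...
    New-VM -Name $name -Template $template -VMHost $vmhost `
           -Datastore $datastore -Location $folder -DiskStorageFormat Thin
}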

Notice that each machine will be called UCSPEXX, where XX is the range I specify.  That way, if 4 more people walk into the class after I’ve configured 14, I can just do 15..18.

I also put all of them on the same host (192.168.1.4), the same datastore (datastore1), the same Folder (UCS Emulators), and made them thin clones.

If you have a cluster running Storage DRS (vSphere 5.0+), then you don’t have to specify the datastore; Storage DRS will put it where it sees fit.

ConfigVMs.ps1

The only thing I need to do now is change the MAC address to something that I have reserved.  That way, I can tell the user to just log into the IP address that I’ve set up beforehand.  I’ve configured 60 IP addresses so that I’m ready for a big class.

Most of the script is taken up with getting the number off the end of the VM name.  Since each VM is named UCSPE01 through UCSPE60, it grabs that number, converts it to hex, and uses it as the last octet of the adapter’s MAC address.  This works as long as I don’t have more than 255 VMs.
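The guts of it look something like this (a sketch; 00:50:56 is VMware’s static-MAC prefix, but the rest of the reserved range here is just an example from my lab):

foreach ($vm in Get-VM -Name "UCSPE*") {
    $num = [int]($vm.Name -replace '\D','')    # "UCSPE42" -> 42
    $mac = "00:50:56:00:00:{0:x2}" -f $num     # hex suffix, good up to 255 VMs
    Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -MacAddress $mac -Confirm:$false
}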

I could have put a Start-VM on the end of this.  Generally, that’s just a PowerShell one-liner:
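Get-VM -Name "UCSPE*" | Start-VM -Confirm:$false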

removeVMs.ps1

The last script will just remove these VMs.  As I tweak around with them, or as people in the class tweak around with them, I just want to erase them and start fresh for the next class.  This is fairly easy:
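Roughly (a sketch, warts and all; note the flaw I mention next):

foreach ($vm in Get-VM -Name "UCSPE*") {
    Stop-VM -VM $vm -Confirm:$false
    Remove-VM -VM $vm -DeletePermanently -Confirm:$false
}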

The one thing I still need to add is a check to skip the power-off when a VM isn’t running; it throws some nasty errors if the VM isn’t powered on.  But I’ll do that some other time.

I’ll get this lab working with PowerShell.  It’s not a bad language.  It’s another tool in the handbag.  I still prefer command-line scripting with Bash or Perl, but every now and then it’s fun to go over and see how the other side lives.  Now, back to xCAT.

vSphere 5 Licensing fun

UPDATE (Aug 3, 2011): VMware changed the licensing model addressing many of the disadvantages laid out in this post. Among them: the monster 1TB VM will not be super expensive and the Enterprise+ license went up to 96GB of vRAM.

The new vSphere 5 licensing information has taken the VMware blogger world by storm.  So I thought I might join in the fun.

What’s happened?  VMware announced vSphere 5 on July 12, 2011.  Along with all its cool new features comes a new licensing model that is different from the old one.   So let’s examine how this affects your organization.  Most of the material I got from the vSphere Pricing PDF.

What the licensing change is

Before, the license was per CPU socket, at around $3,495 for the Enterprise Plus license.  Now the cost is based on CPU sockets plus a variable amount of vRAM.  (VMware argues that vRAM is not the same as physical RAM.)   One common example: if you buy 2 Enterprise Plus licenses for a 2-socket system, you get 96GB of vRAM available to use.

Why Licensing had to change

VMware’s white paper spins it as a good thing (no mystery why).  But here is what I think the real reason is: we’ve seen over the last decade a trend of adding more CPU cores per socket.  What started out as a dual-core movement is, less than 10 years later, at 12 cores per socket.  Imagine the next few years: 24 cores per socket?  64?  128?  Not all that hard to imagine.  Also consider memory DIMMs: 16GB DIMMs, 32GB DIMMs (those are already here!), 64GB DIMMs… you get the idea.

So what does this mean for a software company like VMware that licenses per CPU socket?  It means that in several years, people won’t need as many socket licenses, because they’ll be able to put more VMs on each physical host.  That means less money for VMware.

VMware, like any business, is in business to make money, and having profits eroded is not a good thing.  So they decided to act now and make sure that innovations in physical hardware do not erode their earnings.

Who is this change bad for?

This is bad right now for customers who have 2-socket systems with more than 96GB of memory.  Hardware vendors like IBM and Cisco have been showing customers how they can save licensing costs by adding more memory to existing 2-socket servers.  For example:

A system like the Cisco B250, which allows up to 384GB of RAM, will now require you to pay for 8 licenses (384GB ÷ 48GB of vRAM per license) instead of 2.  This is a huge penalty!

A system like the IBM HX5, which allows up to 256GB of RAM in a single-wide blade with two CPU sockets, will require you to pay for 6 licenses (256 ÷ 48, rounded up) instead of 2.

This is also bad for people who over-allocate virtual memory.  It’s common practice to overcommit memory, and since you’re paying for vRAM, you may potentially be paying more.  This is funny, because VMware has said in the past that overcommitting memory is one of the good reasons to move to virtualization.

It’s also bad for people with large-memory virtual machines.  VMware announced that you can now create a VM with 1TB of memory!  That’s great news, but consider the cost in vRAM: that’s 21 Enterprise Plus licenses at $3,495 = $73,395!  Probably cheaper to use a real machine for that one?
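If you want to play with your own configurations, the math is easy to script.  A quick PowerShell sketch (assuming the Enterprise Plus numbers of 48GB of vRAM and $3,495 per license, and assuming you want to allocate all of the host’s physical RAM as vRAM; remember vRAM actually pools across hosts, so this per-host view is the worst case):

function Get-LicenseCost ($Sockets, $RamGB) {
    # you need at least one license per socket, plus enough to cover the vRAM
    $licenses = [Math]::Max($Sockets, [Math]::Ceiling($RamGB / 48))
    "{0} licenses = `${1:N0}" -f $licenses, ($licenses * 3495)
}
Get-LicenseCost -Sockets 2 -RamGB 384    # Cisco B250: 8 licenses = $27,960
Get-LicenseCost -Sockets 2 -RamGB 256    # IBM HX5:    6 licenses = $20,970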

This is bad for people who use Citrix XenDesktop with VMware vSphere on the backend.  It’s interesting that there is a completely different pricing model for people using VMware View with vSphere.  I think this licensing issue will force people to make a choice: go with VMware View, or go XenDesktop with the Xen hypervisor on the backend.

The above may not be entirely true.  There is also a new product SKU called vSphere 5 Desktop that licenses 100 desktop VMs for $6,500.  This license supersedes the vSphere license, so if you run desktop VMs, your cost for higher-memory systems like the Cisco B250 would not go up.

Who is the change good for?

Well, for now it’s good for standard 2-socket systems that have 96GB of RAM or less.  The HP ProLiant BL2x220c G7 blade, which allows up to 96GB of RAM per server instance, will require 2 licenses per instance, so 4 per blade.  No change here.  Same with the Cisco B200, which goes up to 96GB with 16GB DIMMs.  The problem is this won’t last for long.  As I mentioned before, RAM and CPU core density will only increase, meaning you’ll have to pay for more licenses.  16GB DIMMs will get cheaper, and x86 processors will allow more DIMM slots in the future.  (The Cisco B250 already has 48 DIMM slots on a 2-socket system.)

The VMware marketing literature states that the vRAM can be pooled.  So if one 2-socket system in your datacenter has only 16GB of RAM, its licenses still contribute a full 96GB of vRAM to the pool, which you can consume on other systems that have more memory.

A proposed better solution

I’m not opposed to VMware making more money.  As physical server capabilities increase, VMware wants to get more money out of their utilization.  Even though the prices of physical servers will most likely remain the same, VMware software costs will now scale with the number of VMs deployed.

I would propose vRAM entitlements, decoupled from the socket licenses, that cost less money.  For example, instead of paying for 8 vSphere licenses for a Cisco B250, why not make us pay for 2 vSphere 5 licenses plus 6 48GB vRAM entitlements priced at a different rate?  Make these vRents (vRAM entitlements) one quarter the cost of the socket license.  Here’s the analysis for the B250:

vSphere 4.1 license:  2 licenses: $6,990 (2x$3495)

vSphere 5.0 license: 8 licenses: $27,960 (8x$3495)

my proposal: vSphere 5.0 licenses + vRents: $12,233 = $6,990 (2x$3,495) + $5,243 (6x$873.75)

I don’t like that it’s double the cost of the previous vSphere 4.1, but at least it’s better than the 400% increase we got instead.

The Future?

VMware has changed pricing models before.  Just ask my friends in the data center space who now have to pay for hosting solutions.  But VMware is so far ahead of everyone else right now, it’s hard to know what kind of backlash they will get.  Sure, people will grumble when they need to pay more, but most likely they’ll just pay it.  So Hyper-V, Xen, and KVM, what do you have to say to all of this?  It’s your move!

VMworld 2011 Sessions Posted

I am super excited for VMworld 2011 and I’ve already got my tickets, hotel (at the Venetian!) and dates ready to roll!  I was scrolling through the long list of sessions and found tons that seem really interesting to me.

I submitted two sessions:

#2815 “Secrets of ESXi Automated Installations”

and

#2642 “Pimp my ESXi Kickstart Install”

They will both cover mainly the same thing: getting ESXi onto bare metal, with a look at some free tools to do it.  But the focus will mostly be on the configuration parameters inside the %post and %firstboot sections and how to use them.  We’ll also talk about the mod.tgz secrets that I use to deploy stateless ESXi and how it works when the machine reboots.  I actually didn’t think VMware would approve the title “Pimp my ESXi Kickstart Install”, but hey, it got there!  So I guess we’ll see whether I have to deliver either of them or not.  Either way, presenting or attending, I will enjoy VMworld 2011.

If you don’t vote for me, there were some others that looked pretty cool.  I’m sure #1940, “10 Best Free Tools for vSphere Management in 2011”, will make the list.  (And I’m bummed that they’ll probably not look at xCAT, which to me has more power behind it than most tools… the problem is it’s too undocumented…)

#1956, “The ESXi Quiz Show”, is something I’m sure I’ll enjoy, and I’ll probably attend a few on PowerCLI to see what the fuss is about.  (I’m a Linux guy, so this shows how open-minded I am.)  And #2964, “Building products on top of VMware”, looks really good.  I think I’d demand they present it if they renamed it “Build products on top of VMware so that VMware will want to buy your company”.

Anyway, happy voting!

Virtual Machine Song

We wrote a song called “Virtual Machine” that can be freely downloaded here. (right click to download)

Some of my friends and I have been writing and performing songs for over 20 years.  Occasionally we’ll get together and work on something even though I’m separated from them by about 1,000 miles.  This song we had sitting around for a while and we didn’t get a chance to redo the vocals and add a bunch of other stuff we had originally planned, but I figured just get it out there and we’ll move on to the next hit song.

I’d love to play this song live, along with a few other cloud/virtual songs we’ve got on the back burner, at VMworld 2011.  Even better if we could do it on stage with Vittorio Viarengo.  But for now I’m just happy that it’s complete enough to listen to.

The song is probably too tech-heavy, but I think it’s still fun.  Here are the silly lyrics:

I am a virtual machine, an instance of an operating system.  Just a program running a program.  To enable all your applications.

I’m a thin clone of the virtual machine and so we are tightly tethered.  We share the same disk image and write changes to a delta file

I’m a thick clone of a virtual machine I’m an independent image we run together on the same hardware and provide maximum utilization.

Baby you can try my virtual machine all you need is a hypervisor.  You can run it in the cloud or in the data center baby you could use some virtual love

I don’t want to disrespect your physical machine I just want to make you lots of money.  You’ve been idling all the time acting so inefficiently, baby you could use some virtual love.

We are virtual machines!  We can stop and suspend…

We can migrate to other systems or save states with snapshot files.

You’ve got lots of problems:  9 nines and system failures.  You got SAAS and PAAS to deliver on an SLA with penalty clauses.  You need virtual machines!  We can solve your desktop issues.  We can reduce your capital expenses and consolidate your infrastructure.

Baby you can try my virtual machine all you need is a hypervisor, you can run it in the cloud or in your datacenter baby you could use some virtual love.

I don’t want to disrespect your physical machine I just want to make you lots of money.  (I’m a free bit baby!)  You’ve been idling all the time acting so inefficiently baby you could use some virtual love…

Maybe you could take this virtual machine and shove it up inside your datacenter.  You can consolidate the system on commodity hardware baby you could use some virtual love…

I don’t want to disrespect your physical machine, I just want to make you lots of money.  You’ve been idling all the time acting too inefficiently, baby you got to use this virtual love…

ESXi 4.1 and HP BL460c G6 with Mezzanine card

I had an issue where I would install CentOS on these HP blades and see 16 NICs, but when I installed ESXi 4.1 I only saw 8.  16 is the right number, because each Flex-10 port presents 4 FlexNICs, and with 4 of those I wanted to see some serious bandwidth.  After fumbling around, we finally came to the conclusion that the be2net driver was not loaded on the hypervisor.

My Mezzanine card is a HP NC550m Dual Port Flex-10 10GbE BL-c Adapter.  My HP rep said that these were not going to be supported by HP on ESXi 4.1 until November and that I could drop back to 4.0 or he could try to get me some beta code.

I found that you can just download the driver here.  I tried a similar route by installing the hp-esxi4.1uX-bundle from HP’s website, but that just gave me stuff I didn’t need (like iLO drivers).

The link above is an ISO image.  The easiest way for me to install it on a running machine was to open the ISO on a Linux machine and then copy the files to the ESXi hosts:
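It went roughly like this (paths and the bundle name are from memory, so adjust to whatever is actually on the ISO):

mount -o loop be2net-driver.iso /mnt/iso
scp /mnt/iso/offline-bundle/*.zip root@esxhost:/tmp/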

Then you just need to install it.  The only problem is that this involves entering maintenance mode and then a reboot.  Is this Windows XP or something?  We’re just talking about a driver here…

Anyway, SSH to the ESXi 4.1 host (or use VUM if you want to pay $500 instead).  Since I use xCAT, I have passwordless SSH set up:
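The install step looks something like this (a sketch; esxupdate is the 4.x-era tool for offline bundles, and your bundle name will differ):

ssh root@esxhost vim-cmd hostsvc/maintenance_mode_enter
ssh root@esxhost esxupdate --bundle=/tmp/offline-bundle.zip update
ssh root@esxhost reboot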

After the node reboots you can run:
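esxcfg-nics -l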

and you’ll be able to see all 16 NICs.

Hope that saves you time as it took me a while to figure this out…

My next post will talk about how to integrate this into the kickstart file so you don’t have to do any after-the-install junk.