VIFs in a UCS Environment

First of all, if you stumbled upon this page you may be asking: “What is a VIF?”  A VIF is a virtual interface.  In UCS, it’s a virtual NIC.
Let’s first examine a standard rack server.  Usually you have 2 Ethernet ports on the motherboard itself.  Nowadays, recent servers like the C240 M3 have 4 x 1GbE onboard interfaces, and some servers even have 2 x 10GbE onboard NICs.  That’s all well and good and easy to understand because you can see it physically.
Now let’s look at a UCS blade.  You can’t really see the interfaces because there are no RJ-45 cables that connect to the server; it’s all internal.  If you could see it physically, you’d see that you can have up to 8 x 10Gb physical connections per half-width blade.  Just like a rack-mount server comes with a fixed number of PCI slots, a blade has built-in limits as well.  But Cisco blades work a little differently.  Really, there are two sides, Side A and Side B, each with up to 4 x 10GbE physical connections.  Those 4 x 10GbE links are port-channeled together, so each side looks like one big pipe, sized by whatever cards you put in there.
Over these two big pipes (anywhere from 2 x 10Gb up to 2 x 40Gb) we create virtual interfaces that are presented to the operating system.  That’s what a VIF is.  These VIFs can be used for some really interesting things.
VIF Use Cases
  1. It can be used to present NICs to the operating system.  This makes it so that the operating system thinks it has a TON of real NICs.  The most I’ve ever seen, though, is 8 NICs and 2 Fibre Channel adapters.  (Did I mention that each Fibre Channel adapter counts as a VIF too?)  So 10 is probably the most you would use with this configuration.
  2. It can be used to directly attach virtual machines with a UCS DVS.  This is one flavor of VM-FEX.  Here, UCS Manager acts as the virtual supervisor and the VMs get real hardware for their NICs.  They can still do vMotion and all that good stuff and remain consistent.  I don’t see too many people using this, but the performance is supposed to be really good.
  3. It can be used for VMware DirectPath I/O.  This is where you tie the VM directly to the hardware using the VMware DirectPath I/O bypass method.  (Not the same as the UCS Distributed Virtual Switch I mentioned above.)  The advantage UCS has is that, while you typically cannot do vMotion when you use VMware DirectPath I/O, with UCS you can!
  4. usNIC (future!!!)  The user-space NIC is where we present one of these virtual interfaces directly to user space and create a low-latency connection for our application.  This is something that will be enabled in the future on UCS; it means we dynamically create these VIFs and can hopefully get latencies of around 2-3 microseconds.  This is great for HPC apps, and I can’t wait to get performance data on this.
  5. usNIC in VMs (future!!!)  This is where a user-space application running in a VM gets the same latency as one on a physical machine.  That’s right: this is where we really get VMs doing HPC low-latency connections.
So now that we know the use cases, how can you tell how many virtual interfaces, or VIFs, you get for each server?  Well, it depends on both the hardware and the software.  Each layer allows for growth, but each also imposes its own limits, and that’s what I’m hoping to explain below.
UCS Manager and Operating System Limitations
For UCS Manager 2.1 this is found here.  For other versions, just search for “UCS 2.x configuration limits”.
The maximum number of VIFs per UCS domain today is 2,000.

The document above also shows that for ESXi 5.1 it’s 116 VIFs per host.  The document references UPT and PTS:
UPT – Uniform Pass Thru (this is configured in VMware with DirectPath I/O, use case 3 as I mentioned above)
PTS – Pass-Through Switching (this is the UCS DVS, or use case 2 as I mentioned above)
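To make those two ceilings concrete, here’s a minimal sizing sketch in Python.  The function name and the example numbers are mine, not anything from Cisco; the only facts it encodes are the 116-per-host and 2,000-per-domain figures above, plus the point that every vNIC and every vHBA presented to the OS consumes one VIF.

```python
# Hedged sizing sketch: do the planned vNICs/vHBAs fit under the VIF
# limits mentioned above?  (Illustrative only.)

ESXI_51_VIFS_PER_HOST = 116   # per-host limit from the configuration limits doc
VIFS_PER_UCS_DOMAIN = 2000    # per-domain limit

def vif_budget_ok(hosts, vnics_per_host, vhbas_per_host):
    """Return True if the design stays under both limits.

    Every vNIC and every vHBA presented to the OS consumes one VIF.
    """
    vifs_per_host = vnics_per_host + vhbas_per_host
    total_vifs = hosts * vifs_per_host
    return (vifs_per_host <= ESXI_51_VIFS_PER_HOST
            and total_vifs <= VIFS_PER_UCS_DOMAIN)

# Example: 40 blades, each with 8 vNICs and 2 vHBAs (use case 1 above)
print(vif_budget_ok(hosts=40, vnics_per_host=8, vhbas_per_host=2))  # True
```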
Fabric Interconnect VIF Perspective
Let’s look at it from a hardware perspective.  The ASICs used on the Fabric Interconnects determine the limits as well.
6200
The UCS Fabric Interconnect 6248 uses the “Carmel” Unified Port Controller.  There is one “Carmel” port ASIC for every 8 ports, so ports 1-8 belong to the first Carmel ASIC, and so on.  In general, you want all the links to a given FEX (or IO Module) connected to the same Carmel.
Each Carmel ASIC allows 4096 VIFs, which are divided equally across its 8 switch ports, so that’s 512 VIFs per port.  There are 8 blade slots in each chassis, so that 512 gets divided again across the 8 slots, giving 64 per slot.  One of those is reserved (it’s used for the CIMC), which leaves 63 VIFs per slot for each uplink.  That’s why the equation ends up being 63*n – 2, where n is the number of uplinks per FEX and the 2 are used for management.
Cisco Fabric Interconnect 6200

Uplinks per FEX     Number of VIFs per slot
1                   61
2                   124
4                   250
8                   502
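If you’d rather play with the 6200 math than look it up, here’s a quick Python sketch of the 63*n – 2 formula; it just reproduces the table above.

```python
# Sketch of the 6248 per-slot VIF math: 4096 VIFs per Carmel ASIC / 8 ports
# = 512 per port, / 8 blade slots = 64 per slot, minus one reserved = 63
# usable per slot per uplink, minus 2 for management overall.

def vifs_per_slot_6200(fex_uplinks):
    """VIFs available to one blade slot behind a 6200-series FI."""
    return 63 * fex_uplinks - 2

for uplinks in (1, 2, 4, 8):
    print(uplinks, vifs_per_slot_6200(uplinks))
# Prints 1 61, 2 124, 4 250, 8 502 -- matching the table above.
```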
6100
The 6100 uses the “Gatos” port controller ASIC; each Gatos ASIC manages 4 ports.
Each Gatos ASIC allows 512 VIFs, or 128 VIFs per port (512 VIFs per ASIC / 4 ports).  Each port’s 128 VIFs get divided across the 8 blade slots, so 128 / 8 = 16.  One of those is reserved, so it ends up being only 15 VIFs per slot for each uplink.  That’s why the equation for VIFs per server is 15*n – 2, where n is the number of uplinks per FEX and the 2 are used for management.
Cisco Fabric Interconnect 6100

Uplinks per FEX     Number of VIFs per slot
1                   13
2                   28
4                   58
8                   118 (obviously requires a 2208 IOM)
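The same kind of sketch works for the 6100’s 15*n – 2 formula, and it’s easy to turn it around and ask how many FEX uplinks a given per-blade VIF count would need.  The helper below is just an illustration of that inversion, not any Cisco tool.

```python
# 6100 per-slot math (15 usable VIFs per slot per uplink, minus 2 for
# management), plus a hypothetical helper that inverts the formula.

def vifs_per_slot_6100(fex_uplinks):
    return 15 * fex_uplinks - 2

def min_uplinks_needed(target_vifs, per_uplink=15, overhead=2):
    """Smallest supported uplink count (1, 2, 4 or 8) that meets the target."""
    for uplinks in (1, 2, 4, 8):
        if per_uplink * uplinks - overhead >= target_vifs:
            return uplinks
    return None  # not achievable on this FI/IOM combination

print([vifs_per_slot_6100(n) for n in (1, 2, 4, 8)])  # [13, 28, 58, 118]
print(min_uplinks_needed(10))  # 1 -- a basic 8 vNIC + 2 vHBA profile fits on one uplink
print(min_uplinks_needed(60))  # 8 -- needs all eight uplinks on a 6100
```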
VIFs from the Mezz Card Perspective
The M81KR card supports up to 128 VIFs.  So you can see from the numbers above that with the 6100 and a 2104/2204/2208 IOM, the card is not the bottleneck.
The VIC 1280, which can be placed into the M1 and M2 servers, can do up to 256 VIFs.
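Putting the two perspectives together, the per-blade ceiling is simply whichever limit you hit first: the mezzanine card’s maximum or the fabric math above.  Here’s a tiny sketch of that idea, using only the adapter numbers quoted in this section (the function itself is just illustrative).

```python
# Effective VIFs for one blade = min(adapter limit, FI/uplink limit).

ADAPTER_VIF_LIMIT = {"M81KR": 128, "VIC 1280": 256}  # numbers quoted above

def effective_vifs_per_blade(adapter, fi_vifs_per_slot):
    """Whichever limit is reached first for a single blade slot."""
    return min(ADAPTER_VIF_LIMIT[adapter], fi_vifs_per_slot)

# VIC 1280 behind a 6248 with 8 FEX uplinks (63*8 - 2 = 502):
print(effective_vifs_per_blade("VIC 1280", 502))  # 256 -- the adapter is the cap
# M81KR behind a 6100 with 8 uplinks (15*8 - 2 = 118):
print(effective_vifs_per_blade("M81KR", 118))     # 118 -- the fabric is the cap
```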
Hopefully that clarified VIFs a little and where the bottlenecks are.  It’s important to note as well that the I/O modules themselves don’t limit VIFs; they’re just pass-through devices.
