{"id":698,"date":"2013-02-15T12:43:50","date_gmt":"2013-02-15T18:43:50","guid":{"rendered":"http:\/\/benincosa.com\/blog\/?p=698"},"modified":"2014-11-19T11:24:32","modified_gmt":"2014-11-19T17:24:32","slug":"vifs-in-a-ucs-environment","status":"publish","type":"post","link":"https:\/\/benincosa.com\/?p=698","title":{"rendered":"VIFS in a UCS environment"},"content":{"rendered":"<div>First of all you may be asking if you stumbled upon this page: \u00a0&#8220;What is a VIF?&#8221;. \u00a0A VIF is a Virtual interface. \u00a0In UCS, its a virtual NIC.<\/div>\n<div><\/div>\n<div>Let&#8217;s first examine a standard rack server. \u00a0Usually you have 2 ethernet ports on the mother board itself. \u00a0Now days, the recent servers like the <a href=\"http:\/\/www.cisco.com\/en\/US\/prod\/collateral\/ps10265\/ps10493\/ps12370\/data_sheet_c78-700629.html\">C240 M3<\/a> have 4 x 1GbE onboard interfaces. \u00a0Some servers even have 2x10GbE onboard NICs. \u00a0That&#8217;s all well and good and easy to understand because you can see it physically.<\/div>\n<div><\/div>\n<div>Now let&#8217;s look at a UCS blade. \u00a0You can&#8217;t really see the interfaces because there are no RJ-45 cables that connect to the server. \u00a0Its all internal. \u00a0If you could see it physically, then you&#8217;d see that you could add up to 8x10Gb physical NICs per half width blade. \u00a0Just like a rack mount server comes with a fixed amount of PCI slots, a blade has built in limits as well. \u00a0But Cisco blades work a little different. \u00a0Really, there are 2 sides: \u00a0Side A and Side B, each with up to 4x10GbE physical connections. \u00a0And those 4x10GbE are port channeled together, so it looks like one big pipe depending on what cards you put in there.<\/div>\n<div><\/div>\n<div>With these two big pipes (that are between two 10Gb and two 40Gb) we create virtual interfaces over these that are presented to the operating system. \u00a0That&#8217;s what a VIF is. \u00a0These VIFs can be used for some really interesting things.<\/div>\n<div><\/div>\n<div><strong>VIF Use Cases<\/strong><\/div>\n<div>\n<ol>\n<li>It can be used to present NICs to the operating system. \u00a0This makes it so that the operating system thinks it has a TON of real NICs. \u00a0The most I&#8217;ve ever seen though is 8 NICs and 2 Fibre Channel adapters. \u00a0(Did I mention that Fibre Channel counts as a VIF?) \u00a0So 10 is probably the most you would use with this configuration.<\/li>\n<li>It can be used to directly attach virtual machines with a UCS DVS. \u00a0This is also one version of VM-Fex. \u00a0Here, UCS Manager acts as the Virtual supervisor and the VMs get real hardware for their NICs. \u00a0They can do vMotion and all that good stuff and remain consistent. \u00a0I don&#8217;t see too many people using this, but the performance is supposed to be really good.<\/li>\n<li> It can be used for VMware DirectPath IO. \u00a0This is where you tie the VM directly to the hardware using VMware DirectPath IO bypass method. \u00a0(Not the same as the UCS Distributed Virtual Switch I mentioned above.) \u00a0The advantage UCS has is that \u00a0you typically cannot do vMotion when you do VMware DirectPath IO. \u00a0With UCS, you can!<\/li>\n<li>USNIC (future!!!) \u00a0Unified NIC is where we can present one of these virtual interfaces directly to user space and create a low latency connection in our application. 
\u00a0This is something that will be enabled in the future on UCS, but it means we dynamically create these and can hopefully get latencies around 2-3 microseconds. \u00a0This is great for HPC apps and I can&#8217;t wait to get performance data on this.<\/li>\n<li>USNIC in VMs. \u00a0(future!!!) \u00a0This is where a user space application running in a VM will have the same latency as a physical machine. \u00a0That&#8217;s right. \u00a0This is where we really get VMs doing HPC low latency connections.<\/li>\n<\/ol>\n<\/div>\n<div>So now that we know the use cases, how can you tell how many virtual interfaces or VIFs you have for each server? \u00a0Well, it depends on the hardware and the software. \u00a0You see, they all allow for growth, but some instances have limitations. \u00a0So that&#8217;s what I&#8217;m hoping to explain below.<\/div>\n<div><\/div>\n<div><strong>UCS Manager Limitations and Operating Systems Limitations<\/strong><\/div>\n<div>For 2.1 this is found <a href=\" http:\/\/www.cisco.com\/en\/US\/docs\/unified_computing\/ucs\/sw\/configuration_limits\/2.1\/b_UCS_Configuration_Limits_2_1.html#reference_0BE56D8916744A39A75C004B3EB411AF\">here<\/a>. \u00a0For other versions of UCS manager, just search for &#8220;UCS 2.x configuration limits&#8221;.<\/div>\n<div><\/div>\n<div>The Maximum VIFS per UCS domain today is\u00a0<strong>2,000<\/strong><\/div>\n<div><strong><br \/>\n<\/strong><\/div>\n<div>The document above also shows that for ESX 5.1 its\u00a0<strong><em>116<\/em><\/strong> per host. \u00a0The document references UPT and PTS.<\/div>\n<div>UPT &#8211; Uniform Pass Thru (this is configured in VMware with direct Path IO, use case 3 as I mentioned above)<\/div>\n<div>PTS &#8211; Pass through Switching (this is UCS DVS, or use case 2 as I mentioned above)<\/div>\n<div><\/div>\n<div><strong>Fabric Interconnect VIF Perspective<\/strong><\/div>\n<div>Let&#8217;s look at it from a hardware perspective. \u00a0The ASICs used on the Fabric Interconnects determine the limits as well.<\/div>\n<div><\/div>\n<div><strong>6200<\/strong><\/div>\n<div>The UCS Fabric Interconnect 6248 uses the &#8220;Carmel&#8221; Unified Port Controller. \u00a0There is 1 &#8220;Carmel&#8221; port ASIC for every 8 ports. \u00a0So ports 1-8 are part of the first Carmel ASIC, etc. \u00a0In general, you want the FEX (or IO Module) connected to the same Carmel.<\/div>\n<div>Each Carmel ASIC allows 4096 VIFs which are equally divided into all 8 switch ports. \u00a0Therefore, 512 VIFS per port. \u00a0Since one of those VIFs is dedicated to the CIMC, that gives 511 VIFS per port. \u00a0Consider that there are 8 slots in each chassis, so you would further divide that up between the 8 blade slots, so that&#8217;s 64 max in each slot. \u00a0Some are reserved, so it ends up being 63 VIFs per slot. 
**Fabric Interconnect VIF Perspective**

Let's look at it from a hardware perspective. The ASICs used on the Fabric Interconnects determine the limits as well.

**6200**

The UCS 6248 Fabric Interconnect uses the "Carmel" unified port controller. There is one Carmel ASIC for every 8 ports, so ports 1-8 belong to the first Carmel, and so on. In general, you want all the uplinks from a FEX (IO Module) connected to the same Carmel.

Each Carmel ASIC allows 4,096 VIFs, divided equally across its 8 switch ports, which gives 512 VIFs per port. One of those VIFs is dedicated to the CIMC, leaving 511 per port. Each chassis has 8 blade slots, so that per-port budget is further divided among the slots, which comes out to roughly 64 per slot; some are reserved, so it ends up being 63 usable VIFs per slot. That's why the equation ends up being 63*n - 2, where n is the number of uplinks per FEX and the 2 are used for management.

Cisco Fabric Interconnect 6200

| Uplinks per FEX | VIFs per slot |
|-----------------|---------------|
| 1               | 61            |
| 2               | 124           |
| 4               | 250           |
| 8               | 502           |

**6100**

The 6100 uses the "Gatos" port controller ASIC, and each Gatos manages 4 ports. Each Gatos ASIC allows 512 VIFs, or 128 VIFs per port (512 VIFs per ASIC / 4 ports). Each port's budget is then divided among the 8 blade slots: 128 / 8 = 16. Some of those are reserved, so it ends up being only 15 VIFs per slot. That's why the equation for VIFs per server is 15*n - 2 (again, the 2 are used for management).

Cisco Fabric Interconnect 6100

| Uplinks per FEX | VIFs per slot                   |
|-----------------|---------------------------------|
| 1               | 13                              |
| 2               | 28                              |
| 4               | 58                              |
| 8               | 118 (obviously requires a 2208) |

**VIFs from the Mezz Card Perspective**

The M81KR card supports up to 128 VIFs, so you can see from the tables above that with the 6100 and the 2104/2204/2208 IO Modules it's not the bottleneck. The VIC 1280, which can be placed in the M1 and M2 servers, can do up to 256 VIFs.

Hopefully that clarified VIFs a little and showed where the bottlenecks are. It's also important to note that the IO Modules themselves don't limit VIFs; they're just pass-through devices.
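To make the arithmetic above concrete, here is a small sketch that reproduces the per-slot numbers from both tables and then picks the effective ceiling once the adapter's own limit is considered. The per-slot formulas (63*n - 2 and 15*n - 2) and the adapter limits (128 for the M81KR, 256 for the VIC 1280) are the figures quoted above; the function and variable names are just illustrative.

```python
# Reproduce the per-slot VIF numbers from the tables above and find the
# effective ceiling once the adapter's own VIF limit is taken into account.

# Usable VIFs contributed per FEX uplink, per the Carmel/Gatos math above.
PER_SLOT_BASE = {"6200": 63, "6100": 15}
ADAPTER_LIMIT = {"M81KR": 128, "VIC 1280": 256}

def vifs_per_slot(fi_family, uplinks_per_fex):
    """Per-slot VIF ceiling imposed by the Fabric Interconnect ASICs."""
    return PER_SLOT_BASE[fi_family] * uplinks_per_fex - 2  # 2 reserved for management

def effective_limit(fi_family, uplinks_per_fex, adapter):
    """The real bottleneck is whichever limit is lower: FI or adapter."""
    return min(vifs_per_slot(fi_family, uplinks_per_fex), ADAPTER_LIMIT[adapter])

if __name__ == "__main__":
    for family in ("6200", "6100"):
        print("Cisco Fabric Interconnect %s" % family)
        for uplinks in (1, 2, 4, 8):
            print("  %d uplink(s) per FEX -> %d VIFs per slot"
                  % (uplinks, vifs_per_slot(family, uplinks)))
    # Example: a 6248 with 4 uplinks per FEX and an M81KR -> min(250, 128) = 128,
    # so in that case the mezz card, not the Fabric Interconnect, is the bottleneck.
    print("Effective limit, 6200 + 4 uplinks + M81KR:",
          effective_limit("6200", 4, "M81KR"))
```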