Category Archives: Nexus 1000v

Distributed Data Centers

My thoughts on cloud computing and the future of the data center have changed a bit in the last three years.  When I first started working on a cloud computing project for a large bank in America back in 2008, I was convinced that soon every enterprise would create its own private cloud using xCAT (or something).  Then I thought they would all use OpenStack instead.  But either way, I figured every organization would build its own private cloud.  This has not panned out.  Not even close, and it's six years later.

Eventually, I thought, all enterprises would migrate to one public cloud provider; it never occurred to me that people would see fit to use more than one.  I did form a concept of the InterCloud back then, so I'm not too far off the mark.  But my vision is evolving and becoming clearer.  I finally see where IT is going.  (Or at least I think I do.)

In my small sector of the world, hardly anybody has a private cloud.  And when I say private cloud, I mean self-service portals with completely automated provisioning.  Yeah, that's just not happening.  The truth is, I don't think it will for most organizations.  There's not enough need.  In most organizations, the only people who need VMs in a self-service portal are the VMware admins themselves, and they are savvy enough to right-click and make that happen without all your bloated self-provisioning tools, thank you very much.

What I am seeing is that more and more organizations are going to the public cloud.  This started out as more of a shadow-IT initiative, but more of the people I work with have in fact embraced it in central IT.  But it's managed as a one-off, and people are still trying to figure it out.  People aren't ditching their own data centers; just as they're not ditching their mainframes, large enterprises will always keep some on-premises footprint for IT services.

The other thing that seems completely obvious now is that people will want to use more than one public cloud provider, because different public clouds specialize in different things.  For example, I might run Exchange/Office 365 on Azure, but run some development applications on AWS.  Similarly, I might have a backup-as-a-service contract with SunGard.  But I may not trust my data to anyone but my own six-node Oracle RAC cluster sitting in my very own data center.  Can you see where this leads us?

Central IT is now responsible for sourcing workloads.  The data center is distributed.  My organization's data is all over the place.  My problem now is managing the sprawl: getting visibility into where it is and making sure I'm using it most effectively.

Another misconception I see is that people think using two or more public clouds means VMs move between data centers.  Today, that's pretty impractical.  Migrating VMs between data centers takes too long, even setting aside the networking problems.  Besides, when you think that way, you're treating the machines in your data center as pets instead of cattle, which is where applications are headed.  So forget about that for now.

Instead, focus on the real issue that needs to be solved.  And this is where I think Cisco can make big things happen.  That is:  How do you connect distributed data centers?

The Nexus 1000v InterCloud (or InterCloud Fabric, as I think Cisco is calling it now) starts down this road.  It allows VMs in a public cloud to communicate with our own cloud using the same Layer 2 address schema.  This is pretty cool, and a good start, but we'll need more.  For example: we might keep our database servers in our own data center (no self-service portal here), then develop apps that are hosted in public clouds.  The application servers will need to communicate with each other and with the database, and the different applications may be in different clouds.  The real issue is how they talk effectively, securely, and seamlessly.  That is the big issue that needs to be solved with distributed data centers.

Is this where you think we’re headed?  I feel like for the first time in five years I finally get what’s happening to IT.  So I’ll take comfort in that for now, until things change next month.


1000v in and out of vCenter

I was setting up the Nexus 1110 (aka virtual service appliance, aka VSA) with one of our best customers, and as we were doing it the appliance rebooted, never to come up again without completely reinstalling the firmware from remote media.  Most of this was probably my fault because I didn't follow the docs exactly, and I think we can now move forward, but it made me realize I hadn't written down an important way to reconnect an orphaned 1000v to a new virtual supervisor module (VSM).
Here’s the situation: when you lose the 1000v VSM that connects into vCenter, there is no way to remove the virtual distributed switch (VDS or DVS) that the 1000v presented to vCenter.  You can remove hosts from the DVS, but you can’t get rid of the switch itself.
In the above picture, there is my DVS.  If I try to remove it, I get the following error:
In my case, I didn’t want to get rid of it, I just wanted to reconnect a new VSM that I created with the same name.  But this operation can be used to remove the 1000v DVS from vCenter as well.
So here’s how you do it:
Adopt an Orphaned Nexus 1000v DVS
Step 1.  Install a VSM
I usually do mine manually, so that it doesn’t try to register with vCenter or one of the hosts.  Don’t do any configuration other than an IP address; just get it to where you can log in.  Once you can log in, if you did create an SVS connection, you’ll need to disconnect.  In mine, I made an SVS connection and called it vcenter.  To disconnect from vCenter and erase the SVS connection, run:
# config
# svs connection vcenter
# no connect
# exit
# no svs connection vcenter
Trivia: What does SVS stand for?  “Service Virtual Switch”
Step 2.  Change the hostname to match what is in vCenter
Looking at the error picture above, you can see there is a folder named nexus1000v with a DVS named nexus1000v.  To make vCenter think that this new 1000v is the same one, we need to change the name to match what is in vCenter
nexus1000v-a# conf
nexus1000v-a(config)# hostname nexus1000v
Step 3.  Build SVS Connection
Since we destroyed (or never built) the SVS connection in step 1, we’ll need to build one and try to connect.  The SVS connection should have the same name as the one you created originally.  So if you called your SVS connection ‘vCenter’, ‘VCENTER’, or ‘VMware’, you’ll need to use that same name.  I named mine ‘vcenter’, so that’s what I use.  Similarly, you’ll have to set the datacenter-name to the same value as before.
nexus1000v(config)# svs connection vcenter
nexus1000v(config-svs-conn)# remote ip address port 80
nexus1000v(config-svs-conn)# vmware dvs datacenter-name Lucky Lab
nexus1000v(config-svs-conn)# protocol vmware-vim
nexus1000v(config-svs-conn)# max-ports 8192
nexus1000v(config-svs-conn)# admin user n1kUser
nexus1000v(config-svs-conn)# connect
ERROR:  [VMware vCenter Server 5.0.0 build-455964] Cannot create a VDS of extension key Cisco_Nexus_1000V_1169242977 that is different than that of the login user session Cisco_Nexus_1000V_125266846. The extension key of the vSphere Distributed Switch (dvsExtensionKey) is not the same as the login session’s extension key (sessionExtensionKey)..
Notice that when I tried to connect I got an error.  This is because the extension key in my Nexus 1000v (created when it was installed) doesn’t match that of the old one.  The nice thing is that I can actually change it, and that is how I make this new 1000v take over for the old one.

Step 4.  Change the extension key to match what is in vCenter.
To see what the current extension-key is (or the offending key is) run the following command:
nexus1000v(config-svs-conn)# show vmware vc extension-key
Extension ID: Cisco_Nexus_1000V_125266846
That is the one we need to change.  You can see the extension key that vCenter wants from the error message in the previous step: ‘Cisco_Nexus_1000V_1169242977’.  So we need to make the extension key on our 1000v match that.  No problem:
nexus1000v(config-svs-conn)# no connect
nexus1000v(config-svs-conn)# exit
nexus1000v(config)# no svs connection vcenter
nexus1000v(config)# vmware vc extension-key Cisco_Nexus_1000V_1169242977

Now we should be able to connect and run things as before.
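With the key changed, we can re-create the SVS connection and connect again.  A sketch, using the same settings as before (substitute your own vCenter IP, datacenter name, and admin user):

nexus1000v(config)# svs connection vcenter
nexus1000v(config-svs-conn)# protocol vmware-vim
nexus1000v(config-svs-conn)# vmware dvs datacenter-name Lucky Lab
nexus1000v(config-svs-conn)# connect

This time the connect should succeed, since our extension key now matches the one the DVS was created with.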

Step 5. (Optional) Remove the 1000v

If you’re just trying to remove the 1000v because you had an orphaned one sitting around, we can now connect to vCenter one more time and remove the DVS:

nexus1000v(config)# svs connection vcenter
nexus1000v(config-svs-conn)# connect
nexus1000v(config-svs-conn)# no vmware dvs
This will remove the DVS from the vCenter Server and any associated port-groups. Do you really want to proceed(yes/no)? [yes] yes

Now, the orphaned Nexus 1000v is gone. If you want to remove it from your vCenter plugins, you will have to navigate the managed object browser and remove the extension key. Not a big deal. By opening a web browser to the host that manages vCenter, you can “Browse objects managed by vSphere”. From there go to “content”, then “ExtensionManager”. To unregister the 1000v plugin, select “UnregisterExtension” and enter the vCenter extension key. This will be the same extension key that you used in step 4 (in our example: Cisco_Nexus_1000V_1169242977).

Hope that helps!

Quick SPAN with the Nexus 1000v

Today I thought I’d take a look at creating a SPAN session on the 1000v to monitor traffic.  I found it really easy to do!  SPAN is one of those things that takes longer to read about and understand than to actually configure.  I find that true with a lot of Cisco products: FabricPath, OTV, LISP, etc.

SPAN is “Switched Port Analyzer”.  It’s basically port monitoring: you capture the traffic going through one port and mirror it on another.  This is one of the benefits you get out of the box with the 1000v, so that the network administrator no longer has a big black box of VMs.

To follow the guide, I installed three VMs: iperf1, iperf2, and xcat.  The idea was that I wanted to monitor traffic between iperf1 and iperf2 on the xcat virtual machine.

On the xcat virtual machine I created a new interface and put it in the same VLAN as the other VMs.  These were all on my port-profile called “VM Network”.  I created it like this:

vlan 510
port-profile type vethernet "VM Network"
  vmware port-group
  switchport mode access
  switchport access vlan 510
  no shutdown
  state enabled

Then, using vCenter, I edited the VMs to assign them to that port group. (Remember: VMware port-group = Nexus 1000v port-profile.)

On the Nexus 1000v, running the command:

# sh interface virtual

Port   Adapter        Owner            Mod  Host
Veth1  vmk3           VMware VMkernel  4
Veth2  vmk3           VMware VMkernel  3
Veth3  Net Adapter 1  xCAT2            3
Veth4  Net Adapter 2  iPerf2           3
Veth5  Net Adapter 3  xCAT             3
Veth6  Net Adapter 2  iPerf1           3

Allows me to see which vethernet is assigned to which VM. In this SPAN session, I decided I wanted to monitor the traffic coming out of iPerf1 (Veth6) on the xCAT VM (veth5).
No problem:

Create The SPAN session

To do this, we just configure a SPAN session:

n1kv221(config)# monitor session 1
n1kv221(config-monitor)# source interface vethernet 6 both
n1kv221(config-monitor)# destination interface vethernet 5
n1kv221(config-monitor)# no shutdown

As you can see from above, I’m monitoring both received and transmitted packets on vethernet 6 (iPerf1). Those packets are mirrored to vethernet 5 (xCAT). If you have an IP address on xCAT’s vethernet 5, you’ll find you can no longer ping it: the port is in SPAN mode. Notice also that the monitoring session is off by default; you have to turn it on.

Now we want to check things out:

n1kv221(config-monitor)# sh monitor
Session  State  Reason             Description
-------  -----  -----------------  --------------------------------
1        up     The session is up
n1kv221(config-monitor)# sh monitor session 1
   session 1
   type                : local
   state               : up
   source intf         :
       rx              : Veth6
       tx              : Veth6
       both            : Veth6
   source VLANs        :
       rx              :
       tx              :
       both            :
   source port-profile :
       rx              :
       tx              :
       both            :
   filter VLANs        : filter not specified
   destination ports   : Veth5
   destination port-profile :

Now, you’ll probably want to watch the port, right? I just installed Wireshark on my xcat VM. (It’s Linux: yum -y install wireshark and ride.) To watch from the command line, I ran:

[root@xcat ~]# tshark -D
1. eth0
2. eth1
3. eth2
4. eth3
5. any (Pseudo-device that captures on all interfaces)
6. lo

This gives me the interfaces. By matching MAC addresses, I can see that eth2 (device 3 in the tshark output) is the one I have on the Nexus 1000v.

From here I run:

[root@xcat ~]# tshark -i 3 -R "eth.dst eq 00:50:56:9C:3B:13"
0.000151 -> ICMP Echo (ping) reply
1.000210 -> ICMP Echo (ping) reply
2.000100 -> ICMP Echo (ping) reply

Then I get a long list of fun stuff to monitor. By pinging between iperf1 and iperf2 I can see all the traffic that goes on. Since there was nothing else on this VLAN it was pretty easy to see. Hopefully this helps me or you troubleshoot down the road.

Nexus 1000v – A kinder gentler approach

One of the issues skeptical server administrators have with the 1000v is that they don’t like the management interface being subject to a virtual machine.  The 1000v can be configured so that even if the VSM gets disconnected, powered off, or blown up, traffic on system ports is still forwarded.  But to them that is voodoo.  Most say: give me a simple access port so I can do my business.

I’m totally on board with this level of thinking.  After all, we don’t want any Jr. Woodchuck network engineer to be taking down our virtual management layer.  So let’s keep it simple.

In fact, you may not want the Jr. Woodchuck networking engineer to be able to touch the production VLANs for your production VMs either.  Well, here’s a solution for you: you don’t want to do the networking, but you don’t want the networking guy to do the networking either.  So how can we make things right?  Why not just ease into it?  The diagram below presents, at the NIC level, how you can configure your ESXi hosts:

Here is what is so great about this configuration: the VMware administrator can keep things “business as usual” with the first six NICs.

Management A/B teams up with vmknic0.  This is the management interface, used to talk to vCenter.  It is not controlled by the Nexus 1000v.  Business as usual here.

IP Storage A/B teams up with vmknic1.  This is used to communicate with storage devices (NFS, iSCSI).  Not controlled by the Nexus 1000v.  Business as usual.

VM Traffic A/B team up.  This is a trunking interface, and all kinds of VLANs pass through it.  It is controlled either by a virtual standard switch or by VMware’s distributed virtual switch.  Business as usual.  You as the VMware administrator don’t have to worry about anything a Jr. Woodchuck Nexus 1000v administrator might do.

Now, here’s where it all comes together.  With UCS you can create another vmknic2.  This is our link that is managed by the Nexus 1000v.  In UCS we would configure this as a trunk port with all kinds of VLANs enabled over it.  It can use the same vNIC template that the standard VM-A and VM-B used.  Same VLANs, etc.

(Aside: some people would be more comfortable with eight vNICs; then vMotion can run over its own native VMware interface.)

The difference is that this vmknic’s IP address belongs on our Control & Packet VLAN.  This is a back-end network that the VSM uses to communicate with the VEM.  Now, this is the only VMkernel interface that we need to have controlled by the Nexus 1000v, and it is isolated from the rest of the virtualization stack.  So if we want to move a machine over to the other virtual switch, we can do that with little trouble.  A simple edit of the VM’s configuration can change it back.

Now testing can coexist with a production environment, because the VMs being tested run over the 1000v.  You can install the VSG, DCNM, the ASA 1000v, and all that good vPath stuff, and test it out.

From the 1000v, I created a port profile called “uplink” that I assign to these two interfaces:

port-profile type ethernet uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,501-512
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 505
  state enabled

By making it a system VLAN, I ensure that this control/packet VLAN stays up. For the vmknic, I also created a port profile for control:

port-profile type vethernet L3-control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 505
  no shutdown
  system vlan 505
  state enabled

This allows me to migrate the vmknic over from being managed by VMware to being managed by the Nexus 1000v. My VSM has an IP address on the same subnet as vCenter (even though it’s Layer 3).

n1kv221# sh interface mgmt 0 brief

Port   VRF  Status  IP Address  Speed  MTU
mgmt0  --   up                  1000   1500

Interestingly enough, when I do the sh module vem command, it shows up with the management interface:

Mod Server-IP Server-UUID Server-Name
— ————— ———————————— ——————–
3 00000000-0000-0000-cafe-00000000000e
4 00000000-0000-0000-cafe-00000000000f

On the VMware side, too, it shows up with the management interface:

Even though I only migrated the vmknic over.

This configuration works great.  It provides a nice opportunity for the networking team to get with it and start taking back control of the access layer.  And it provides the VMware/Server team a clear path to move VMs back to a network they’re more familiar with if they are not yet comfortable with the 1000v.

Let me know what you think about this set up.

A Nexus 1000v Value Proposition

There’s a lot of collateral about why the Nexus 1000v would be a good thing to have in your virtual environment.  When I talk to people about it one of my first questions is:

“Who manages the virtual networking environment in the data center?”

Most of the time it’s the virtual machine administrators.  It’s usually not the networking team.  Typically the network team stops at the physical access layer, and anything on the server is the responsibility of the server administrator (who is also the VM administrator).

If the shop is big enough, the second question I usually ask is:

“Would you like the networking team to manage the virtual networking environment?”

Most of the time this is greeted enthusiastically.  After all, the networking team has to troubleshoot the VMware environment anyway.  Why not just give them control of it?  That’s one less problem the virtual administrative team has to deal with.

That’s one of the best benefits of the Nexus 1000v.  Those old lines of demarcation are back.  And the cool thing?  Network visibility is back, with a consistent command line.

Here’s a video I made to show this line of demarcation in action.

Unfortunately, I’m not very coherent in the video, but I hope you get the idea.  Also, sorry the command line isn’t visible while I’m typing.  Hopefully I’ll get better at this in time.

Nexus 1000v Layer 3

Layer 3 mode is the recommended way to configure VSM to VEM communication in the Nexus 1000v.   Layer 3 mode keeps things simple and easier to troubleshoot.

I kept my design very simple.  There’s one VLAN (509) that I run my ESXi hosts on.  Just to give you an example of the hosts:

ESXi Host1:
ESXi Host2:


Using this I had a simple uplink port-profile defined:
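Something along these lines, assuming VLAN 509 is the only VLAN the hosts need (the exact VLANs and profile details will differ in your environment; this mirrors the uplink profile shown earlier):

port-profile type ethernet uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 509
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 509
  state enabled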

And a simple management port-profile:
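Roughly like this, again assuming VLAN 509 (the profile name here is my own placeholder):

port-profile type vethernet L3-management
  vmware port-group
  switchport mode access
  switchport access vlan 509
  no shutdown
  system vlan 509
  state enabled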

I had everything set up right… or so I thought.  The only problem (before, not in the output above) was that I couldn’t see my VEMs!  They were all hooked up in vCenter, and I was even running traffic through them.  But no VEMs:

I finally stumbled upon this nice document and realized I hadn’t enabled l3control.  Doing that:
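It’s a one-line addition under the vethernet port-profile that the VMkernel interface uses (profile name assumed here):

n1kv221(config)# port-profile type vethernet L3-management
n1kv221(config-port-prof)# capability l3control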

And Bam!  Everything worked: