Category Archives: Nexus 1000v

1000v in and out of vCenter

I was setting up the Nexus 1110 (aka: the virtual services appliance, aka: VSA) with one of our best customers, and partway through, the appliance rebooted and never came back up; the only way to recover it was to completely reinstall the firmware from the remote media.  Most of this was probably my fault because I didn’t follow the docs exactly, and I think we can now move forward, but it made me realize I hadn’t written down an important procedure: how to reconnect an orphaned 1000v to a new virtual supervisor module (VSM).
Here’s the situation:  when you lose the 1000v that connects into vCenter, there is no way to remove the virtual distributed switch (VDS or DVS) that the 1000v presented to vCenter.  You can remove hosts from the DVS, but you can’t get rid of the switch itself.
In the picture above is my DVS.  If I try to remove it, I get the following error:
In my case, I didn’t want to get rid of it; I just wanted to reconnect a new VSM that I created with the same name.  But this procedure can be used to remove the 1000v DVS from vCenter as well.
So here’s how you do it:
Adopt an Orphaned Nexus 1000v DVS
Step 1.  Install a VSM
I usually install mine manually, so that it doesn’t try to register with vCenter or with one of the hosts.  Don’t do any configuration other than an IP address; just get it to the point where you can log in.  Once you can log in, if you did create an SVS connection you’ll need to disconnect.  In mine, I made an SVS connection and called it vcenter.  To disconnect from vCenter and erase the SVS connection, run:
# config
# svs connection vcenter
# no connect
# exit
# no svs connection vcenter
Trivia: What does SVS stand for?  "Service Virtual Switch."
Step 2.  Change the hostname to match what is in vCenter
Looking at the error picture above, you can see there is a folder named nexus1000v with a DVS named nexus1000v.  To make vCenter think this new 1000v is the same one, we need to change the hostname to match what is in vCenter:
nexus1000v-a# conf
nexus1000v-a(config)# hostname nexus1000v
nexus1000v(config)#
Step 3.  Build SVS Connection
Since we destroyed (or never built) the SVS connection in step 1, we’ll need to build one and try to connect.  The SVS connection should have the same name as the one you created when you first set up your SVS.  So if you called your SVS ‘vCenter’, or ‘VCENTER’, or ‘VMware’, then you’ll need to use that same name.  I named mine ‘vcenter’, so that’s what I use.  Similarly, the datacenter-name has to match what you had before.
nexus1000v(config)# svs connection vcenter
nexus1000v(config-svs-conn)# remote ip address 10.93.234.91 port 80
nexus1000v(config-svs-conn)# vmware dvs datacenter-name Lucky Lab
nexus1000v(config-svs-conn)# protocol vmware-vim
nexus1000v(config-svs-conn)# max-ports 8192
nexus1000v(config-svs-conn)# admin user n1kUser
nexus1000v(config-svs-conn)# connect
ERROR:  [VMware vCenter Server 5.0.0 build-455964] Cannot create a VDS of extension key Cisco_Nexus_1000V_1169242977 that is different than that of the login user session Cisco_Nexus_1000V_125266846. The extension key of the vSphere Distributed Switch (dvsExtensionKey) is not the same as the login session’s extension key (sessionExtensionKey)..
Notice that when I tried to connect I got an error.  This is because the extension key in my Nexus 1000v (created when it was installed) doesn’t match the old one.  The nice thing is that I can actually change it, and that is how I make this new 1000v take over for the other one.

Step 4.  Change the extension key to match what is in vCenter.
To see what the current (offending) extension key is, run the following command:
nexus1000v(config-svs-conn)# show vmware vc extension-key
Extension ID: Cisco_Nexus_1000V_125266846
That is the one we need to change.  You can see the extension key that vCenter wants in the error message from the previous step: ‘Cisco_Nexus_1000V_1169242977’.  So we need to make the extension-key on our 1000v match it.  No problem:
nexus1000v(config-svs-conn)# no connect
nexus1000v(config-svs-conn)# exit
nexus1000v(config)# no svs connection vcenter
nexus1000v(config)# vmware vc extension-key Cisco_Nexus_1000V_1169242977

Now we should be able to connect and run things as before.
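
For example, you can rebuild the SVS connection with the same settings we used in step 3 and then connect; the show svs connections at the end is just a sanity check that the connection comes up:

nexus1000v(config)# svs connection vcenter
nexus1000v(config-svs-conn)# remote ip address 10.93.234.91 port 80
nexus1000v(config-svs-conn)# vmware dvs datacenter-name Lucky Lab
nexus1000v(config-svs-conn)# protocol vmware-vim
nexus1000v(config-svs-conn)# max-ports 8192
nexus1000v(config-svs-conn)# admin user n1kUser
nexus1000v(config-svs-conn)# connect
nexus1000v(config-svs-conn)# exit
nexus1000v(config)# show svs connections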

Step 5. (Optional) Remove the 1000v

If you’re just trying to remove the 1000v because you had that orphaned one sitting around, connect to vCenter with the new extension key and then remove the DVS:

nexus1000v(config)# svs connection vcenter
nexus1000v(config-svs-conn)# no connect
nexus1000v(config-svs-conn)# connect
nexus1000v(config-svs-conn)# no vmware dvs
This will remove the DVS from the vCenter Server and any associated port-groups. Do you really want to proceed(yes/no)? [yes] yes

Now the orphaned Nexus 1000v is gone. If you want to remove it from your vCenter plugins as well, you will have to navigate the Managed Object Browser and remove the extension key. Not a big deal. Open a web browser to the host that manages vCenter (e.g. http://10.93.234.91) and click “Browse objects managed by vSphere”. From there go to “content”, then “Extension Manager”. To unregister the 1000v plugin, select “UnregisterExtension” and enter the vCenter extension key. This will be the same extension key that you used in step 4 (in our example: Cisco_Nexus_1000V_1169242977).

Hope that helps!

Quick SPAN with the Nexus 1000v

Today I thought I’d take a look at creating a SPAN session on the 1000v to monitor traffic.  I found it really easy to do!  SPAN is one of those things that takes longer to read about and understand than to actually configure.  I find that true of a lot of Cisco technologies: FabricPath, OTV, LISP, etc.

SPAN stands for “Switched Port Analyzer”.  It’s basically port mirroring: you capture the traffic going through one port and mirror it to another.  This is one of the out-of-the-box benefits of the 1000v; it means the network administrator no longer has to treat the VM environment as a big black box.

To follow the guide, I installed three VMs: iperf1, iperf2, and xcat.  The idea was to monitor traffic between iperf1 and iperf2 from the xcat virtual machine.

On the xcat virtual machine I created a new interface and put it in the same VLAN as the other VMs.  These were all on my port-profile called “VM Network”, which I created like this:

conf
vlan 510
port-profile type vethernet "VM Network"
vmware port-group
switchport mode access
switchport access vlan 510
no shutdown
state enabled

Then, using vCenter, I edited the VMs to assign them to that port group. (Remember: VMware port group = Nexus 1000v port-profile.)

On the Nexus 1000v, running the command:

# sh interface virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                    Mod  Host
-------------------------------------------------------------------------------
Veth1       vmk3           VMware VMkernel          4    192.168.40.101
Veth2       vmk3           VMware VMkernel          3    192.168.40.102
Veth3       Net Adapter 1  xCAT2                    3    192.168.40.102
Veth4       Net Adapter 2  iPerf2                   3    192.168.40.102
Veth5       Net Adapter 3  xCAT                     3    192.168.40.102
Veth6       Net Adapter 2  iPerf1                   3    192.168.40.102

This lets me see which vEthernet interface is assigned to which VM. For this SPAN session, I decided I wanted to monitor the traffic coming out of iPerf1 (Veth6) on the xCAT VM (Veth5).
No problem:

Create The SPAN session

To do this, we just configure a SPAN session:

n1kv221(config)# monitor session 1
n1kv221(config-monitor)# source interface vethernet 6 both
n1kv221(config-monitor)# destination interface vethernet 5
n1kv221(config-monitor)# no shutdown

As you can see from above, I’m monitoring both received and transmitted packets on vethernet 6 (iPerf1). Those packets are then mirrored to vethernet 5 (xCAT). If you have an IP address on xCAT’s vethernet 5, you’ll find you can no longer ping it: the port is in SPAN mode. Notice also that by default the monitoring session is shut down. You have to turn it on.

Now we want to check things out:

n1kv221(config-monitor)# sh monitor
Session  State        Reason                  Description
-------  -----------  ----------------------  --------------------------------
1        up           The session is up
n1kv221(config-monitor)# sh monitor session 1
session 1
---------------
type : local
state : up
source intf :
rx : Veth6
tx : Veth6
both : Veth6
source VLANs :
rx :
tx :
both :
source port-profile :
rx :
tx :
both :
filter VLANs : filter not specified
destination ports : Veth5
destination port-profile :

Now you’ll probably want to watch the mirrored traffic, right? I just installed Wireshark on my xcat VM. (It’s Linux: yum -y install wireshark and away you go.) To watch from the command line, I just ran the command:

[root@xcat ~]# tshark -D
1. eth0
2. eth1
3. eth2
4. eth3
5. any (Pseudo-device that captures on all interfaces)
6. lo

This gives me the list of interfaces. By matching MAC addresses, I can see that eth2 (device 3 in the tshark output) is the interface connected to the Nexus 1000v.
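
If you want to double-check the match, here’s a quick way to do it (using the interface names from my setup): compare the MAC address of the guest NIC against the hardware address the 1000v reports for the corresponding vEthernet interface.

[root@xcat ~]# ip link show eth2
n1kv221# show interface vethernet 5

The MAC shown for eth2 inside the VM should match the address listed for Veth5.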

From here I run:

[root@xcat ~]# tshark -i 3 -R "eth.dst eq 00:50:56:9C:3B:13"
0.000151 192.168.50.151 -> 192.168.50.152 ICMP Echo (ping) reply
1.000210 192.168.50.151 -> 192.168.50.152 ICMP Echo (ping) reply
2.000100 192.168.50.151 -> 192.168.50.152 ICMP Echo (ping) reply
..

Then I get a long list of fun stuff to monitor. By pinging between iperf1 and iperf2 I can see all the traffic that goes on. Since there was nothing else on this VLAN it was pretty easy to see. Hopefully this helps me or you troubleshoot down the road.

Nexus 1000v – A kinder gentler approach

One of the issues skeptical server administrators have with the 1000v is that they don’t like their management interface being subject to a virtual machine.  The 1000v can be configured so that even if the VSM gets disconnected, powered off, or blown up, the system ports keep forwarding traffic, but to them that is voodoo.  Most say:  give me a simple access port so I can do my business.

I’m totally on board with this level of thinking.  After all, we don’t want any Jr. Woodchuck network engineer to be taking down our virtual management layer.  So let’s keep it simple.

In fact, you may not even want the Jr. Woodchuck networking engineer touching the production VLANs for your production VMs.  Well, here’s a solution for you:  you don’t want to do the networking, but you don’t want the networking guy to do the networking either.  So how can we make things right?  Why not just ease into it?  The diagram below presents, at the NIC level, how you can configure your ESXi hosts:

Here is what is so great about this configuration:  the VMware administrator can keep things “business as usual” with the first six NICs.

Management A/B team up with vmknic0, which has IP address 192.168.40.101.  This is the management interface and is used to talk to vCenter.  It is not controlled by the Nexus 1000v.  Business as usual here.

IP Storage A/B team up with vmknic1, which has IP address 192.168.30.101.  This is used to communicate with storage devices (NFS, iSCSI).  Not controlled by the Nexus 1000v.  Business as usual.

VM Traffic A/B team up.  This is a trunking interface, and all kinds of VLANs pass through here.  It is controlled either by a virtual standard switch or by VMware’s distributed virtual switch.  Business as usual.  You as the VMware administrator don’t have to worry about anything a Jr. Woodchuck Nexus 1000v administrator might do.

Now, here’s where it all comes together.  With UCS you can create another vmknic, vmknic2, with IP address 192.168.10.101.  This is our link that is managed by the Nexus 1000v.  In UCS we would configure this as a trunk port with all kinds of VLANs enabled over it.  It can use the same vNIC template that the standard VM-A and VM-B used.  Same VLANs, etc.

(Aside:  some people would be more comfortable with 8 vNICs; then you can do vMotion over its own native VMware interface.  In my lab this is 192.168.20.101.)

The difference is that this 192.168.10.101 address lives on our control and packet VLAN.  This is a back-end network that the VSM uses to communicate with the VEMs.  Now the only VMkernel interface that needs to be controlled by the Nexus 1000v is the one at 192.168.10.101, and it is isolated from the rest of the virtualization stack.  So if we want to move a machine over to the other virtual switch, we can do that with little problem; a simple edit of the VM’s configuration changes it back.

Now testing can coexist with a production environment, because only the VMs being tested run over the 1000v.  You can install the VSG, DCNM, the ASA 1000v, and all that good vPath stuff, and test it out.

From the 1000v, I created a port profile called “uplink” that I assign to these two interfaces:

port-profile type ethernet uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,501-512
channel-group auto mode on mac-pinning
no shutdown
system vlan 505
state enabled

By making it a system VLAN, I make it so that this control/packet VLAN stays up. For the vmknic (192.168.10.101) I also created a port profile for control:

port-profile type vethernet L3-control
capability l3control
vmware port-group
switchport mode access
switchport access vlan 505
no shutdown
system vlan 505
state enabled

This allows me to migrate the vmknic from being managed by VMware to being managed by the Nexus 1000v. My VSM has an IP address on the same subnet as vCenter (even though it’s layer 3):

n1kv221# sh interface mgmt 0 brief

--------------------------------------------------------------------------------
Port   VRF          Status   IP Address       Speed  MTU
--------------------------------------------------------------------------------
mgmt0  --           up       192.168.40.31    1000   1500

Interestingly enough, when I do the sh module vem command, it shows up with the management interface:

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
3    192.168.40.102   00000000-0000-0000-cafe-00000000000e  192.168.40.102
4    192.168.40.101   00000000-0000-0000-cafe-00000000000f  192.168.40.101

On the VMware side, too, it shows up with the management interface: 192.168.40.101

Even though I only migrated the 192.168.10.101 vmknic over.
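
If you want to see what the VEM itself thinks it is using, one quick check (assuming you have SSH access to the ESXi host with the VEM installed) is to run vemcmd show card on the host; it lists the card’s domain ID and control settings as the VEM sees them.

~ # vemcmd show card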

This configuration works great.  It provides a nice opportunity for the networking team to get with it and start taking back control of the access layer.  And it provides the VMware/Server team a clear path to move VMs back to a network they’re more familiar with if they are not yet comfortable with the 1000v.

Let me know what you think about this set up.

A Nexus 1000v Value Proposition

There’s a lot of collateral about why the Nexus 1000v would be a good thing to have in your virtual environment.  When I talk to people about it one of my first questions is:

“Who manages the virtual networking environment in the data center?”

Most of the time it’s the virtual machine administrators.  It’s usually not the networking team. Typically the network team stops at the physical access layer, and anything on the server is the responsibility of the server administrator (who is also the VM administrator).

If the shop is big enough, the second question I usually ask is:

“Would you like the networking team to manage the virtual networking environment?”

Most of the time this is greeted enthusiastically.  After all, the networking team has to troubleshoot the VMware environment anyway.  Why not just give them control of it?  That’s one less problem the virtual administrative team has to deal with.

That’s one of the best benefits of the Nexus 1000v.  Those old lines of demarcation are back.  And the cool thing?  Network visibility is back, with a consistent command line.

Here’s a video I made to show this line of demarcation in action.

Unfortunately, I’m not very coherent in the video but I hope you get the idea.  Also, sorry for the command line not being visible while I’m typing.  Hopefully I’ll get better with this in time.

Nexus 1000v Layer 3

Layer 3 mode is the recommended way to configure VSM-to-VEM communication on the Nexus 1000v.   Layer 3 mode keeps things simple and easier to troubleshoot.
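
For reference, switching the VSM to layer 3 mode happens under svs-domain.  This is just a sketch; the domain id of 100 is an arbitrary example value, not the one from my lab:

nexus1000v(config)# svs-domain
nexus1000v(config-svs-domain)# domain id 100
nexus1000v(config-svs-domain)# no control vlan
nexus1000v(config-svs-domain)# no packet vlan
nexus1000v(config-svs-domain)# svs mode L3 interface mgmt0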

I kept my design very simple.  There’s one VLAN (509) that I run my ESXi hosts on.  The IP addresses are 192.168.40.xxx.  Just to give you an example:

vCenter: 192.168.40.2
ESXi Host1: 192.168.40.101
ESXi Host2: 192.168.40.102

N1kv: 192.168.40.31

Using this I had a simple uplink port-profile defined:

nexus1000v(config-port-prof)# show port-profile name uplink

port-profile uplink
 type: Ethernet
 description: 
 status: enabled
 max-ports: 32
 min-ports: 1
 inherit:
 config attributes:
  switchport mode trunk
  switchport trunk allowed vlan 1,501,506,509-510,714,3967
  channel-group auto mode on mac-pinning
  no shutdown
 evaluated config attributes:
  switchport mode trunk
  switchport trunk allowed vlan 1,501,506,509-510,714,3967
  channel-group auto mode on mac-pinning
  no shutdown
 assigned interfaces:
  port-channel1
  port-channel2
  Ethernet3/1
  Ethernet3/2
  Ethernet4/1
  Ethernet4/2
 port-group: uplink
 system vlans: 1,501,506,509-510,714
 capability l3control: no
 capability iscsi-multipath: no
 capability vxlan: no
 capability l3-vn-service: no
 port-profile role: none
 port-binding: static

And a simple management port-profile:

nexus1000v(config-port-prof)# show port-profile name management

port-profile management
 type: Vethernet
 description: 
 status: enabled
 max-ports: 32
 min-ports: 1
 inherit:
 config attributes:
  switchport mode access
  switchport access vlan 509
  no shutdown
 evaluated config attributes:
  switchport mode access
  switchport access vlan 509
  no shutdown
 assigned interfaces:
  Vethernet1
  Vethernet4
 port-group: management
 system vlans: 509
 capability l3control: yes
 capability iscsi-multipath: no
 capability vxlan: no
 capability l3-vn-service: no
 port-profile role: none
 port-binding: static

I had everything set up right… I thought.  The only problem was (before, not in the output above) that I couldn’t see my VEMs! They were all hooked up in vCenter, and I was even running traffic through them. But no VEMs:

nexus1000v(config)# show module vem
No Virtual Ethernet Modules found.

I finally stumbled upon this nice document and realized I hadn’t enabled l3control.  Doing that:

nexus1000v(config)# port-profile type vethernet management
nexus1000v(config-port-prof)# capability l3control

And Bam!  Everything worked:

nexus1000v(config-port-prof)# show module vem
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw      
---  ------------------  ------------------------------------------------  
3    4.2(1)SV1(5.1a)     VMware ESXi 5.0.0 Releasebuild-469512 (3.0)      
4    4.2(1)SV1(5.1a)     VMware ESXi 5.0.0 Releasebuild-469512 (3.0)      

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
3    192.168.40.101   00000000-0000-0000-cafe-00000000000f  192.168.40.101
4    192.168.40.102   00000000-0000-0000-cafe-00000000000e  192.168.40.102