Category Archives: vCenter Server

Nexus 1000v – A kinder gentler approach

One of the issues skeptical server administrators have with the 1000v is that they don’t like their management interface depending on a virtual machine.  Yes, the 1000v can be configured so that if the VSM gets disconnected, powered off, or blown up, the system ports still forward traffic.  But to them that is voodoo.  Most say:  give me a simple access port so I can do my business.

I’m totally on board with this line of thinking.  After all, we don’t want any Jr. Woodchuck network engineer taking down our virtual management layer.  So let’s keep it simple.

In fact, you may not want the Jr. Woodchuck networking engineer to be able to touch the production VLANs for your production VMs either.  So here’s the situation:  you don’t want to do the networking, but you don’t want the networking guy to do the networking either.  How can we make things right?  Why not just ease into it?  The diagram below presents, at the NIC level, how you can configure your ESXi hosts:

Here is what is so great about this configuration:  the VMware administrator can run things “business as usual” with the first six NICs.

Management A/B teams up with vmknic0, which has IP address 192.168.40.101.  This is the management interface and is used to talk to vCenter.  It is not controlled by the Nexus 1000v.  Business as usual here.

IP Storage A/B teams up with vmknic1, which has IP address 192.168.30.101.  This is used to communicate with storage devices (NFS, iSCSI).  Not controlled by the Nexus 1000v.  Business as usual.

VM Traffic A/B team up as well.  This is a trunking interface, and all kinds of VLANs pass through here.  It is controlled either by a virtual standard switch or by VMware’s distributed virtual switch.  Business as usual.  You as the VMware administrator don’t have to worry about anything a Jr. Woodchuck Nexus 1000v administrator might do.

Now, here’s where it’s all good.  With UCS you can create another vmknic2 with IP address 192.168.10.101.  This is our link that is managed by the Nexus 1000v.  In UCS we would configure this as a trunk port with all kinds of VLANs enabled over it.  It can use the same vNIC template that the standard VM-A and VM-B used.  Same VLANs, etc.

(Aside:  some people would be more comfortable with 8 vNICs; then you can do vMotion over its own native VMware interface.  In my lab this is 192.168.20.101.)
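If you like doing this from the command line, creating that extra VMkernel interface on an ESXi 5.x host looks roughly like the sketch below.  The port group name n1kv-control is just a placeholder for whatever port group you park the interface on before migrating it over to the 1000v:

esxcli network ip interface add --interface-name=vmk2 --portgroup-name=n1kv-control
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.101 --netmask=255.255.255.0 --type=static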

The difference is that this IP address, 192.168.10.101, belongs on our Control & Packet VLAN.  This is a back-end network that the VSM uses to communicate with the VEM.  Now, the only VMkernel interface that we need to have controlled by the Nexus 1000v is the one with the 192.168.10.101 address, and it is isolated from the rest of the virtualization stack.  So if we want to move a machine over to the other virtual switch, we can do that with little problem.  A simple edit of the VM’s configuration can change it back.

Now testing can coexist with a production environment, because the VMs being tested are the ones running over the 1000v.  You can install the VSG, DCNM, the ASA 1000v, and all that good vPath stuff, and test it out.

On the 1000v, I created a port profile called “uplink” that I assigned to these two interfaces:

port-profile type ethernet uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,501-512
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 505
  state enabled

By making it a system VLAN, this control/packet VLAN stays up even if the VEM loses its connection to the VSM.  For the vmknic (192.168.10.101) I also created a port profile for control:

port-profile type vethernet L3-control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 505
  no shutdown
  system vlan 505
  state enabled

This allows me to migrate the vmknic over from being managed by VMware to being managed by the Nexus 1000v.  My VSM has an IP address on the same subnet as vCenter (even though it’s Layer 3):

n1kv221# sh interface mgmt 0 brief

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
mgmt0  --           up     192.168.40.31                           1000     1500
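For completeness, the Layer 3 control piece on the VSM side lives under svs-domain.  A minimal sketch looks something like this; the domain id of 100 is just an example, use whatever you allocated for this switch:

svs-domain
  domain id 100
  no control vlan
  no packet vlan
  svs mode L3 interface mgmt0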

Interestingly enough, when I do the sh module vem command, it shows up with the management interface:

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
3    192.168.40.102   00000000-0000-0000-cafe-00000000000e  192.168.40.102
4    192.168.40.101   00000000-0000-0000-cafe-00000000000f  192.168.40.101

On the VMware side, too, it shows up with the management interface, 192.168.40.101, even though I only migrated the 192.168.10.101 vmknic over.
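If you want to double-check things from the host side, the VEM can be queried directly on each ESXi box (assuming you have shell access and the VEM module installed):

~ # vem status -v
~ # vemcmd show card

The first shows whether the VEM agent is running and which uplinks it has claimed; the second shows the domain id and whether the card is using L2 or L3 control.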

This configuration works great.  It provides a nice opportunity for the networking team to get with it and start taking back control of the access layer.  And it provides the VMware/Server team a clear path to move VMs back to a network they’re more familiar with if they are not yet comfortable with the 1000v.

Let me know what you think about this set up.

ImageX Windows 2008 with vCenter Server

With xCAT I used the ImageX capabilities to clone a (virtual) machine with vCenter on it, and now I’m installing that captured image onto another virtual machine.  One reason I do it this way, as opposed to creating a VM template, is that I’m able to deploy to both physical and virtual servers.  In addition, I can deploy to both KVM and VMware VMs.

Anyway, when I booted the cloned disk, I couldn’t start vCenter Server on it.  This is something I assume happens with any Windows image that you run sysprep on, regardless of whether it’s xCAT-induced.

There were a lot of error messages as I trawled through the logs, and I searched on all of them:

“Windows could not start the VMware VirtualCenter Server on Local Computer.  For more information, review the System Event Log.”

“the system cannot find the path specified. c:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf” (odd to me, since the file did actually exist!)

“could not start the SQL Server (SQLEXP_VIM) on Local Computer”

I tried all these Google search terms with little success:

“Virtual Center will not start after reboot”

Finally, I found a post that made sense to me, so I gave it a shot.  Here’s exactly what needs to be done:

Step 1

Log into Windows Server 2008 machine and go to ‘Services’

Step 2

Right click on ‘SQL Server (SQLEXP_VIM)’  and click on Properties

Step 3

Under ‘Log On’, select the Local System account and also check ‘Allow service to interact with desktop’.

Step 4

Start the ‘SQL Server (SQLEXP_VIM)’ service.  It should start up without errors.

Step 5

Do the same procedure (Steps 2-4) for the ‘VMware VirtualCenter Server’ service.  It should now start up, and you should be able to connect to Virtual Center.
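If you would rather script this than click through the Services MMC, something like the following should be equivalent.  This is only a sketch:  I’m assuming the underlying service names are MSSQL$SQLEXP_VIM for the bundled SQL Express instance and vpxd for VMware VirtualCenter Server, so double-check the actual names in each service’s properties first.

rem run both services as LocalSystem; let the SQL service interact with the desktop
sc config "MSSQL$SQLEXP_VIM" obj= LocalSystem type= interact type= own
sc config vpxd obj= LocalSystem
net start "SQL Server (SQLEXP_VIM)"
net start "VMware VirtualCenter Server"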

Well, that wasted about 4 hours of my time, but it’s nice to have a happy ending.

VMware API code to mount NFS Datastore on ESX host.

Since I posted this to the VMware user group today, I figured I might as well put it on my blog too:

Let’s say you have an NFS server at 10.3.0.101 that exports the /install/vm directory.  You want to mount that export as a datastore on host vhost04 so you can start creating VMs.  Here is how it is done:

[cc lang='perl']
use strict;
use Data::Dumper;
require VMware::VIRuntime;
VMware::VIRuntime->import();

# Connection parameters: the ESX host, the NFS server, the exported path,
# and the name the datastore will get on the host.
my $conn;
my $esx = shift || 'vhost04';
my $server = '10.3.0.101';
my $serverpath = '/install/vm';
my $location = 'nfs_10.3.0.101_install_vm';

# Log into the ESX host directly.
eval {
    $conn = Vim->new(service_url => "https://$esx/sdk");
    $conn->login(user_name => 'root', password => 'c1ust3r');
};
if ($@) {
    print Dumper($@);
    print $@->fault_string . "\n";
}

# Since we connected straight to the host, the first HostSystem is the host itself.
my $hv = $conn->find_entity_view(view_type => 'HostSystem');

# Describe the NFS mount and hand it to the host's datastore system.
my $nds = HostNasVolumeSpec->new(accessMode => 'readWrite',
                                 remoteHost => $server,
                                 localPath  => $location,
                                 remotePath => $serverpath);
my $dsmv = $hv->{vim}->get_view(mo_ref => $hv->configManager->datastoreSystem);

eval {
    $dsmv->CreateNasDatastore(spec => $nds);
};

if ($@) {
    print $@->fault_string . "\n";
    print Dumper($@);
}

$conn->logout;
[/cc]
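If you drop that into a file, say mount_nfs.pl (the name is arbitrary), the ESX host name is the only argument; with no argument it falls back to vhost04:

perl mount_nfs.pl vhost05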