Cisco UCS + NetApp + MDS configuration Part 1

In a recent training event I attended this week, they stated that a FlexPod = Cisco UCS + Nexus + NetApp. If you don’t have those three, it’s not a FlexPod. However, if you have UCS + NetApp, you still get the benefit of the back-end support being tied together. This means that if you open a support request with Cisco TAC and it turns out it may be a NetApp Filer issue, TAC can call up NetApp and both vendors will work with the end user to resolve the problem.

In our local lab, we aren’t fortunate enough to have a Nexus 5000 set up. Instead we have MDS 9148s. These are nice boxes, so I wanted to put them to use. What we created is not a Cisco Validated Design, but it still worked for us, and I wanted to show how I set it up. Since it’s just my lab, it’s a very simple configuration, and I’ll try to update this post as I remember things. Note that the best-documented solution is to use Nexus 5000s (you would then have a FlexPod) and use 10GbE for FCoE and/or NFS.

For this first part, I just wanted to show how we did the cabling.

What you can see from the cabling is that the Fabric Interconnects are connected to the MDSes, but they are not cross-connected. This is because the Fabric Interconnects are operating in the default “End Host Mode”. You have to look at it as if each Fabric Interconnect were a PCI adapter off of a server. (Yes, I know it’s much more than that.) But if you view each Fabric Interconnect as an HBA off of a single server, this topology makes a lot more sense. In the case of one server, it’s like having two dual-port HBAs, each one connected redundantly to a Fibre Channel switch (the MDS in this case).
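One practical consequence of End Host Mode: the Fabric Interconnects log in to the fabric as end devices (NPV), so the upstream MDS switches must have NPIV enabled to accept multiple logins on a single port. This is a sketch of what that might look like on the MDS, not our exact configuration; the interface and VSAN shown are made up for illustration.

```
! On each MDS 9148 -- allow multiple FC logins per port (required
! when the Fabric Interconnect runs in End Host Mode / NPV):
feature npiv

! Hypothetical uplink port from the Fabric Interconnect, in a
! made-up VSAN 10:
interface fc1/1
  switchport mode F
  no shutdown

! Verify that the Fabric Interconnect and the blade vHBAs behind
! it have all logged in to the fabric:
show flogi database
```

If NPIV is left disabled, only the first login on the port succeeds and the blades’ vHBAs never appear in the fabric, so this is worth checking early.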

On the back end of the MDS, the NetApp is cross-connected to each MDS switch. This provides redundancy so that if any single component fails, the solution still works. For example, if MDS 9148a loses power, traffic can still flow through MDS 9148b. If Fabric Interconnect A fails, traffic can flow through Fabric Interconnect B. If one of the Filers fails, then since the Filers run in cluster mode, the other Filer will take over and continue to provide the datastores to the servers.
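The cross-connect only buys you that redundancy if the zoning on each fabric includes target ports from both Filers, so a host can still reach storage after a takeover. Here is a rough sketch of what a zone on one fabric might look like; the zone names, VSAN number, and WWPNs are all made up for illustration and are not our actual values.

```
! Hypothetical zoning on MDS 9148a (fabric A), VSAN 10.
! The zone contains one host vHBA plus a target port from
! EACH Filer, so storage stays reachable if one Filer fails.
zone name esx01_fabric_a vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01   ! host vHBA via Fabric Interconnect A
  member pwwn 50:0a:09:81:00:00:00:01   ! Filer 1 target port on fabric A
  member pwwn 50:0a:09:81:00:00:00:02   ! Filer 2 target port on fabric A

zoneset name zs_fabric_a vsan 10
  member esx01_fabric_a

zoneset activate name zs_fabric_a vsan 10
```

A mirror-image zone set would live on MDS 9148b for fabric B, using the hosts’ B-side vHBAs and the Filers’ B-side target ports.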

In my next post, I’ll talk a little more about how we configured this.
