Is 10GbE with FCoE, carrying both SAN and LAN traffic, a good solution?

I'm building a small datacenter from the ground up, and I'm considering 10GbE. I'm looking for some advice about this decision.

I've compared InfiniBand, FC, 10GbE, and LACP with GigE, and in the end 10GbE appears to be the best solution right now.

About the datacenter: it will have one or two storage arrays (two in a failover scenario), and three 1U machines running a hypervisor (XenServer is my favorite). The VMs will live on the storage, so the hypervisors will either boot from the storage or I'll put small SSDs in the 1U machines just to load the hypervisor.

So, the problem is: I'm a little confused about what I have to buy to build the network. I've seen some expensive switches, like the Cisco Nexus 5000/7000, with a lot of features, but I don't know if I really need those.

I don't have FC, so is it safe to buy plain 10GbE switches without "converged networking" features? Or should I get one of those to run FCoE?

Another question: would iSCSI over 10GbE be better than FCoE? (I'm assuming FCoE is better because it doesn't go through the IP stack.)

Thanks in advance; I'd really appreciate some opinions here.


Solution 1:

I'm with Tom here! IB (even an ancient generation) is cheaper and faster than 10GbE.

People get good numbers out of basically el-cheapo gear:

http://forums.servethehome.com/networking/1758-$67-ddr-infiniband-windows-1-920mb-s-43k-iops.html

The problem is that TCP over IB performs poorly (it kills throughput and adds huge latency), and native IB support is very limited. SMB Direct with Windows Server 2012 R2 is great (when it works).

Solution 2:

The decision between technologies should be based on an evaluation of your needs, budget, and expertise. Obviously, your choice is highly dependent on what type of storage hardware you have or will purchase, along with your networking infrastructure. Traditionally, SANs have used fibre channel because of its high speed, but with the advent of 10GbE, Ethernet has become a viable contender. Depending on the utilization level of your datacenter, you may even be able to get away with 1GbE and MPIO, with the ability to scale up later. Most major vendors will give you the option between iSCSI, FCoE, and FC offerings, and the choice among these should be based on what your current (or desired) infrastructure is, taking into consideration your staff's expertise.
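To make the "1GbE and MPIO" option concrete, here is a rough sketch of the Linux initiator side with two NICs feeding the same iSCSI target. The interface names, target IP, and multipath settings are made-up placeholders, not a tested recipe for any specific array:

```shell
# Create one iSCSI interface binding per physical NIC so each path is distinct
iscsiadm -m iface -I eth2-iface --op new
iscsiadm -m iface -I eth2-iface --op update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I eth3-iface --op new
iscsiadm -m iface -I eth3-iface --op update -n iface.net_ifacename -v eth3

# Discover the target through both interfaces, then log in on every path
iscsiadm -m discovery -t sendtargets -p 192.168.50.10 -I eth2-iface -I eth3-iface
iscsiadm -m node -L all

# dm-multipath then aggregates the paths into a single block device;
# a minimal /etc/multipath.conf for round-robin across both links:
#   defaults {
#       path_grouping_policy  multibus
#       path_selector         "round-robin 0"
#   }
```

The same layout scales up later: swap the 1GbE NICs for 10GbE ones and the multipath configuration stays the same.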

I can't comment much on InfiniBand, as I've never used it myself, other than that its use is less prevalent than these other technologies, with correspondingly fewer vendors to choose from. The side risk is finding staff who can support less common equipment.

Personally, if you (and your staff) have no experience with (nor existing infrastructure for) fibre channel, my recommendation would be an iSCSI offering, as your learning curve (and possibly your implementation cost) will be much lower. Most people forget that hardware costs are tiny compared to labor. I spend ten times more on personnel than on my hardware budget, so if some hardware is a little more expensive but well understood by my staff (or I can easily find someone to work on it), that becomes the obvious choice. Unless, of course, you're looking for a new learning opportunity. :P

Solution 3:

Why?

Given the high prices and low bandwidth, I would always prefer InfiniBand to 10GbE. Plus a 1GbE-based uplink, unless you need more than 1GbE of uplink bandwidth.

Due to other constraints I'm using 10GbE on some servers (mostly, though, nearly everything is 1GbE, and the Netgear TXS 752 we use as the backbone has 4x 10GbE SFP+ ports), and the price of the network cards is, ouch, painful compared to the much faster InfiniBand.

Solution 4:

FCoE makes sense if you have an existing FC infrastructure and need to present FC LUNs to new servers that don't have FC HBAs (or you're running out of licensed FC ports on your FC switches, which amounts to the same thing). In that case you take 10GbE and run FCoE over it to cut down on FC gear costs. Building FCoE from scratch is pointless: run iSCSI (or SMB Direct with RDMA if you're on the "dark side") over 10GbE and be happy. With a recent, decent multi-GHz, multi-core CPU, and with both TCP and iSCSI at least partially offloaded to the NIC ASICs, there's no difference between storage-over-TCP and storage-over-raw-Ethernet. Good luck, my friend!
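If you do go iSCSI over 10GbE, most of the tuning happens on the NIC and MTU side rather than in the initiator itself. A hedged sketch (the interface name, target IP, and IQN below are placeholders for your own values):

```shell
# Jumbo frames cut per-packet overhead; the switch ports and the
# target must be configured for the same MTU end to end
ip link set dev eth4 mtu 9000

# Check which offloads the NIC ASIC is already handling for you
ethtool -k eth4 | grep -E 'segmentation-offload|receive-offload|checksumming'

# With offloads in place, a plain software initiator is usually enough:
iscsiadm -m discovery -t sendtargets -p 10.0.10.20
iscsiadm -m node -T iqn.2013-01.com.example:storage.lun1 -p 10.0.10.20 --login
```

This is the practical upshot of the offload point above: the CPU cost that used to justify dedicated HBAs is now largely absorbed by the NIC.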