I'm contemplating the next restructuring of my medium-sized storage setup. It's currently about 30 TB, shared via AoE. My main options are:

  1. Keep it as is. It can still grow for a while.
  2. Go iSCSI. Currently it's a little slower, but there are more options.
  3. Fibre Channel.
  4. InfiniBand.

Personally, I like the price/performance of InfiniBand host adapters, and most of the offerings at Supermicro (my preferred hardware brand) have IB as an option.

Linux has had IPoIB drivers for a while, but I don't know whether there's a well-established way to use it for storage. Most comments about iSCSI over IB talk about iSER, and how it's not supported by some iSCSI stacks.

So, does anybody have some pointers about how to use IB for shared storage for Linux servers? Is there any initiator/target project out there? Can I simply use iSCSI over IPoIB?
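
For reference, what I have in mind for the plain iSCSI-over-IPoIB route would look something like the sketch below. The interface name (ib0), the addresses, and the IQN are just placeholders, and it assumes a standard open-iscsi initiator on the client side:

    # Bring up IPoIB on the InfiniBand port (interface name and addresses are examples)
    modprobe ib_ipoib
    ip addr add 192.168.100.10/24 dev ib0
    ip link set ib0 up

    # Discover and log in to an iSCSI target over the IPoIB subnet with open-iscsi
    iscsiadm -m discovery -t sendtargets -p 192.168.100.1
    iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 -p 192.168.100.1 --login

As far as I can tell, that should just work with the stock tools, but it treats the IB fabric as ordinary IP and doesn't use RDMA, which is why I keep seeing iSER mentioned.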


Solution 1:

Although it is possible to run iSCSI over InfiniBand via IPoIB, the iSER and SRP protocols yield significantly better performance on an InfiniBand network. An iSER target implementation for Linux is available via the tgt project, and an SRP target implementation for Linux is available via the SCST project. Regarding Windows support: at this time there is no iSER initiator driver available for Windows, but an SRP initiator driver for Windows is available in the winOFED software package (see also the openfabrics.org website).
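
As a rough illustration of the tgt route, exporting a block device over iSER and attaching to it looks roughly like the following. This is a minimal sketch, not a tuned setup; the IQN, the backing device /dev/sdb, and the portal address are placeholders:

    # --- Target side (tgt with the iSER transport) ---
    tgtd                                                        # start the tgt daemon
    tgtadm --lld iser --op new --mode target --tid 1 \
           -T iqn.2010-01.org.example:iser.lun0                 # create an iSER target (example IQN)
    tgtadm --lld iser --op new --mode logicalunit --tid 1 --lun 1 \
           -b /dev/sdb                                          # back LUN 1 with a block device (example)
    tgtadm --lld iser --op bind --mode target --tid 1 -I ALL    # allow any initiator (tighten for real use)

    # --- Initiator side (open-iscsi, switching the session transport to iSER) ---
    iscsiadm -m discovery -t sendtargets -p 192.168.100.1
    iscsiadm -m node -T iqn.2010-01.org.example:iser.lun0 \
             -o update -n iface.transport_name -v iser
    iscsiadm -m node -T iqn.2010-01.org.example:iser.lun0 -p 192.168.100.1 --login

The SCST/SRP path is configured quite differently (kernel modules plus scstadmin on the target, and an SRP initiator such as srp_daemon on the client), so check the SCST documentation for that side.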

Solution 2:

So... the thing that most people don't really think about is how Ethernet and IB deliver packets. On one hand, Ethernet is really easy, and it's everywhere. But its packet delivery is neither actively managed nor guaranteed. Granted, modern switching is excellent! Packet loss is no longer the problem it was way back when. However, if you really push an Ethernet network, you will start to see packets wandering around the fabric as if they don't quite know where to go. They eventually arrive where they are supposed to, but the latency from the detour has already been paid, and there is no way to coax packets onto the path they are supposed to take.

InfiniBand uses guaranteed delivery; packets and their delivery are actively managed. What you will see is that IB runs at peak performance and then occasionally drops off sharply, like a square wave. The drop is over in milliseconds, and then performance is back at its peak.

Ethernet peaks out as well, but it struggles when utilization is high. Instead of a square wave, its throughput drops off and then takes a while to climb back up to peak performance. The graph looks like a staircase on the left side and a straight drop on the right.

That's a problem in large data centers, where engineers choose Ethernet over IB because it's easy. Then the database admins and storage engineers fight back and forth, blaming each other for performance problems. And when they turn to the network team for answers, the problem gets skirted, because most monitoring tools only show that "average" network utilization is nowhere near peak. You have to watch at the packet level to see this behavior.
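
If you want to catch it, don't rely on averaged utilization graphs; watch the per-port counters directly. A quick-and-dirty way to do that (the interface name and tool choice are just examples, assuming ethtool and the infiniband-diags package are installed):

    # Ethernet side: watch drop/discard/pause counters on the NIC (interface name is an example)
    watch -n 1 "ethtool -S eth0 | grep -i -E 'drop|discard|pause'"

    # InfiniBand side: query port counters on the local HCA and scan the fabric for errors
    perfquery            # dumps counters such as transmit discards and receive errors
    ibqueryerrors        # reports ports across the fabric with non-zero error counters

Counter names vary by driver, but the point is to look at drops and pauses over short intervals rather than five-minute averages.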

Oh! There is one other reason to pick IB over Ethernet: bandwidth. Each FDR InfiniBand port runs at 56 Gb/s, so you would have to bond six 10GbE ports to match a single IB port. That means a lot less cabling.

By the way... when you're building financial, data warehouse, bioinformatics, or other large data systems, you need a lot of IOPS, bandwidth, low latency, memory, and CPU. You can't take any of them out, or your performance will suffer. I've been able to push as much as 7 GB/s from Oracle to all-flash storage. My fastest full table scan was 6 billion rows in 13 seconds.

Transactional systems can scale back on total bandwidth, but they still need all of the other components mentioned in the previous paragraph. Ideally, you would use 10GbE for public networks and IB for storage and interconnects.

Just my thoughts... John