IPoIB (IP over InfiniBand) vs. RDMA performance
I have partly inherited a Linux HA cluster which, at its center, currently serves a connection with DRBD 8 over IPoIB (IP over InfiniBand) between two Debian hosts. It ain't broken, so I won't fix it.
I have also noticed that DRBD 9 supports RDMA, so the question may come up whether to replace the connection with DRBD 9 over RDMA (i.e. "native" InfiniBand) in the future.
Since I don't want to run performance tests on a production system, I am wondering: are there published performance comparisons for IPoIB vs. RDMA/InfiniBand? For instance, could I expect bandwidth/latency gains from switching away from IPoIB on the order of 10%, 50%, or 100%, say? What could be expected?
Solution 1:
Have you seen these presentations? https://www.linbit.com/en/drbd-9-over-rdma-with-micron-ssds/ and http://downloads.openfabrics.org/Media/Monterey_2015/Tuesday/tuesday_09_ma.pdf
InfiniBand is just a specific network architecture offering RDMA, but your performance will depend on what type of applications you are running. My experience is based on academic/research systems, mostly running MPI-based applications. In certain cases I have seen RDMA perform 20% better than IPoIB. I am not aware of benchmarking for your specific scenario, but there are plenty of academic papers and vendor white papers on the topic.

If you are just thinking about I/O, then consider file sizes and the ratio of reads to writes. RDMA usually provides a big benefit for random small reads, but only a small benefit for writes. You might also want to read up on RoCE (RDMA over Converged Ethernet) versus InfiniBand native RDMA.
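If you ever get a pair of non-production hosts with the same HCAs, a quick way to get numbers for your own fabric is to compare the kernel TCP path over IPoIB with native RDMA verbs on the same link, for example with qperf. Below is a minimal sketch, assuming qperf is installed on both hosts, the peer is already running `qperf` with no arguments as a listener, and `10.0.0.2` is a placeholder for the peer's IPoIB address; it measures raw link bandwidth/latency only, not DRBD throughput.

```python
#!/usr/bin/env python3
"""Rough IPoIB vs. native RDMA micro-benchmark using qperf.

Assumptions (not from the original post):
  * qperf is installed on both hosts and the peer side is already
    running `qperf` (no arguments) so it listens for test requests.
  * PEER is the IPoIB address of the peer host -- replace with yours.
"""
import subprocess

PEER = "10.0.0.2"  # hypothetical IPoIB address of the peer node

# tcp_bw/tcp_lat run over the IPoIB interface (kernel TCP/IP stack);
# rc_rdma_write_bw/_lat use native RDMA verbs on the same fabric.
TESTS = ["tcp_bw", "tcp_lat", "rc_rdma_write_bw", "rc_rdma_write_lat"]

for test in TESTS:
    print(f"--- {test} ---")
    # qperf prints results such as "bw = 3.2 GB/sec" or "latency = 6.1 us"
    subprocess.run(["qperf", PEER, test], check=True)
```

The tcp_bw/tcp_lat tests exercise the IPoIB interface through the normal TCP/IP stack, while rc_rdma_write_bw/rc_rdma_write_lat go through RDMA verbs directly, so the gap between the two pairs gives a rough upper bound on what a transport switch could gain on your hardware.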