10 GigE interfaces limit single-connection throughput to 1 Gb/s on a ProCurve 4208vl

The setup is as follows: three Linux servers with Intel CX4 10 GigE controllers and an Xserve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, along with a myriad of other machines connected through good ol' 1000BASE-T.

The interfaces are actually set up as 10 Gig according to both the switch's monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE-equipped machines through the switch is limited to exactly 1 Gb/s.
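For reference, the check on the Linux side was done with ethtool along these lines (eth2 is just an example name for the CX4 NIC, yours may differ):

    # Show negotiated speed and duplex on the 10 GigE interface
    ethtool eth2 | egrep 'Speed|Duplex'
    # On a healthy link this reports:
    #   Speed: 10000Mb/s
    #   Duplex: Full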

If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s and NFS achieves about 550 MB/s transfers. But when I go through the switch, the connection tops out at 950 Mb/s with netperf and 110 MB/s with NFS.
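To give an idea of the test method, the figures come from plain single-stream runs roughly like the following (host name and mount point are placeholders, and netserver is assumed to be running on the target machine):

    # Single TCP stream for 30 seconds against the netserver on the target
    # Direct CX4 cable: ~9000 Mb/s; through the 4208vl: ~950 Mb/s
    netperf -H nfs-server -t TCP_STREAM -l 30

    # Crude NFS check: stream a large file onto the export
    # Direct: ~550 MB/s; through the switch: ~110 MB/s
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=8192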

When I open several connections from three of the machines to the fourth, I get 350 MB/s of aggregate NFS transfer speed. So each individual 10 GigE port can actually carry much more than 1 Gb/s, but each individual connection is strictly limited to 1 Gb/s.
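A quick-and-dirty way to reproduce the same pattern with netperf instead of NFS is to kick off one stream from each client at roughly the same time (client and server names are placeholders):

    # One TCP stream from each of the three clients simultaneously;
    # each stream stays around 1 Gb/s, but the server port's aggregate goes well past it
    for host in client1 client2 client3; do
        ssh "$host" netperf -H nfs-server -t TCP_STREAM -l 30 &
    done
    wait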

Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb links. That doesn't make any sense to me, unless HP intended these ports only for cascading switches or strictly for many-clients-to-single-server traffic. Unfortunately that is NOT the envisioned setup; we need big throughput from machine to machine.

Is this a not-so-well-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.


Hmmm, from http://www.hp.com/rnd/support/faqs/4200vlSeriesfaq.htm#new2008q1:

Q: What is the ProCurve Switch vl 1-Port 10-GbE X2 Module (J8766A)?

The J8766A module is a single port 10-GbE transceiver module intended to support the existing X2 transceivers for uplink connectivity. Expected throughput is between 2.5 and 7 Gbps depending on packet size. However, traffic from a single source MAC address to a single destination MAC address will be limited to a maximum of 1 Gbps throughput. That makes this module ideal for switch-to-switch connections.


Q: What is the recommended customer solution for throughput requirements higher than 1 Gbps?

Depending on network topology, the following solutions are recommended for throughput needs higher than 1 Gigabit.

  1. For 10G throughput requirements

    • 5400zl with 10G module (for chassis-based deployment) or
    • 3500yl/2900 (for stackable deployments)
  2. For throughput needs greater than 1 Gbps, up to 4 Gbps on a ProCurve 4200vl

    • trunk together four 1Gbps links to achieve 4 Gbps throughput
  3. For throughput greater than 4Gbps or for topologies which are fiber constrained

    • use ProCurve Switch vl 1-Port 10-GbE X2 Module (J8766A)
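So on the 4200vl the "supported" workaround for more than 1 Gb/s is exactly what I upgraded away from: an LACP trunk of four 1 Gb ports. For the record, a rough sketch of that setup (interface names, IP address and switch port numbers are examples only, and the ProCurve command is from memory):

    # Linux side: build an 802.3ad (LACP) bond over four 1 Gb NICs
    modprobe bonding mode=802.3ad miimon=100
    ip link set bond0 up
    ifenslave bond0 eth0 eth1 eth2 eth3
    ip addr add 192.168.1.10/24 dev bond0

    # ProCurve side (config context): group the matching ports into an LACP trunk
    # trunk 21-24 trk1 lacp

And of course a single TCP connection still hashes onto one physical link of the trunk, so a single flow is again capped at 1 Gb/s, which is precisely the problem I was trying to solve.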

I hate to say it, but if you're using this configuration and these part numbers for server connections then you might have been given bad info; I think the 10 Gb support here is for uplinks only. We standardised on the ProCurve 5400 and 8200 series based on the advice we received from HP and their reseller, and they go like a bat out of hell at 10 Gb for server connectivity.