Not getting gigabit from a gigabit link?

Check out this thread. One of the contributors (Frennzy) outlines this very nicely. I'll quote:

The "real" speed of gigabit ethernet is...

1Gbps.

That is to say, it will transfer bits at the rate of 1 billion per second.

How much data throughput you get is related to various and sundry factors:

NIC connection to system (PCI vs PCIe vs Northbridge, etc).

HDD throughput.

Bus contention.

Layer 3/4 protocol and associated overhead.

Application efficiency (FTP vs. SMB/CIFS, etc)

Frame size.

Packet size distribution (as relates to total throughput efficiency)

Compression (hardware and software).

Buffer contention, windowing, etc.

Network infrastructure capacity and architecture (number of ports, backplane capacity, contention, etc)

In short, you won't really know until you test it. NetCPS is a good tool for this, as are many others.
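If you want to put a number on your own link, here is a minimal sketch of that kind of test using plain Python TCP sockets (not NetCPS or netperf; the port number, chunk size, and duration below are arbitrary placeholders). It simply divides payload bytes received by wall-clock time:

```python
# Minimal TCP throughput probe: run receiver() on one host and
# sender("<receiver-ip>") on the other. Port, chunk size and duration are
# arbitrary; a real test should run long enough to ride out TCP slow start.
import socket, time

PORT = 5001          # arbitrary test port (assumption)
CHUNK = 64 * 1024    # 64 KiB send/recv buffer
DURATION = 10        # seconds to transmit

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.monotonic()
    while True:
        data = conn.recv(CHUNK)
        if not data:               # sender closed the connection
            break
        total += len(data)
    elapsed = time.monotonic() - start
    print(f"received {total} bytes in {elapsed:.1f} s = "
          f"{total * 8 / elapsed / 1e6:.1f} Mbit/s of payload")

def sender(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\x00" * CHUNK
    end = time.monotonic() + DURATION
    while time.monotonic() < end:
        sock.sendall(payload)
    sock.close()
```

Note this measures payload throughput end to end, so it bakes in every factor on the list above except the disk (the data is generated in memory).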

And this, later in the thread (my highlighting):

Stop thinking like this. Stop it now. All of you.

As much as you would like to figure out kilo- or mega-BYTE per second transfer, the fact is that it is variable, even when network speed remains constant. Network "speed" (bits per second) is absolute. Network throughput (actual payload data per second) is not.

To the OP: will you, in general, see faster data transfers when switching from 100Mbps to 1000Mbps? Almost definitely. Will it be anywhere close to the theoretical maximum? No. Will it be worth it? That's for you to decide.

If you want to talk about network speeds, talk about network speeds. If you want to talk about data throughput, talk about data throughput. The two are not tied together in a 1-1 fashion.
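To put rough numbers on that gap, here is a back-of-envelope calculation of the best-case TCP payload rate on a 1 Gbit/s link, assuming a standard 1500-byte MTU, IPv4 and TCP with no options, and counting the Ethernet framing, preamble, and inter-frame gap:

```python
# Best-case TCP payload rate on 1 Gbit/s Ethernet with a standard 1500-byte
# MTU, IPv4 and no TCP options. Real transfers land below this because of
# everything else in the list above (disk, CPU, application protocol, ...).
LINE_RATE = 1_000_000_000            # bits per second on the wire

MTU        = 1500                    # IP packet size
ETH_FRAME  = 14 + 4                  # Ethernet header + FCS
PREAMBLE   = 7 + 1                   # preamble + start-of-frame delimiter
IFG        = 12                      # inter-frame gap (sent as idle time)
IP_HDR     = 20                      # IPv4 header, no options
TCP_HDR    = 20                      # TCP header, no options

wire_bytes    = MTU + ETH_FRAME + PREAMBLE + IFG   # 1538 bytes per frame
payload_bytes = MTU - IP_HDR - TCP_HDR             # 1460 bytes per frame

efficiency = payload_bytes / wire_bytes
print(f"efficiency: {efficiency:.3f}")                             # ~0.949
print(f"max payload: {LINE_RATE * efficiency / 1e6:.0f} Mbit/s")   # ~949 Mbit/s
```

So even a perfect, uncontested transfer tops out around 949 Mbit/s of payload; everything else on the list only pulls the number down from there.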


The term 'theoretical maximum' is thrown around, but it does have a practical application with Ethernet technologies. On a CSMA/CD system like Ethernet, you can only push roughly half of the wire's bandwidth in useful traffic, often a bit less. The reason is that once you try to go beyond that 'maximum', transceivers start detecting collisions more often than they successfully transmit packets. Then exponential back-off comes into play and packet transmission degrades even further. Token Ring got around this, but it had a lot of its own issues and isn't really used much anymore, I believe. Ethernet/IP became the de facto standard.
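For what it's worth, here is a small sketch of the truncated binary exponential backoff rule that classic half-duplex Ethernet uses after a collision (the 512-bit slot time shown is the 10/100 Mbit value):

```python
# Truncated binary exponential backoff as used by half-duplex (CSMA/CD)
# Ethernet: after the n-th collision on a frame, wait a random number of slot
# times in [0, 2**min(n, 10) - 1]; give up after 16 attempts.
import random

SLOT_TIME_BITS = 512   # one slot time = 512 bit times on 10/100 Mbit Ethernet

def backoff_slots(collision_count):
    if collision_count > 16:
        raise RuntimeError("excessive collisions, frame dropped")
    exponent = min(collision_count, 10)
    return random.randint(0, 2 ** exponent - 1)

# As the medium gets busier, collision counts climb, the backoff window grows,
# and useful throughput falls well below the raw line rate.
for n in range(1, 6):
    print(f"after collision {n}: wait up to {2 ** min(n, 10) - 1} slot times, "
          f"chose {backoff_slots(n)}")
```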

Uplink technologies like T3 use asynchronous pairs, which allow for full throughput on each wire, but T3 is also not an Ethernet-based protocol.

As long as you are using basic, standard Ethernet devices, there will always be that 'theoretical maximum'.


Talking about CSMA/CD in the context of GbE is entirely bogus. Gigabit Ethernet, or any "full-duplex" Ethernet, does not use CSMA/CD. And while GbE did still maintain the theoretical possibility of half-duplex operation, I'm not at all sure there was ever any actual production GbE kit that did half-duplex.
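If you want to confirm what your own link actually negotiated, on Linux the speed and duplex are exposed under /sys/class/net (the same information ethtool reports). A quick sketch, with "eth0" as a placeholder interface name:

```python
# Linux-only check that a link negotiated 1000 Mbit/s full duplex, read from
# sysfs. "eth0" is a placeholder; substitute your actual interface name.
from pathlib import Path

def link_info(iface="eth0"):
    base = Path("/sys/class/net") / iface
    speed = (base / "speed").read_text().strip()     # Mbit/s, e.g. "1000"
    duplex = (base / "duplex").read_text().strip()   # "full" or "half"
    return speed, duplex

if __name__ == "__main__":
    speed, duplex = link_info()
    print(f"negotiated {speed} Mbit/s, {duplex} duplex")
```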

As for why the OP only achieved 300-odd Mbit/s across a 1000 Mbit/s link, I would suggest gathering netstat statistics for TCP from before and after each netperf run, and including the -c and -C global command-line options to see what the CPU utilization is on either end. Perhaps something is dropping packets, or perhaps the CPU on one side or the other is becoming saturated. If the systems on either end are multicore, definitely check the per-core utilizations, either with an external tool or by wading through netperf debug output.
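A rough sketch of that procedure (the hostname is a placeholder; it assumes netperf and netstat are installed and a netserver is already running on the far end):

```python
# Snapshot `netstat -s` before and after a netperf run (with -c/-C so netperf
# reports local and remote CPU utilization), then compare the two snapshots
# for climbing retransmit or drop counters. "remotehost" is a placeholder.
import subprocess

def netstat_snapshot():
    return subprocess.run(["netstat", "-s"], capture_output=True,
                          text=True, check=True).stdout

def run_netperf(host="remotehost"):
    before = netstat_snapshot()
    result = subprocess.run(["netperf", "-H", host, "-c", "-C"],
                            capture_output=True, text=True, check=True)
    after = netstat_snapshot()
    return before, result.stdout, after

if __name__ == "__main__":
    before, report, after = run_netperf()
    print(report)
    # Diff `before` and `after` (by hand or with difflib) and look for
    # retransmissions, drops, or other counters that jumped during the run.
```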

Other netperf questions are probably best left to the netperf-talk at netperf.org mailing list.