Network performance in a large data transfer

I'm using dd over netcat to copy a hard disk from one system to another as a straight clone.

I booted RIP on each system.

Target system:

    nc -l -p 9000 | pv | dd of=/dev/hda

Source system:

    dd if=/dev/hda | pv | nc 9000 -q 10
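(For reference, dd defaults to a 512-byte block size, which can itself limit throughput. A variant of the same pipeline with an explicit block size would look something like the following, where bs=1M and <target-ip> are placeholders to adjust:)

    nc -l -p 9000 | pv | dd of=/dev/hda bs=1M
    dd if=/dev/hda bs=1M | pv | nc <target-ip> 9000 -q 10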

The transfer seems to be hovering around 10 or 11 MB/s, with occasional bursts near 18 MB/s. The two systems are connected through a gigabit switch. ethtool eth0 on both shows:

Settings for eth0:
    Supported ports: [ TP ]
    Supported link modes:   10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Supports auto-negotiation: Yes
    Advertised link modes:  10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: umbg
    Wake-on: g
    Current message level: 0x00000007 (7)
    Link detected: yes

I may be mixing up units on the transfer rates, but is this an expected speed for this kind of transfer?

EDIT: I just tried two different cables marked as Cat 5e compliant, using a crossover connector to link the two systems directly. While ethtool still reports a speed of 1000 Mb/s, the transfer rate is only slightly higher than before. My guess is that either the drives are sucktacular, the network cards are crud, or the processor is the bottleneck.

EDIT 2: I just tried taking a hard disk from one of the units that needs to be cloned and physically connecting it to the master system. Originally one IDE channel went to the hard disk and another went to the CD-ROM. I connected the master's hard disk to the same channel as the CD-ROM, so they should be /dev/hda and /dev/hdb. I took the cable that was on the CD-ROM and connected it to the "blank slate" disk, so it should be /dev/hdc.

I rebooted and ran "dd if=/dev/hda | pv | dd of=/dev/hdc", and I'm getting a whopping... 10 MB/s. It fluctuates wildly between 8 MB/s and spikes of 12 MB/s.

So... I'm thinking it's the hard disks that are giving crap performance. I'm just so used to the network being the bottleneck that it's weird to think of the disks as the problem!


What do dd if=/dev/zero of=/dev/hda on the destination and dd if=/dev/hda of=/dev/null on the source give you? The lower of the two rates is your best case for the transfer.
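For example, limiting each test to a fixed amount of data keeps it quick and lets dd report a rate when it finishes (the bs and count values here are just suggestions, and note that the write test overwrites the destination disk):

    # source system: sequential read speed (non-destructive)
    dd if=/dev/hda of=/dev/null bs=1M count=1024

    # destination system: sequential write speed (destroys data on /dev/hda!)
    dd if=/dev/zero of=/dev/hda bs=1M count=1024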

If you have spare CPU, consider compressing the stream with gzip --fast.
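A sketch of what that could look like, with gzip on the sending side and gzip -d on the receiving side (<target-ip> is a placeholder, and whether this helps depends on how compressible the data is):

    # source system: compress before sending
    dd if=/dev/hda | gzip --fast | nc <target-ip> 9000 -q 10

    # target system: decompress before writing
    nc -l -p 9000 | gzip -d | pv | dd of=/dev/hda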

It is also worth considering jumbo frames (a larger MTU).
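Assuming the NICs and the switch both support jumbo frames, that would be something along the lines of the following on each system (9000 is a common jumbo MTU, but check what your hardware actually accepts):

    ifconfig eth0 mtu 9000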


I would expect more like 20 MB/s. Are you using Cat 6 / Cat 5e cabling?

I would also run iostat (part of the sysstat package) to see whether it thinks the drives are at 100% utilization:

iostat -x 2
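If you want to watch just the disks involved, iostat also accepts device names (hda and hdc here, matching the question); the %util column of the extended report shows how saturated each drive is:

    iostat -x 2 hda hdc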

Here is a nice article on gigabit networks by Tom's Hardware: Gigabit Ethernet: Dude, Where's My Bandwidth?