What's the typical performance of Windows File Sharing (SMB) on a gigabit ethernet network?

The quality of your network cards, switches, and cabling can all have an effect. It might be worth searching for reviews of the NICs and switch(es) you are using to see if other people report them as not performing too well. I'm told that built-into-the-motherboard NICs are worse for Gbit transfers, though this doesn't seem to make much, if any, difference in my environment.

As a point of reference, I've just installed new Gbit switches in our office (replacing old 100Mbit switches) and large SMB transfers run at close to (but less than) 30Mbyte/sec between each combination of machines I tested. I've just done a quick test with netcat between two of the machines and got similar results, so I don't think that SMB is the bottleneck. The two machines I just tested do have two switches between them, which may have an effect, but I guess that effect is minimal given how close the figures were to an SMB transfer to a machine on the same switch.
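
If you want to run the same kind of raw-TCP check without netcat, here's a rough Python sketch of the idea. The port, chunk size, and total transfer size are arbitrary values I've picked for illustration, not anything from my actual test:

```python
# Crude raw-TCP throughput test, similar in spirit to a netcat check.
# Run "python tcp_test.py server" on one machine and
# "python tcp_test.py <server-host>" on the other.
import socket
import sys
import time

PORT = 5001                 # arbitrary test port
CHUNK = 64 * 1024           # 64 KiB per send/recv
TOTAL = 512 * 1024 * 1024   # send 512 MiB in total

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:        # client closed the connection
                    break
                received += len(data)
            elapsed = time.time() - start
            print(f"{received / elapsed / 1e6:.1f} Mbyte/sec from {addr[0]}")

def client(host):
    buf = b"\0" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((host, PORT))
        sent = 0
        start = time.time()
        while sent < TOTAL:
            sock.sendall(buf)
            sent += CHUNK
        elapsed = time.time() - start
    print(f"{sent / elapsed / 1e6:.1f} Mbyte/sec to {host}")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])
```

It's no more scientific than netcat, but it takes SMB out of the picture entirely, which is the point of the test.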

The best transfer rate I've seen over a Gbit network was a little shy of 50Mbyte/sec at its fastest. This was while transferring a drive image from one machine to a file on the other (for the purposes of converting to a VMWare virtual drive). In that case the two machines were connected via a short cross-over cable rather than via a switch. Coincidentally, one of the machines in question was one of the machines I've just tested and got ~29Mbyte/sec from - the most likely culprit for the main bottleneck in my case is probably the 8 year old wiring in the building that may have been done on the cheap! A quick (and equally unscientific) test on my little home network sees transfer rates more like 35Mbyte/sec copying a file from a Samba share to a Windows box and 25Mbyte/sec in the other direction (I'm not sure why there is a discrepancy there, as in both cases the copy was managed by TeraCopy on the Windows box - I might have to investigate that further at some later time).

Jumbo frames are going to make a difference for bulk transfers, so I suggest you give that a try if all your kit supports them properly.

To cut a long story short: going by my anecdotal experience, your 20Mbyte/sec is a bit slow, but not massively so. All my Windows and Samba installs are pretty much completely untuned, so I suspect that your hardware/wiring are the difference between what I see and what you see.

Edit

Of course, five years on from this answer, hardware and software have moved on. I often see 90+ MiB/sec transfers on machines with Gbit networking, even with cheap kit. My home media/backup/other server seems limited to a little over 60 MiB/sec for bulk transfers, but that appears to be Samba being CPU-bound on a single core of the box's hardware.


Ahh... this is where it's important for a server guy to understand what's under the hood. Since this question is two years old, I figure he's solved it already. However, for posterity or anyone with a similar issue, what he probably ran into is this:

(TCP window size in bytes * 8) / (RTT in seconds) = Max TCP throughput in bps

While you might have a Gigabit network, a single TCP flow likely won't be able to get that high.

Here is a simple table assuming the default 65535-byte TCP window size in Vista:

RTT 10 ms => TCP throughput = 52,428,000 bps ≈ 52 Mbps
RTT 20 ms => TCP throughput = 26,214,000 bps ≈ 26 Mbps
RTT 50 ms => TCP throughput = 10,485,600 bps ≈ 10 Mbps
RTT 100 ms => TCP throughput = 5,242,800 bps ≈ 5.2 Mbps
RTT 150 ms => TCP throughput = 3,495,200 bps ≈ 3.5 Mbps
RTT 200 ms => TCP throughput = 2,621,400 bps ≈ 2.6 Mbps
RTT 300 ms => TCP throughput = 1,747,600 bps ≈ 1.7 Mbps
RTT 500 ms => TCP throughput = 1,048,560 bps ≈ 1.0 Mbps
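
If you'd rather not do the arithmetic by hand, a couple of lines of Python reproduce the table above; the 65535-byte window is the only input, everything else is just the formula:

```python
# Max single-flow TCP throughput for a fixed window size at various RTTs.
WINDOW_BYTES = 65535  # default Vista TCP window size, per the table above

for rtt_ms in (10, 20, 50, 100, 150, 200, 300, 500):
    bps = WINDOW_BYTES * 8 / (rtt_ms / 1000.0)  # window in bits / RTT in seconds
    print(f"RTT {rtt_ms:>3} ms => {bps:>12,.0f} bps ({bps / 1e6:.1f} Mbps)")
```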

At 20Mbytes/sec, or 160Mbits/sec, your round-trip latency is likely on the order of 3 milliseconds. The only other way to speed that up is by using TCP optimizers that do de-duplication over the wire or splice fragments together into larger packets; over a LAN that likely isn't going to gain you much for the expense. If you are using SoHo gear like Linksys or Netgear, your latency is probably being introduced by the lack of shared buffers on the switch. If it's a larger switch, like a 24 port, try making sure that the two devices are connected to the same ASIC. This will help with the serialization delay, but not by much. If you could drop the RTT down to 2 ms you'd get a boost up to about 31-32Mbytes/sec. If the machines are on two different switches, there isn't much you can do without new hardware.
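
As a quick sanity check on those figures, the same formula run in reverse (again assuming the 65535-byte window) gives the RTT implied by the observed 20Mbytes/sec, and the throughput you'd get at a 2 ms RTT:

```python
# Back-of-the-envelope check of the figures above, assuming a 65535-byte window.
WINDOW_BITS = 65535 * 8

observed_bps = 160e6                       # ~20 Mbyte/sec = 160 Mbit/sec
implied_rtt_s = WINDOW_BITS / observed_bps
print(f"implied RTT: {implied_rtt_s * 1000:.1f} ms")          # ~3.3 ms

bps_at_2ms = WINDOW_BITS / 0.002
print(f"at 2 ms RTT: {bps_at_2ms / 8 / 1e6:.1f} Mbyte/sec")   # ~32.8 Mbyte/sec
```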