Why is NFS transfer speed slower than HTTP?

Suppose I have a large file on my NFS server. The server and my desktop are connected by a 100 Mbit/s network. If I mount a directory on my desktop and copy a big file to the local filesystem, I get about 3.5 MB/s. But if I transfer the same file using wget (with nginx on the server side), I get about 6.1 MB/s.
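Roughly, this is what I am comparing (the export path, mount point, and file name below are just placeholders):

    # NFS: mount the export and time a copy to the local filesystem
    mount -t nfs4 server:/export /mnt/nfs
    time cp /mnt/nfs/bigfile /tmp/bigfile                 # ~3.5 MB/s

    # HTTP: fetch the same file from nginx running on the server
    time wget -O /tmp/bigfile http://server/bigfile       # ~6.1 MB/s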

Why is that? Why is NFS performance so poor? And most importantly, how can I improve it?

I have Linaro (an Ubuntu derivative for ARM systems) on the server and openSUSE 11.4 on the client; NFS is version 4.


Benchmark a local copy on the server itself, both disk-to-RAM and RAM-to-RAM, to determine the maximum throughput your server can achieve (a dd sketch is below). Network transfers will be slightly slower at best, and probably much slower or simply capped by the 100 Mbit/s link. Try serving one large file from a ramdisk over NFS to determine whether this is just a RAM vs. disk issue (a sketch of that follows as well).

The fs-cache documentation notes that "NFS requests would be satisfied faster from server memory rather than from local disk." Are you using FS-Cache and cachefilesd on the NFS client, preferably with an SSD-based RAID for the cache directory? (See the mount sketch further down.)

With minimal default installs and no tuning, I would expect nginx to outperform NFS here, since it comes down to RAM vs. disk. From the nginx docs: "By default, NGINX handles file transmission itself and copies the file into the buffer before sending it." Even from disk, with commodity hardware and some tuning, you should be able to saturate a 100 Mbit/s network with either nginx or NFS, roughly 10 MB/s.
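For example, a quick way to get those baseline numbers on the server is dd (the file path is just an example):

    # Drop the page cache first so the read really comes from disk
    sync; echo 3 > /proc/sys/vm/drop_caches

    # Disk-to-RAM: read the file from disk and discard the output
    dd if=/path/to/bigfile of=/dev/null bs=1M

    # RAM-to-RAM: repeat the read; the second run is served from the page cache
    dd if=/path/to/bigfile of=/dev/null bs=1M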
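A minimal sketch of the ramdisk test, assuming the standard kernel NFS server and example paths:

    # On the server: create a tmpfs ramdisk and put one large test file on it
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk
    dd if=/dev/zero of=/mnt/ramdisk/testfile bs=1M count=512

    # Export it (add the line to /etc/exports, then re-export)
    echo '/mnt/ramdisk *(ro,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra

    # On the client: mount it and time a read; with NFSv4 the export path
    # may need adjusting to match your pseudo-root (fsid=0) layout
    mkdir -p /mnt/ramtest
    mount server:/mnt/ramdisk /mnt/ramtest
    dd if=/mnt/ramtest/testfile of=/dev/null bs=1M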
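If you do try FS-Cache, the rough shape on the client side is: run cachefilesd (its cache directory is set by the dir line in /etc/cachefilesd.conf, ideally on fast local storage such as an SSD), then mount with the fsc option. A sketch, assuming the cachefilesd package is installed:

    # Start the cache daemon (use your distro's init command)
    /etc/init.d/cachefilesd start

    # Mount the export with fsc so read data can be cached on local disk
    mount -t nfs4 -o fsc server:/export /mnt/nfs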