Slow copying between NFS/CIFS directories on same server

Hmm ... I did notice a few issues and I think I found a smoking gun or two. But, first I'll ask a few questions and make assumptions about your probable answers. I will present some data that will seem irrelevant at first, but I promise, it will be worth the read. So, please, wait for it ... :-)

  • I'm assuming that by raid10, you have four drives total stripe+redundant.
  • And, that you're using Linux autoraid (vs. a hardware raid controller).
  • I'm also assuming that all SATA ports can transfer independently of one another at full transfer speed, bidirectionally, and that all SATA ports are of equally high speed. That is, if you've got a single SATA adapter/controller it is fully capable of running all disks connected to it at rated speed.
  • I'm also assuming you've got the latest SATA spec drives + controller. That is, 6.0Gb/s. That's 600MB/sec. To be conservative, let's assume we get half that, or 300MB/sec
  • The client-to-server is NIC limited (at 100MB/s), so it can't stress the drives enough.
  • In order to go faster than the NIC when doing NFS-to-NFS, I'm assuming you're using localhost, so you can go beyond NIC-limited speeds (and I think you said you tried bonding to show the NIC isn't the issue)

ISSUE #1. Your reported transfer rates, even for the fast local-to-local case, seem low. With disks that fast, I would expect better than 150MB/s. I have a 3-disk raid0 system that only does 3.0Gb/s [adapter limited] and I can get 450MB/s striped. Your disks/controller are 2x the speed of mine, so [because of the striping] I would expect you to get 300MB/sec for local-to-local, not just 150MB/sec. Or maybe even 600MB/sec [minus FS overhead, which might cut it in half for the sake of discussion]

  • From your zpool information, I noticed that your disk configuration is Western Digital and it is:
mirror-0
  ata-WDC_WD20EFRX-68AX9N0
  ata-WDC_WD20EFRX-68EUZN0
mirror-1
  ata-WDC_WD20EFRX-68AX9N0
  ata-WDC_WD20EFRX-68EUZN0
  • Now let's compare this to your iostat information. It would be nice to have iostat info on all drives for all tests, but I believe I can diagnose the problem with just what you provided.
  • sdb and sdd are maxed out
  • As you noted, this is strange. I would expect all drives to have balanced usage/stats in a raid10. This is the [my] smoking gun.
  • Combining the two: the maxed-out drives are a slightly different model than the ones that are not. I presume the zpool order is sda/sdb sdc/sdd [but it might be reversed]
  • sda/sdc are 68AX9N0
  • sdb/sdd are 68EUZN0

ISSUE #2. From a google search on WD20EFRX + 68AX9N0 + 68EUZN0, I found this page: http://forums.whirlpool.net.au/archive/2197640

It seems that the 68EUZN0 drives can park their heads after about 8 seconds whereas the other is smarter about this [or vice versa].

So, given NFS caching + FS caching + SSD caching, the underlying drives may be going idle and parking their heads. My guess is that the extra layer of caching of NFS is what tips it over the edge.

You can test this by varying the FS sync options, maybe sync is better than async. Also, if you can, I'd rerun the tests with SSD caching off. The idea is to ensure that parking does not occur and see the results.
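One way to verify the head-parking hypothesis directly is to watch SMART attribute 193 (Load_Cycle_Count) around an idle period. A rough sketch, assuming smartmontools is installed; the device names are examples:

```shell
# Extract the raw Load_Cycle_Count from `smartctl -A` output on stdin.
# In smartctl's attribute table, column 2 is the attribute name and the
# last column is the raw value.
get_load_cycles() {
    awk '$2 == "Load_Cycle_Count" { print $NF }'
}

# Against real hardware (requires root):
#   before=$(smartctl -A /dev/sdb | get_load_cycles)
#   sleep 60   # let the drive go idle
#   after=$(smartctl -A /dev/sdb | get_load_cycles)
#   echo "head parks during 60s idle: $((after - before))"
```

If the count climbs while the disks should be idle, the short park timer is firing.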

As mentioned on that page, there are some utilities that can adjust the park delay interval. If you go that route, be sure to research it thoroughly.
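For WD drives specifically, the park timer can usually be queried and changed with idle3-tools or a recent hdparm. A sketch, not a recommendation -- read the tool docs first, and the device name is an example:

```shell
# Query/adjust the WD idle3 (head-park) timer. Note that WD drives
# typically need a power cycle before a changed timer takes effect.
idle3ctl -g /dev/sdb    # show the current park timer
idle3ctl -d /dev/sdb    # example: disable head parking entirely
hdparm -J /dev/sdb      # newer hdparm can also query/set this timer
```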

UPDATE:

Your problem can be viewed as a throughput problem through a store-and-forward [with guaranteed delivery] network. Note, I'm not talking about the NIC or equiv.

Consider that an I/O operation is like a packet containing a request (e.g. read/write, buf_addr, buf_len) that gets stored in a struct. This request packet/struct gets passed between the various cache layers: NFS, ZFS, device driver, SATA controller, hard disk. At each point, you have an arrival time at the layer, and a departure time when the request is forwarded to the next layer.

In this context, the actual disk transfer speed, when a transfer finally happens, is analogous to the link speed. Most people consider only a disk's transfer speed, not when the transfer was actually initiated.

In a network router, packets arrive, but they aren't always forwarded immediately, even if the outbound link is clear. Depending on router policy, the router may delay the packet for a bit, hoping that some more packets will arrive from other sources [or from the same source if UDP], so the router can aggregate the smaller packets into a large one that can be transmitted on the outbound link more efficiently.

For disks, this "delay" could be characterized by a given FS layer's cache policy. In other words, if a request arrives at a layer at time T, instead of it departing the layer at T+1 and arriving at the next layer at T+1, it could depart/arrive at T+n. An FS cache layer might do this, so that it can do seek order optimization/sorting.

The behavior you're seeing is very similar to a TCP socket that reduced its window because of congestion.

I think it's important to split up the testing. Right now, you're doing reads and writes together, so you don't know which one is the limiting factor/bottleneck. It would be helpful to split the tests into read-only and write-only; a decent benchmark program will do this. What I'm advocating is a more sophisticated version of [these are just rough examples, not the exact arguments to use]:

For write: time dd if=/dev/zero of=/whatever_file bs=1M count=65536
For read: time dd if=/whatever_file of=/dev/null bs=1M count=65536
The reason for 64GB (bs=1M count=65536) is that it's 2x your physical RAM, which eliminates block-cache effects. Run the sync command between tests.

Apply this on local FS and repeat on NFS.

Also, do the read test on each of /dev/{sda,sdb,sdc,sdd}

Do iostat during these tests.
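Put together, the split tests above might look like the following rough harness. The paths, sizes, and device names are examples; SIZE_MB should be ~2x your physical RAM (65536 for 64GB) on real runs:

```shell
# Sequential-write test: write <size_mb> MB of zeros, fsync at the end so
# the timing includes the flush, and report dd's own throughput line.
bench_write() {  # bench_write <file> <size_mb>
    sync
    dd if=/dev/zero of="$1" bs=1M count="$2" conv=fsync 2>&1 | tail -1
}

# Sequential-read test: read <size_mb> MB back (works on files and raw devices).
bench_read() {   # bench_read <file_or_device> <size_mb>
    sync
    dd if="$1" of=/dev/null bs=1M count="$2" 2>&1 | tail -1
}

# Local FS, then repeat with an NFS path:
#   bench_write /pool/testfile 65536 && bench_read /pool/testfile 65536
# Raw-device read baseline (as root), with iostat running in another window:
#   for d in sda sdb sdc sdd; do echo "$d: $(bench_read /dev/$d 4096)"; done
```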

Note that the read test on the raw physical disks gives you a baseline/maximum for how fast the hardware can actually go; the raw-device reads should approximate the transfer specs of your drives, and write speed should be similar for a hard disk. All four disks should test at about the same speed; if not, why not? What I'm after here is the reason why only two disks were maxed out in your previous tests.

Doing the math: with 32GB of cache and a maximum transfer speed of 600MB/sec, it would take a minimum of about 55 seconds (32768 / 600) to fill or flush it. So, what is the park timeout set to?

Also, you can vary things a bit by reducing the amount of physical ram the kernel will allow via the mem= boot parameter. Try something like mem=8g to see what effect it has. There are also some /proc entries that can adjust the block layer cache flush policy.
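The /proc entries I have in mind are the VM dirty-writeback knobs. A sketch -- the values here are illustrative, not recommendations:

```shell
# View the current block-layer flush policy:
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs
# Make the kernel start flushing earlier and hold less dirty data, e.g.:
#   sysctl -w vm.dirty_background_ratio=5
#   sysctl -w vm.dirty_ratio=10
```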

Also, my FSes are ext4 and mounted with noatime. You may want to consider zfs set atime=off ...

Also, watch the system log. Sometimes, a drive reports a sense error and the system reconfigures it to use a lower transfer speed.

Also, take a look at the SMART data for the drives. Do you see anything unusual? Excessive soft retries on a given drive (e.g.).

Like I've said, the local disk performance is much less than I'd expect. I think that problem needs to be solved first, before tackling the entire system with NFS. If the raid disks all had balanced utilization and throughput was in the ballpark, I'd be less concerned about it.

My system [which also has WDC disks] is not configured for NFS (I use rsync a lot). I've got some pressing stuff to do for the next 1-2 days; after that, I'll have time to try it [I'd be curious myself].

UPDATE #2:

Good catch on the ZFS unbalance issue. This helps explain my "issue #1". It might explain NFS's flakiness as well, if the rebalance operations somehow confused NFS with regard to latency/timing, causing the "TCP window/backoff" behavior--not super high probability, but a possibility nonetheless.

For rsync testing, there's no need/desire to use NFS. If you can ssh into the server, rsync and NFS are redundant [with NFS, you'd just use cp, etc.]. To test rsync, go directly to the underlying ZFS via ssh. This will work even without an NFS mount [here's roughly the rsync setup I use]:

export RSYNC_RSH="ssh -o Compression=no"
rsync /wherever1 server:/zfsmount/whatever
Doing this over localhost or the bonded link may get the performance into the range you expect (sans the ZFS unbalance issue). If so, it clearly narrows the problem to NFS itself.

I've perused some of the kernel source for NFS. From what little I looked at, I didn't like what I saw regarding timeliness. NFS started back in the '80s, when links were slow, so it [still] has a lot of code that tries to conserve NIC bandwidth--that is, only "commit" [to] an action when absolutely necessary. Not necessarily what we want here. In my fanciful network-router-policy analogy, NFS's cache would seem to be the one with the "T+n" delay.

I'd recommend doing whatever you can to disable NFS's cache and have it pass its request to ZFS ASAP. Let ZFS be the smart one and NFS be the "dumb pipe". NFS caching can only be generic in nature (e.g. it won't even know that the backing store is a RAID or too much about the special characteristics of the base FS it's mounted on). ZFS has intimate knowledge of the RAID and the disks that compose it. Thus, ZFS's cache can be much more intelligent about the choices.

I'd say try to get NFS to do a sync mount--that might do the trick. Also, I saw something about noatime, so turn on that option as well. There may be other NFS tuning/mount options. Hopefully, if NFS is the usual suspect, it can be reconfigured to work well enough.
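As a concrete starting point, a sync/low-cache NFS setup might look like the following sketch. Option support varies by NFS version, and the paths/hostnames are examples:

```shell
# Server side, /etc/exports -- sync forces commits to stable storage:
#   /pool/share  client(rw,sync,no_wdelay)
# Client side -- sync plus noac disables write-behind and attribute caching:
#   mount -t nfs -o sync,noatime,noac server:/pool/share /mnt/share
```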

If, on the other hand, no option brings NFS to heel, would rsync over ssh be a viable alternative? What is the actual use case? It seems that you're using NFS as a conduit for large bulk transfers that need high performance (vs. [say] just as an automount point for user home directories). Is this for things like client backup to server, etc?