Faster way to mount a remote file system than sshfs?

Solution 1:

sshfs uses the SSH file transfer protocol, which means everything is encrypted.

If you just mount via NFS, it's of course faster, because the traffic is not encrypted.

Are you trying to mount volumes on the same network? Then use NFS.
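A minimal example, assuming a hypothetical server name, export path, and mount point (the path must already be exported on the server, e.g. via /etc/exports):

sudo mount -t nfs server:/exported/path /mnt/local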

Solution 2:

If you need to improve the speed for sshfs connections, try these options:

oauto_cache,reconnect,defer_permissions,noappledouble,nolocalcaches,no_readahead

The command would be:

sshfs remote:/path/to/folder local -oauto_cache,reconnect,defer_permissions,noappledouble,nolocalcaches,no_readahead
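If you mount the same folder regularly, roughly the same options can go into /etc/fstab so the mount comes up with a plain mount command. A sketch for Linux, with a hypothetical host and mount point (the fuse.sshfs type needs sshfs installed; the macOS-only options like defer_permissions and noappledouble are dropped here):

user@remote:/path/to/folder  /mnt/local  fuse.sshfs  noauto,_netdev,reconnect,auto_cache  0  0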

Solution 3:

Besides the already proposed Samba/NFS solutions, which are perfectly valid, you could also get some speed boost while sticking with sshfs by using quicker encryption (authentication would be as safe as usual, but the transferred data itself would be easier to decrypt) by supplying the -o Ciphers=arcfour option to sshfs. It is especially useful if your machine has a weak CPU.
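A minimal example, assuming a hypothetical remote host and mount point (note that recent OpenSSH releases have removed the arcfour ciphers, so this only works where both ends still support them):

sshfs -o Ciphers=arcfour user@remote:/path/to/folder /mnt/local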

Solution 4:

I do not have any alternatives to recommend, but I can provide suggestions for how to speed up sshfs:

sshfs -o cache_timeout=115200 -o attr_timeout=115200 ...

This should avoid some of the round trip requests when you are trying to read content or permissions for files that you already retrieved earlier in your session.

sshfs simulates deletes and changes locally, so new changes made on the local machine should appear immediately, despite the large timeouts, as cached data is automatically dropped.

But these options are not recommended if the remote files might be updated without the local machine knowing, e.g. by a different user, or a remote ssh shell. In that case, lower timeouts would be preferable.
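What that might look like is simply the same mount with much smaller values (the exact numbers are a judgment call; a few seconds keeps some caching benefit while limiting how stale the local view can get):

sshfs -o cache_timeout=5 -o attr_timeout=5 remote:/path/to/folder local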

Here are some more options I experimented with, although I am not sure if any of them made a difference:

sshfs_opts="-o auto_cache -o cache_timeout=115200 -o attr_timeout=115200   \
-o entry_timeout=1200 -o max_readahead=90000 -o large_read -o big_writes   \
-o no_remote_lock"
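The variable would then be expanded on the actual mount command, something like this (left unquoted on purpose so the shell splits it back into separate options; the paths are placeholders):

sshfs $sshfs_opts remote:/path/to/folder local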

You should also check out the options recommended by Meetai in his answer (Solution 2 above).

Recursion

The biggest problem in my workflow is when I try to read many folders, for example in a deep tree, because sshfs performs a round trip request for each folder separately. This may also be the bottleneck that you experience with Eclipse.

Making requests for multiple folders in parallel could help with this, but most apps don't do that: they were designed for low-latency filesystems with read-ahead caching, so they wait for one file stat to complete before moving on to the next.

Precaching

But something sshfs could do would be to look ahead at the remote file system, collect folder stats before I request them, and send them to me when the connection is not immediately occupied. This would use more bandwidth (from lookahead data that is never used) but could improve speed.

We can force sshfs to do some read-ahead caching, by running this before you get started on your task, or even in the background when your task is already underway:

find project/folder/on/mounted/fs > /dev/null &

That should pre-cache all the directory entries, reducing some of the later overhead from round trips. (Of course, you need to use the large timeouts like those I provided earlier, or this cached data will be cleared before your app accesses it.)

But that find will take a long time. Like other apps, it waits for the results from one folder before requesting the next one.

It might be possible to reduce the overall time by asking multiple find processes to look into different folders. I haven't tested to see if this really is more efficient. It depends on whether sshfs allows requests in parallel. (I think it does.)

find project/folder/on/mounted/fs/A > /dev/null &
find project/folder/on/mounted/fs/B > /dev/null &
find project/folder/on/mounted/fs/C > /dev/null &
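If the tree has many top-level folders, spawning one find per folder by hand gets tedious; an untested alternative is to let xargs fan the work out (the -P flag runs the finds in parallel, 8 is an arbitrary choice, and the path is a placeholder):

find project/folder/on/mounted/fs -mindepth 1 -maxdepth 1 -type d \
    | xargs -P 8 -I {} find {} > /dev/null &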

If you also want to pre-cache file contents, you could try this:

tar cf - project/folder/on/mounted/fs > /dev/null &

Obviously this will take much longer, will transfer a lot of data, and requires you to have a huge cache size. But when it's done, accessing the files should feel nice and fast.
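If pulling the whole tree through tar is too much, a lighter variation is to pre-read only the files your task actually needs, for example by extension (a sketch; the *.java pattern is just an illustration, adjust it to your project):

find project/folder/on/mounted/fs -name '*.java' -exec cat {} + > /dev/null &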