Mount a local directory on a remote SSH server

I want to mount a local directory on a remote SSH server: specifically, my ~/.gnupg directory, so I can use my local keyring everywhere without storing it remotely.

So I came up with this solution (rough manual commands after the list):

  • keep a local SSH server running (firewalled as you like)
  • ssh to the remote host, forwarding local port 22 to remote port 10000
  • on the remote host, launch sshfs over that forwarded port to mount my local .gnupg onto ~/.gnupg
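Done by hand, with localuser standing in for my actual local account name, the idea is roughly:

    # on the local machine: connect to the remote host, forwarding
    # remote port 10000 back to the local SSH server on port 22
    ssh -R 10000:localhost:22 user@remotehost

    # then, in that remote shell, mount my local ~/.gnupg through
    # the reverse tunnel
    mkdir -p ~/.gnupg
    sshfs -p 10000 localuser@localhost:/Users/localuser/.gnupg ~/.gnupg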

To automate this, I put the following in my ~/.ssh/config:

Host remote
        HostName remotehost
        RemoteForward 10000 localhost:22
        User user
        PermitLocalCommand yes
        LocalCommand sshfs -p 10000 remoteuser@localhost:/Users/remoteuser/.gnupg .gnupg

When I run ssh, I get:

fuse: bad mount point `.gnupg': No such file or directory

If I run the sshfs command manually after the SSH login, everything works fine, so I guess the LocalCommand directive is executed before RemoteForward is set up.

How can I solve this?


Solution 1:

Problem: ssh's LocalCommand is executed on the local (client) side, not on the remote side as you intend. There is no RemoteCommand option (newer OpenSSH releases have since added one), but you can hack the functionality into your config file. Note that all of the options below assume your remotehost:.gnupg directory exists beforehand.
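A quick way to see this for yourself (a throwaway "Host demo" entry, purely illustrative):

    # "hostname" prints the CLIENT's name on your terminal, even
    # though you have just connected to remotehost; proof that
    # LocalCommand runs on the local side
    Host demo
        HostName remotehost
        PermitLocalCommand yes
        LocalCommand hostname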

Option 1: Use two separate host specifications in your ~/.ssh/config:

Host remote
    HostName remotehost
    PermitLocalCommand yes
    LocalCommand ssh -f %r@%n-mount -p %p sshfs -p 10000 %u@localhost:%d/.gnupg .gnupg

Host remote-mount
    HostName remotehost
    ForwardAgent yes
    RemoteForward 10000 localhost:22
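To decode the tokens: when you run ssh remote, the LocalCommand above expands to something like the following (assuming your remote user is user and your local account is localuser with home /home/localuser; all placeholders):

    # %r = remote user, %n = name typed on the command line,
    # %p = port, %u = local user, %d = local home directory
    ssh -f user@remote-mount -p 22 \
        sshfs -p 10000 localuser@localhost:/home/localuser/.gnupg .gnupg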

Downside: both entries need to exist for each host on which you want this mount point.

Option 2: Combine ssh options and port forwarding into LocalCommand:

Host remote
    HostName remotehost
    PermitLocalCommand yes
    LocalCommand ssh -f %r@%h -o RemoteForward="10000 localhost:22" -o ForwardAgent=yes -p %p sshfs -p 10000 %u@localhost:%d/.gnupg .gnupg

Note that the subtle difference between the two LocalCommand lines is the use of %n in the first example and %h in the second. This will work, but rests on one huge ASSUMPTION: you NEVER ssh to a host by its true name, only via the "short names" that exist in your .ssh/config file; otherwise you'll end up with an infinite loop of ssh connections trying to execute your LocalCommand.
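To make the loop risk concrete (hypothetical invocations against the config above):

    # safe: "remote" matches the Host block; the inner ssh dials the
    # true name "remotehost", which matches no block, so it simply
    # connects and runs sshfs
    ssh remote

    # dangerous: if a "Host remotehost" (or "Host *") block carried
    # the same LocalCommand, the inner "ssh user@remotehost" would
    # match it again and fork another ssh, recursing forever
    ssh remotehost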

Option 3: Use SSH Multiplexing to setup only one connection to the remote:

Host remote
    HostName remotehost
    PermitLocalCommand yes
    LocalCommand ssh -f %r@%h -o RemoteForward="10000 localhost:22" -o ForwardAgent=yes -p %p sshfs -p 10000 %u@localhost:%d/.gnupg .gnupg
    ControlMaster auto
    ControlPersist 30m
    ControlPath ~/.ssh/controlmasters/%r@%h:%p

I think that's the only winning solution: it can even work in Host * rules, AND it doesn't suffer from the downsides above. Multiplexing also means that a second ssh session to the same host will NOT attempt to remount the same directory via sshfs.
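Two practical notes: ssh will not create the ControlPath directory for you, and the -O flag lets you inspect or tear down the shared connection by hand:

    # create the control-socket directory once, with tight permissions
    mkdir -p ~/.ssh/controlmasters
    chmod 700 ~/.ssh/controlmasters

    # later: check whether a master connection is still alive, or
    # close it explicitly (which drops the forward and breaks the
    # sshfs connection along with it)
    ssh -O check remote
    ssh -O exit remote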

Caveat: one final issue I've not bothered to resolve is that your remote sshfs mount will persist long after you log out of the remote host. In fact, it will never unmount unless your local host goes offline or the connection is otherwise broken.

You could look at some other option to unmount that sshfs mount as you log out of the remote host, perhaps using ideas such as this. Or you could play games with the LocalCommand to execute something that watches and self-unmounts after it sees some trigger event, but that seems fragile at best.
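As one fragile sketch of that idea, handled from the remote side instead of the LocalCommand (assuming a bash login shell and a Linux remote with fusermount; neither may hold for you):

    # in the remote host's ~/.bash_logout: unmount if still mounted
    if mountpoint -q ~/.gnupg; then
        fusermount -u ~/.gnupg 2>/dev/null || umount ~/.gnupg
    fi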

Another option would be to wrap your ssh commands in a shell script, or perhaps use ProxyCommand to do something tricky, but I'll leave that as an exercise for the reader.