How can I forward a gpg key via ssh-agent?
I can use the SSH configuration file to enable forwarding of SSH keys added to ssh-agent. How can I do the same with GPG keys?
Solution 1:
EDIT: This answer is obsolete now that proper support has been implemented in OpenSSH, see Brian Minton's answer.
SSH can only forward TCP connections within the tunnel.
You can, however, use a program like socat
to relay the Unix socket over TCP, with something like this (you will need socat on both the client and the server hosts):
# Get the path of gpg-agent socket:
GPG_SOCK=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)
# Forward some local tcp socket to the agent
(while true; do
socat TCP-LISTEN:12345,bind=127.0.0.1 UNIX-CONNECT:$GPG_SOCK;
done) &
# Connect to the remote host via ssh, forwarding the TCP port
ssh -R12345:localhost:12345 host.example.com
# (On the remote host)
(while true; do
socat UNIX-LISTEN:$HOME/.gnupg/S.gpg-agent,unlink-close,unlink-early TCP4:localhost:12345;
done) &
Test whether it works with gpg-connect-agent. Make sure that GPG_AGENT_INFO is undefined on the remote host, so that it falls back to the $HOME/.gnupg/S.gpg-agent socket.
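For example, a minimal check on the remote host could look like this (assuming the agent socket ended up at the default path):
# On the remote host: force the fallback to the default socket,
# then ask the forwarded agent for its version
unset GPG_AGENT_INFO
gpg-connect-agent 'getinfo version' /bye
If the forwarding works, the agent replies with a D line carrying its version, followed by OK.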
Now hopefully all you need is a way to run all this automatically!
Solution 2:
OpenSSH's Unix domain socket forwarding can do this directly, starting with version 6.7.
You should be able to do something like:
ssh -R /home/bminton/.gnupg/S.gpg-agent:/home/bminton/.gnupg/S.gpg-agent -o "StreamLocalBindUnlink=yes" -l bminton 192.168.1.9
Solution 3:
In newer versions of GnuPG or Linux distributions, the socket paths can differ. They can be found via
$ gpgconf --list-dirs agent-extra-socket
and
$ gpgconf --list-dirs agent-socket
Then add these paths to your SSH configuration:
Host remote
RemoteForward <remote socket> <local socket>
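For example, with the socket locations that current GnuPG typically reports (the paths below are illustrative; substitute the output of the two gpgconf commands above, running agent-socket on the remote host and agent-extra-socket on the local one):
Host remote
RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra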
Quick solution for copying the public keys:
scp .gnupg/pubring.kbx remote:~/.gnupg/
On the remote machine, activate GPG agent:
echo use-agent >> ~/.gnupg/gpg.conf
On the remote machine, also modify the SSH server configuration and add this parameter (/etc/ssh/sshd_config):
StreamLocalBindUnlink yes
Restart SSH server, reconnect to the remote machine - then it should work.
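On a systemd-based host, that restart might look like this (the unit name is an assumption; it varies by distribution):
sudo systemctl restart sshd   # on Debian/Ubuntu the unit is typically called "ssh"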
Solution 4:
I had to do the same, and based my script on the solution by b0fh, with a few tiny modifications: It traps exits and kills background processes, and it uses the "fork" and "reuseaddr" options to socat, which saves you the loop (and makes the background socat cleanly kill-able).
The whole thing sets up all forwards in one go, so it probably comes closer to an automated setup.
Note that on the remote host, you will need:
- The keyrings you intend to use for signing, encrypting and decrypting.
- Depending on the version of gpg on the remote host, a fake GPG_AGENT_INFO variable. I prefill mine with ~/.gnupg/S.gpg-agent:1:1; the first 1 is a PID for the gpg agent (I fake it as init's, which is always running), and the second is the agent protocol version number, which should match the one running on your local machine. A sketch of this prefill follows this list.
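A minimal way to set that variable, assuming the classic ~/.gnupg/S.gpg-agent socket path on the remote host (placing it in ~/.bashrc is just one option):
# On the remote host, e.g. in ~/.bashrc:
# PID 1 (init) is faked; the trailing 1 is the agent protocol version
export GPG_AGENT_INFO=$HOME/.gnupg/S.gpg-agent:1:1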
#!/bin/bash -e
REMOTE_HOST=$1
FORWARD_PORT=${2:-12345}
trap '[ -z "$LOCAL_SOCAT" ] || kill -TERM $LOCAL_SOCAT' EXIT
GPG_SOCK=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)
if [ -z "$GPG_SOCK" ] ; then
  echo "No GPG agent configured - this won't work out." >&2
  exit 1
fi
# Relay the local agent socket to a local TCP port; "fork" and "reuseaddr"
# replace the while-true loop and keep the background socat cleanly kill-able.
socat TCP-LISTEN:$FORWARD_PORT,bind=127.0.0.1,reuseaddr,fork UNIX-CONNECT:"$GPG_SOCK" &
LOCAL_SOCAT=$!
# \$HOME is escaped so it expands on the remote host; $FORWARD_PORT expands locally.
ssh -R $FORWARD_PORT:127.0.0.1:$FORWARD_PORT "$REMOTE_HOST" socat "UNIX-LISTEN:\$HOME/.gnupg/S.gpg-agent,unlink-close,unlink-early,fork,reuseaddr TCP4:localhost:$FORWARD_PORT"
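Usage could then look like this (the script name is hypothetical):
./forward-gpg-agent.sh host.example.com 12345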
I believe there's also a solution that involves just one SSH command invocation (connecting back from the remote host to the local one) using -o LocalCommand, but I couldn't quite figure out how to conveniently kill that upon exit.
Solution 5:
As an alternative to modifying /etc/ssh/sshd_config with StreamLocalBindUnlink yes, you can instead prevent the creation of the socket files that need replacing:
systemctl --global mask --now \
gpg-agent.service \
gpg-agent.socket \
gpg-agent-ssh.socket \
gpg-agent-extra.socket \
gpg-agent-browser.socket
Note that this affects all users on the host.
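Should you need the local gpg-agent units again later, the masking can be reverted (a sketch, using the same unit list as above):
systemctl --global unmask \
gpg-agent.service \
gpg-agent.socket \
gpg-agent-ssh.socket \
gpg-agent-extra.socket \
gpg-agent-browser.socket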
Bonus: How to test GPG agent forwarding is working:
- Local: ssh -v -o RemoteForward=${remote_sock}:${local_sock} ${REMOTE}
- Check that ${remote_sock} is shown in the verbose output from ssh
- Remote: ls -l ${remote_sock}
- Remote: gpg --list-secret-keys
- You should see lots of debug1 messages from ssh showing the forwarded traffic
If that doesn't work (as it didn't for me), you can trace which socket GPG is accessing:
strace -econnect gpg --list-secret-keys
Sample output:
connect(5, {sa_family=AF_UNIX, sun_path="/run/user/14781/gnupg/S.gpg-agent"}, 35) = 0
In my case the path being accessed perfectly matched ${remote_sock}, but that socket was not created by sshd when I logged in, despite adding StreamLocalBindUnlink yes to my /etc/ssh/sshd_config. It was created by systemd upon login.
(Note I was too cowardly to restart sshd, since I have no physical access to the host right now. service sshd reload clearly wasn't sufficient...)
Tested on Ubuntu 16.04