Inject host's SSH keys into Docker Machine with Docker Compose

I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to set up my development environment.

Let's say that one of the containers is called "stack". Now what I want to do is call:

docker-compose run stack ssh [email protected]

My key pair lives on the host machine: the public key has been added to stackoverflow.com, and the private key is what will be used to authenticate me. I want this key to be available inside the container so that I can authenticate myself against stackoverflow.com from within the container, preferably without physically copying the key into the Docker Machine VM.

Is there any way to do this? Also, if my key is protected by a passphrase, is there any way to unlock it once so that I don't have to enter the passphrase manually every time the key is used inside a container?


Solution 1:

You can add this to your docker-compose.yml (assuming the user inside the container is root):

volumes:
    - ~/.ssh:/root/.ssh
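
In context, a minimal docker-compose.yml for the stack service from the question might look something like this (the image name is just a placeholder; mounting read-only is a sensible precaution):

services:
  stack:
    image: your-image        # placeholder for whatever the service actually runs
    volumes:
      # the host's keys become visible to root inside the container
      - ~/.ssh:/root/.ssh:ro

A read-write mount works too, but read-only keeps the container from touching the host's keys.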

You can also look into a more advanced solution using the SSH agent (I have not tried it myself).

Solution 2:

WARNING: This feature seems to have limited support in Docker Compose and is designed more for Docker Swarm.

I haven't verified this thoroughly, but my current impression is that:

  • In Docker Compose, secrets are just bind-mounted volumes, so there is no additional security compared to plain volumes
  • The ability to change secret file permissions on a Linux host may be limited

See answer comments for more details.


Docker has a feature called secrets, which can be helpful here. To use it one could add the following code to docker-compose.yml:

---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key

secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa

Then the new secret file can be accessed in the Dockerfile like this:

RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
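
Note that ssh refuses to use a private key file whose permissions are too open. Because a Compose secret is effectively a bind mount of the host file, the mode it ends up with inside the container can vary; if ssh complains about an unprotected key file, one workaround is to copy the secret at container start instead of symlinking it (a rough sketch, assuming the user inside the container is root):

# entrypoint.sh (sketch): copy the secret so its permissions can be tightened
mkdir -p ~/.ssh
cp /run/secrets/host_ssh_key ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
exec "$@"   # hand control over to the container's normal command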

Secret files won't be copied into the container:

When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
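
To confirm that the secret is actually visible inside the container, a quick check could look like this (using the stack service name from the question):

docker-compose run stack ls -l /run/secrets/
# host_ssh_key should show up in the listing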

For more details please refer to:

  • https://docs.docker.com/engine/swarm/secrets/
  • https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets

Solution 3:

If you're using OS X and encrypted keys, this is going to be a PITA. Here are the steps I went through while figuring this out.

Straightforward approach

One might think that there’s no problem. Just mount your ssh folder:

...
volumes:
  - ~/.ssh:/root/.ssh:ro
...

This should be working, right?

User problem

The next thing we'll notice is that we're using the wrong user ID inside the container. Fine, we'll write a script that copies the SSH keys and changes their owner. We'll also set the SSH user in the config so that the remote SSH server knows who's connecting.

...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...

#!/bin/sh
# ssh-keys.sh
# Copy the mounted keys into the current user's ~/.ssh and take ownership of them
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh

# Tell the SSH client which user name to use by default
cat <<EOF >> ~/.ssh/config
  User $SSH_USER
EOF
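
Putting it together, the service definition might look something like this (your-image and your-app are placeholders; the script is mounted in alongside the keys):

services:
  stack:
    image: your-image
    environment:
      SSH_USER: $USER
    volumes:
      - ~/.ssh:/root/.ssh-keys:ro
      - ./ssh-keys.sh:/ssh-keys.sh:ro
    command: sh -c '/ssh-keys.sh && exec your-app'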

SSH key passphrase problem

In our company we protect SSH keys with a passphrase. That doesn't work well in Docker, since it's impractical to enter the passphrase every time we start a container. We could remove the passphrase (see the example below), but that raises a security concern.

openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
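
Alternatively, ssh-keygen can strip the passphrase in place (it asks for the old passphrase once):

# -p changes the passphrase, -f selects the key file, -N "" sets an empty passphrase
ssh-keygen -p -f id_rsa -N ""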

SSH agent solution

You may have noticed that locally you don't need to enter a passphrase every time you need SSH access. Why is that? That's what the SSH agent is for. The SSH agent is basically a server that listens on a special file, a Unix socket, referred to as the "ssh auth sock". You can see its location on your system:

echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh

The SSH client communicates with the SSH agent through this file, so you only have to enter the passphrase once. Once the key is decrypted, the SSH agent keeps it in memory and uses it to answer the SSH client's authentication requests. Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:

environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
  ...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK

We don't even need to copy the keys in this case. To confirm that the keys are available, we can use the ssh-add utility:

if [ -z "$SSH_AUTH_SOCK" ]; then
  echo "No ssh agent detected"
else
  echo $SSH_AUTH_SOCK
  ssh-add -l
fi
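
With the socket mounted like this, a quick end-to-end check from a Linux host could look like the following (using the stack service name from the question, and assuming the OpenSSH client is installed in the image):

docker-compose run stack ssh-add -l
# should print the fingerprints of the keys held by the host's agent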

The problem of Unix socket mount support in Docker for Mac

Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between the Mac and the Linux VM. There's an open issue on the Docker for Mac GitHub; as of February 2019 it's still open.

So, is that a dead end? No, there is a hacky workaround.

SSH agent forwarding solution

Luckily, this issue isn't new. Long before Docker there was a way to use local SSH keys within a remote SSH session. This is called SSH agent forwarding. The idea is simple: you connect to a remote server over SSH, and your local keys become usable from within that session, so you can hop on to further servers without copying the keys anywhere.

With Docker for Mac we can use a smart trick: forward the SSH agent into the Docker virtual machine over a TCP SSH connection, and then mount the resulting socket file from the virtual machine into any other container that needs the SSH connection. Here's a picture to demonstrate the solution:

(Diagram: SSH agent forwarding)

First, we open an SSH session from the Mac, over a TCP port, to an SSH server running inside a container in the Linux VM. On the Mac side this still uses the real ssh auth sock.

Next, that SSH server forwards our agent into its container. The forwarded agent's Unix socket lives at a location that is shared with the Linux VM, so the socket exists and works on the Linux side; the broken Unix socket sharing on the Mac side no longer matters.

After that we create the container we actually need, with an SSH client inside, and share into it the Unix socket file that our forwarded agent session created.
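
To make the trick concrete, here is a rough hand-rolled sketch of the same idea. The image names, port, and socket path are hypothetical, and the repository linked below packages a more robust variant of these steps:

# 1. Inside the Docker VM, run a helper container with an SSH server (and socat),
#    sharing a named volume where the agent socket will be published
docker run -d --name agent-forwarder -p 2222:22 -v ssh-agent:/ssh-agent sshd-socat-image

# 2. From the Mac, open an agent-forwarding SSH session into it and relay the
#    forwarded agent socket to a fixed path on the shared volume
ssh -A -p 2222 root@localhost \
  'socat UNIX-LISTEN:/ssh-agent/socket,fork UNIX-CONNECT:$SSH_AUTH_SOCK'

# 3. Any container that needs SSH mounts the same volume and points
#    SSH_AUTH_SOCK at the published socket
docker run -it -v ssh-agent:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent/socket \
  some-image ssh [email protected]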

There's a bunch of scripts that simplify that process: https://github.com/avsm/docker-ssh-agent-forward

Conclusion

Getting SSH to work in Docker could've been easier, but it can be done, and it will likely improve in the future. At least the Docker developers are aware of the issue: they have already solved it for Dockerfiles with build-time secrets, and there's a suggestion for how to support Unix domain sockets.