SSH Agent Forwarding with Ansible
I’m using Ansible 1.5.3 and Git with ssh agent forwarding (https://help.github.com/articles/using-ssh-agent-forwarding). I can log into the server that I am managing with Ansible and test that my connection to git is correctly configured:
ubuntu@test:~$ ssh -T git@github.com
Hi gituser! You've successfully authenticated, but GitHub does not provide shell access.
I can also clone and update one of my repos using this account so my git configuration looks good and uses ssh forwarding when I log into my server directly via ssh.
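For example, a clone over SSH succeeds in that same interactive session (the repository name below is just a placeholder):
ubuntu@test:~$ git clone git@github.com:gituser/some-repo.git
ubuntu@test:~$ cd some-repo && git pull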
The problem: When I attempt the same test shown above using the Ansible command module, it fails with “Permission denied”. Part of the Ansible output (with verbose logging) looks like this:
failed: [xxx.xxxxx.com] => {"changed": true, "cmd": ["ssh", "-T", "git@github.com"], "delta": "0:00:00.585481", "end": "2014-06-09 14:11:37.410907", "rc": 255, "start": "2014-06-09 14:11:36.825426"}
stderr: Permission denied (publickey).
Here is the simple playbook that runs this command:
- hosts: webservers
  sudo: yes
  remote_user: ubuntu
  tasks:
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
The question: why does everything work correctly when I manually log in via ssh and run the command but fail when the same command is run as the same user via Ansible?
I will post the answer shortly if no one else beats me to it. Although I am using git to demonstrate the problem, it could occur with any module that depends on ssh agent forwarding. It is not specific to Ansible but I suspect many will first encounter the problem in this scenario.
Solution 1:
The problem is resolved by removing this line from the playbook:
sudo: yes
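For reference, the working playbook is identical except for that one line:
- hosts: webservers
  remote_user: ubuntu
  tasks:
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com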
When sudo is run on the remote host, the environment variables set by ssh during login are no longer available. In particular, SSH_AUTH_SOCK, which "identifies the path of a UNIX-domain socket used to communicate with the agent" is no longer visible so ssh agent forwarding does not work.
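You can see this directly on the managed host (the socket path shown is illustrative):
ubuntu@test:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-XXXXabcd1234/agent.1234
ubuntu@test:~$ sudo sh -c 'echo $SSH_AUTH_SOCK'   # prints an empty line: sudo's default env_reset strips the variable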
Avoiding sudo when you don't need it is one way to work around the problem. Another way is to ensure that SSH_AUTH_SOCK sticks around during your sudo session by adding this line to /etc/sudoers:
Defaults env_keep += "SSH_AUTH_SOCK"
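If you'd rather not edit /etc/sudoers itself, the same directive can go in a drop-in file under /etc/sudoers.d (a sketch; the file name ssh_auth_sock is an arbitrary choice):
echo 'Defaults env_keep += "SSH_AUTH_SOCK"' | sudo tee /etc/sudoers.d/ssh_auth_sock
sudo chmod 0440 /etc/sudoers.d/ssh_auth_sock
sudo visudo -cf /etc/sudoers.d/ssh_auth_sock   # syntax-check the file before trusting it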
Solution 2:
There are some very helpful partial answers here, but after running into this issue a number of times, I think an overview would be helpful.
First, you need to make sure that SSH agent forwarding is enabled when connecting from your client running Ansible to the target machine. Even with transport=smart, SSH agent forwarding may not be automatically enabled, depending on your client's SSH configuration. To ensure that it is, you can update your ~/.ansible.cfg to include this section:
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
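Once that is in place, a quick ad-hoc check against the same hosts should list the keys held by your local agent, confirming the socket is actually being forwarded:
ansible webservers -a 'ssh-add -l'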
Next, you'll likely have to deal with the fact that become: yes (and become_user: root) will generally disable agent forwarding because the SSH_AUTH_SOCK environment variable is reset. (I find it shocking that Ansible seems to assume that people will SSH as root, since that makes any useful auditing impossible.) There are a few ways to deal with this. As of Ansible 2.2, the easiest approach is to preserve the (whole) environment when using sudo by specifying the -E flag:
become_flags: "-E"
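A minimal sketch of where that flag goes, reusing the playbook from the question (become_flags can be set at play or task level):
- hosts: webservers
  remote_user: ubuntu
  become: yes
  become_flags: "-E"
  tasks:
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com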
However, this can have unwanted side-effects by preserving variables like PATH. The cleanest approach is to preserve only SSH_AUTH_SOCK by including it in env_keep in your /etc/sudoers file:
Defaults env_keep += "SSH_AUTH_SOCK"
To do this with Ansible:
- name: enable SSH forwarding for sudo
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'
This playbook task is a little more conservative than some of the others suggested, since it adds the line after any other default env_keep settings (or at the end of the file, if none are found), without changing any existing env_keep settings or assuming SSH_AUTH_SOCK is already present.
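One refinement worth considering: lineinfile accepts a validate parameter, which lets visudo syntax-check the modified file before it replaces /etc/sudoers, guarding against a typo locking you out of sudo entirely:
- name: enable SSH forwarding for sudo
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'
    validate: '/usr/sbin/visudo -cf %s'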