Ansible SSH prompt / known_hosts issue
Solution 1:
The Ansible docs have a section on this. Quoting:
Ansible has host key checking enabled by default.
If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected. If a host is not initially in ‘known_hosts’ this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this.
If you understand the implications and wish to disable this behavior, you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg:
[defaults]
host_key_checking = False
Alternatively this can be set by the ANSIBLE_HOST_KEY_CHECKING environment variable:
$ export ANSIBLE_HOST_KEY_CHECKING=False
Also note that host key checking in paramiko mode is reasonably slow, therefore switching to ‘ssh’ is also recommended when using this feature.
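For a one-off run (for example, the very first provisioning pass), the variable can also be set inline for a single command instead of being exported; the inventory path below is just a placeholder:
$ ANSIBLE_HOST_KEY_CHECKING=False ansible all -i inventory -m ping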
Solution 2:
To update the local known_hosts file, I ended up using a combination of ssh-keyscan (with dig to resolve a hostname to an IP address) and the Ansible known_hosts module, as follows (filename: ssh-known_hosts.yml):
- name: Store known hosts of 'all' the hosts in the inventory file
  hosts: localhost
  connection: local

  vars:
    ssh_known_hosts_command: "ssh-keyscan -T 10"
    ssh_known_hosts_file: "{{ lookup('env','HOME') + '/.ssh/known_hosts' }}"
    ssh_known_hosts: "{{ groups['all'] }}"

  tasks:
    - name: For each host, scan for its ssh public key
      shell: "{{ ssh_known_hosts_command }} {{ item }},`dig +short {{ item }}`"
      with_items: "{{ ssh_known_hosts }}"
      register: ssh_known_host_results
      ignore_errors: yes

    - name: Add/update the public key in the '{{ ssh_known_hosts_file }}'
      known_hosts:
        name: "{{ item.item }}"
        key: "{{ item.stdout }}"
        path: "{{ ssh_known_hosts_file }}"
      with_items: "{{ ssh_known_host_results.results }}"
To execute this playbook, run:
ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook path/to/the/yml/above/ssh-known_hosts.yml
As a result, for each host in the inventory, keys for all supported algorithms are added or updated in the known_hosts file, one record per hostname,ip-address pair, such as:
atlanta1.my.com,10.0.5.2 ecdsa-sha2-nistp256 AAAAEjZHN ... NobYTIGgtbdv3K+w=
atlanta1.my.com,10.0.5.2 ssh-rsa AAAAB3NaC1y ... JTyWisGpFeRB+VTKQ7
atlanta1.my.com,10.0.5.2 ssh-ed25519 AAAAC3NaCZD ... UteryYr
denver8.my.com,10.2.13.3 ssh-rsa AAAAB3NFC2 ... 3tGDQDSfJD
...
(Provided the inventory file looks like:
[master]
atlanta1.my.com
atlanta2.my.com
[slave]
denver1.my.com
denver8.my.com
)
As opposed to Xiong's answer, this properly handles the content of the known_hosts file.
This play is especially helpful when using a virtualized environment where the target hosts get re-imaged (and thus their SSH public keys change).
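If the hosts get re-imaged regularly, the play above can be run first from a wrapper playbook so that the plays that follow never hit the interactive prompt; a minimal sketch, assuming the wrapper sits next to ssh-known_hosts.yml (the file name site.yml and the ping task are only illustrative):
# site.yml
- import_playbook: ssh-known_hosts.yml

- hosts: all
  tasks:
    - name: Connectivity check, now without a host key prompt
      ping: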
Solution 3:
Disabling host key checking entirely is a bad idea from a security perspective, since it opens you up to man-in-the-middle attacks.
If you can assume the current network isn't compromised (that is, when you ssh to the machine for the first time and are presented a key, that key is in fact the machine's and not an attacker's), then you can use ssh-keyscan and the shell module to add the new servers' keys to your known_hosts file (edit: Stepan's answer does this in a better way):
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: "{{ ec2.instances }}"
(Demonstrated here as you would find after EC2 provisioning.)
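For context, ec2.instances in the loop above would typically be registered by a provisioning task earlier in the same play; a minimal sketch with placeholder values (the key pair name, AMI and count are assumptions, not taken from the original answer):
- name: provision instances (placeholder values)
  ec2:
    key_name: mykey
    instance_type: t2.micro
    image: ami-0123456789abcdef0
    wait: yes
    count: 2
  register: ec2   # ec2.instances then holds one entry per instance, each with a public_ip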
Solution 4:
Following @Stepan Vavra's correct answer, a shorter version is:
- known_hosts:
    name: "{{ item }}"
    # build the ssh-keyscan command by string concatenation instead of nesting {{ }} inside the lookup
    key: "{{ lookup('pipe', 'ssh-keyscan ' + item + ',`dig +short ' + item + '`') }}"
  with_items:
    - google.com
    - github.com
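Once the task has run, an entry can be verified locally with ssh-keygen, which prints any matching known_hosts records:
$ ssh-keygen -F github.com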