Smoothest workflow to handle SSH host verification errors?

Solution 1:

You can use the ssh-keygen command to remove specific entries by host:

ssh-keygen -R las-db1

If you don't have that command, you could always use sed (note that this matches the literal hostname, so it won't work if HashKnownHosts is enabled):

sed -i '/las-db1/d' /root/.ssh/known_hosts
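When the entry lives in a nonstandard file, `ssh-keygen -R` also accepts `-f`. A minimal sketch against a throwaway file (the hostnames and the key are made up for the demo):

```shell
# Build a throwaway known_hosts with two entries, using a freshly
# generated key so the file parses cleanly.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmp/key"
key=$(cut -d' ' -f1,2 "$tmp/key.pub")
printf 'las-db1 %s\nother-host %s\n' "$key" "$key" > "$tmp/known_hosts"

# Remove only the stale host; ssh-keygen leaves a .old backup behind.
ssh-keygen -R las-db1 -f "$tmp/known_hosts"
```

Unlike the sed one-liner, `ssh-keygen -R` also matches hashed entries when `HashKnownHosts` is enabled.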

Solution 2:

As a Puppet user, my method for resolving this is to have my Puppet server collect the SSH host keys and publish them to every system that makes SSH connections.

This way I don't have to worry about removing them. Ninety-nine percent of the time, Puppet has already run and updated the keys for me, since my agents run every thirty minutes. The exceptions are rare, so I don't mind a quick edit of the system-wide known_hosts when I'm not willing to wait.

class ssh::hostkeys {

  @@sshkey { "${::clientcert}_rsa":
    type => 'rsa',
    key  => $::sshrsakey,
    tag  => 'rsa_key',
  }

  if 'true' == $common::params::sshclient {
    Sshkey <<| tag == 'rsa_key' |>> {
      ensure => present,
    }
  }

  file { '/etc/ssh/ssh_known_hosts':
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
  }

}
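To check what ended up in the system-wide file on a managed node, `ssh-keygen -F` looks up a host without connecting. A sketch against a throwaway file (the hostname and key are placeholders, not output of the class above):

```shell
# Fake a one-entry known_hosts file with a freshly generated key.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmp/key"   # throwaway key for the demo
printf 'las-db1 %s\n' "$(cut -d' ' -f1,2 "$tmp/key.pub")" > "$tmp/ssh_known_hosts"

# -F prints the matching entry and exits 0 when the host is present;
# on a real node, point -f at /etc/ssh/ssh_known_hosts.
ssh-keygen -F las-db1 -f "$tmp/ssh_known_hosts"
```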

Solution 3:

I would like to add a suggestion that can help you in very specific cases where security is of less concern.

I have a lab environment with machines that get reinstalled often. Every time that happens, new host keys get generated (I could probably save the host key somewhere and set it in the post-install script).

Since security is not an issue for me in this lab environment, and the keys change so often, I have the following in my .ssh/config file:

Host lab-*
  User kenny
  IdentityFile ~/.ssh/lab_id_rsa
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

This ensures that connecting to my lab machines never triggers that error; the ssh client simply connects without checking the host key.

This is something you should only do if security is of no concern to you at all, because it leaves you open to man-in-the-middle attacks.