How can I add a .pem private key fingerprint entry to known_hosts before connecting with ssh?

You have two key pairs at play here:

  1. Server's Private/Public key.

The ssh daemon on the server has a set of private keys, created and stored in the /etc/ssh/ folder.

The RSA fingerprint you are getting from the server comes from the public key corresponding to the /etc/ssh/ssh_host_rsa_key private key.

  2. User's Private/Public key.

This is a keypair you own. The private key should be securely stored on your computer and used to authenticate to the server. The public key is on the server, in your profile's authorized_keys file: ~/.ssh/authorized_keys


So there are two different public keys, and their fingerprints will not match unless you use the same private key as the server's host key, which is unlikely.
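If you want to see the difference for yourself, ssh-keygen can print both fingerprints. The paths below are the OpenSSH defaults and may differ on your systems:

$ ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub   # run on the server: host key fingerprint
$ ssh-keygen -lf ~/.ssh/id_rsa.pub               # run on your machine: user key fingerprint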

To get rid of the warning, do exactly what it has been asking: put the server's host key entry into the /var/lib/jenkins/.ssh/known_hosts file.
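One way to do that, sketched here with build-host.example.com as a placeholder, is to let ssh-keyscan fetch the host key over the network. Note that this alone is trust-on-first-use: it records whatever key the host presents, so you still want to validate the fingerprint out of band, as discussed below.

$ ssh-keyscan -t rsa build-host.example.com >> /var/lib/jenkins/.ssh/known_hosts
$ ssh-keygen -lf /var/lib/jenkins/.ssh/known_hosts   # show fingerprints of the recorded entries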


If I understand you correctly, the private key file is in your possession and you'd like to derive the corresponding public key from it so that you can add it to your known_hosts file. If that's right, then here's what you do:

$ ssh-keygen -yf /path_to_private_key/key_file_name

That will output something like:

ssh-rsa AAAAB3NzaC....

Lastly, prefix that with the IP address to which you SSH, so that you have this:

10.200.25.5 ssh-rsa AAAAB3NzaC....

and you can add that as a line in your known_hosts file.
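As a single pipeline, assuming the placeholder key path and IP address from above, that whole sequence might look like:

$ ssh-keygen -yf /path_to_private_key/key_file_name | sed 's/^/10.200.25.5 /' >> ~/.ssh/known_hosts

Keep in mind this only silences the warning legitimately if the server really uses that key pair as its host key, which, as discussed below, is normally not the case.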


My underlying confusion was that I thought I had the exact same pair of private and public keys that the server did. What actually happens is that when I create a key pair and assign it to a new EC2 instance, the instance gets the public key of that pair put into its authorized_keys file, which allows me to connect with the private key I download when creating the pair in AWS.

I can use the fingerprinting command that comes with AWS, but it's only good for validating that the private key I have matches the public key they have stored and will put into authorized_keys.
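For reference, here is one way to reproduce that fingerprint locally for a key pair that AWS generated (AWS documents the fingerprint of keys it creates as the SHA-1 of the private key in PKCS#8 DER form; imported keys use a different scheme, and mykey.pem / mykey are placeholders):

$ openssl pkcs8 -in mykey.pem -inform PEM -outform DER -topk8 -nocrypt | openssl sha1 -c
$ aws ec2 describe-key-pairs --key-names mykey   # the fingerprint AWS has on record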

Every time a new EC2 instance comes up, it generates its own set of host key pairs for different algorithms, like RSA and DSA. I must now scrape the console logs to get the fingerprints for those keys so that I can validate that they match the host I'm connecting to.

So the steps are:

  1. Launch the EC2 instance and keep the key you get.
  2. Give that key from step 1 to Jenkins so that it can connect to the host.
  3. Use the get-console-output command to scrape the fingerprints for the host keys from the logs (see the sketch after this list).
  4. Attempt to connect to the remote instance with the key from step 1. Use the host key fingerprint from that error message to validate against the fingerprint you scraped in step 3.
  5. Once you've validated, then you know it's safe to add the remote host.
  6. Profit!!!
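A rough sketch of steps 3 and 4, assuming the AWS CLI is configured, that cloud-init prints the host key fingerprints between BEGIN/END markers in the console output, and with the instance ID and hostname as placeholders:

$ aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text \
    | sed -n '/BEGIN SSH HOST KEY FINGERPRINTS/,/END SSH HOST KEY FINGERPRINTS/p'
$ ssh-keyscan -t rsa ec2-host.example.com 2>/dev/null | ssh-keygen -lf -

The second command fingerprints the key the host actually presents over the network; compare that against the scraped value before adding the host to known_hosts.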

Keep in mind the vital issue here is that you can't trust that the host you're connecting to isn't a man-in-the-middle. If you blindly accept the key without validating its fingerprint in step 4, you may not be connecting to the server you expect. By validating in step 4 you know that your connection is secure (because of SSH's cryptography), but crucially you also know WHO you are connected to, because only one party is going to have the key pair matching the fingerprint you expect.

EDIT: The get-console-output command is not reliable for automation; it's ONLY intended for ad-hoc troubleshooting. The core problem is that AWS will arbitrarily cut parts of the log out, and/or buffer it so that you must wait a long time to see the complete entry.

Instead I'm trying to upload the host keys in the user data script, bring the system down, clear the user data so it's not accessible (because it has a private key in it), and then bring the instance back up. I need to reboot it anyway, because updated packages might require a reboot, so I can kill two birds with one stone here.
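A minimal sketch of what that user data script might look like, with everything here, including the elided key material, hypothetical (this follows the pre-generated host key approach described in the link below):

#!/bin/bash
# Install a pre-generated host key so the instance's fingerprint is known in advance
cat > /etc/ssh/ssh_host_rsa_key <<'EOF'
-----BEGIN RSA PRIVATE KEY-----
(pre-generated key material goes here)
-----END RSA PRIVATE KEY-----
EOF
chmod 600 /etc/ssh/ssh_host_rsa_key
ssh-keygen -yf /etc/ssh/ssh_host_rsa_key > /etc/ssh/ssh_host_rsa_key.pub
# Restart sshd so it picks up the new host key (service name varies by distro)
service ssh restart || service sshd restart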

https://alestic.com/2012/04/ec2-ssh-host-key/