What are best practices for managing SSH keys in a team?
Solution 1:
At my company we use LDAP to maintain a consistent set of accounts across all of the machines, and then use a configuration management tool (in our case currently cfengine) to distribute an authorized_keys file for each user across all of the servers. The key files themselves are kept (along with other system configuration information) in a git repository, so we can see when keys come and go. cfengine also distributes a sudoers file that controls who can run what as root on each host, using the users and groups from the LDAP directory.
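A fragment of such a sudoers file might look something like this (the group name, host pattern, and command are invented for illustration, not our actual policy):

    # hypothetical fragment of the cfengine-distributed sudoers file:
    # members of the LDAP group "webops" may restart nginx as root on the prod web hosts
    %webops  prod-web-* = (root)  /usr/sbin/service nginx restart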
Password authentication is completely disabled on our production servers, so SSH key auth is mandatory. Policy encourages using a separate key for each laptop/desktop/whatever and using a passphrase on all keys to reduce the impact of a lost/stolen laptop.
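Generating a separate passphrase-protected key per device is a one-liner, something along these lines (the file name and comment are just examples; ssh-keygen will prompt for the passphrase):

    ssh-keygen -t ed25519 -C "jsmith@laptop" -f ~/.ssh/id_ed25519_laptop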
We also have a bastion host that is used to access hosts on the production network, allowing us to have very restrictive firewall rules around that network. Most engineers have some special SSH config to make this transparent:
    Host prod-*.example.com
        User jsmith
        ForwardAgent yes
        ProxyCommand ssh -q bastion.example.com "nc %h %p"
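Newer OpenSSH versions can do the same thing without needing nc on the bastion, e.g. with ProxyJump (or ProxyCommand ssh -W %h:%p); a rough equivalent, not necessarily what anyone here actually runs, would be:

    Host prod-*.example.com
        User jsmith
        ProxyJump bastion.example.com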
Adding a new key or removing an old one requires a bit of ceremony in this setup. I'd argue that adding a key should be an operation that leaves an audit trail and is visible to everyone. However, due to the overhead involved, I think people sometimes neglect to remove an old key when it is no longer needed, and we have no real way to track that except to clean up when an employee leaves the company. It also creates some additional friction when onboarding a new engineer, since they need to generate a new key and have it pushed out to all hosts before they can be completely productive.
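Since the key files live in git, checking when a particular user's keys changed is just a matter of looking at the history of their authorized_keys file, e.g. (the path here is hypothetical, whatever layout the repository actually uses):

    git log -p -- keys/jsmith/authorized_keys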
However, the biggest benefit is having a separate username for each user, which makes it easy to do more granular access control when we need it and gives each user an identity that shows up in audit logs, which can be really useful when trying to track a production issue back to a sysadmin action.
It is bothersome under this setup to have automated systems that take action against production hosts, since their "well-known" SSH keys can serve as an alternative access path. So far we've just made the user accounts for these automated systems have only the minimal access they need to do their jobs and accepted that a malicious user (who must already be an engineer with production access) could also do those same actions semi-anonymously using the application's key.
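One partial mitigation is to pin such an automation key down with authorized_keys options, so it can only run its one job from its one host; the address, command, and key below are made up for illustration:

    # in the automation account's authorized_keys on the production hosts
    from="10.1.2.3",command="/usr/local/bin/deploy.sh",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... deploy@ci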
Solution 2:
Personally I like the idea of each member of staff having one key on a dedicated SSH bastion machine, on which they have a basic user account. That user account has one SSH key which grants access to all the servers they need to use. (These other servers should also be firewalled off so that only SSH access from the bastion machine is allowed.)
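For example, restricting the internal servers so that only the bastion can reach their SSH port can be as simple as a couple of iptables rules (the bastion address here is made up):

    # on each internal server: allow SSH only from the bastion, drop everything else
    iptables -A INPUT -p tcp --dport 22 -s 10.0.0.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP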
Then on their everyday work machines, laptops, tablets, etc. they can make their own choice of having one key shared between them or multiple keys.
As a systems admin on that network you have a minimum number of keys to look after (one per dev), you can easily monitor SSH access through the network (as it all routes through the bastion machine), and whether the devs want multiple keys or just one shared amongst their machines is no real issue, as you only have one machine to update (unless the bastion's SSH keys are compromised, but tbh that is far more unlikely than one of the users' keys).
Solution 3:
I've had the situation where I've needed to provide SSH key access for a team of 40 developers to ~120 remote customer servers.
I controlled access by forcing the developers to connect through a single "jump host". From that host, I generated private/public keys and pushed them out to the customer servers. If the developers needed access from a laptop, they could use the same keypair on their local system.
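Pushing a key out to a customer server from the jump host is a couple of commands, roughly (hostnames and file names are placeholders):

    ssh-keygen -t ed25519 -f ~/.ssh/customer_key
    ssh-copy-id -i ~/.ssh/customer_key.pub deploy@customer-server.example.com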
Solution 4:
I personally would go with per-user keys; then you instantly have accountability and can set restrictions far more easily. I don't know what other people think?
Solution 5:
One approach I've heard of, but not used myself, is for each user to have a package (e.g. .deb, .rpm) which contains their SSH public key config, as well as any dotfiles they like to customise (.bashrc, .profile, .vimrc, etc.). This is signed and stored in a company repository. This package could also be responsible for creating the user account, or it could supplement something else creating the account (cfengine/puppet etc.) or a central auth system like LDAP.
These packages are then installed onto hosts via whatever mechanism you prefer (cfengine/puppet, a cron job, etc.). One approach is to have a metapackage which has a dependency on all of the per-user packages.
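I haven't built these myself, but a per-user .deb could be little more than the files to drop into the user's home directory plus a control file, with the metapackage simply declaring a Depends on each per-user package; everything below is an invented example:

    jsmith-account/
        DEBIAN/control                    # Package: jsmith-account, Architecture: all, ...
        DEBIAN/postinst                   # creates the account if it does not already exist
        home/jsmith/.ssh/authorized_keys
        home/jsmith/.bashrc
        home/jsmith/.vimrc

    dpkg-deb --build jsmith-account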
If you want to remove a public key, but not the user, then the per-user package is updated. If you want to remove a user, you remove the package.
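On a given host that then just falls out of the package manager, e.g. something like:

    apt-get install jsmith-account   # upgrades to the latest version, picking up key changes
    apt-get remove jsmith-account    # removes the user's key and dotfiles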
If you have heterogeneous systems and have to maintain both .rpm and .deb files, then I can see this being a bit annoying, although tools like alien might make that somewhat easier.
Like I say, I've not done this myself. The benefit in this approach to me is that it supplements a central LDAP system and central management of user accounts, in that it allows a user to easily update his package to include his .vimrc file, for example, without having to have that file managed by tools like puppet, which a user may not have access to.