How can I implement ansible with per-host passwords, securely?
You've certainly done your research...
In all of my experience with ansible, what you're looking to accomplish isn't supported. As you mentioned, ansible states that it does not require passwordless sudo, and you are correct, it does not. But I have yet to see any method of using multiple sudo passwords within ansible without, of course, running multiple configs.
So, I can't offer the exact solution you are looking for, but you did ask...
"So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing password across hosts or enabling root SSH login all seem rather drastic reductions in security."
I can give you one view on that. My use case is 1k nodes in multiple data centers supporting a global SaaS firm, for which I have to design and implement some insanely tight security controls due to the nature of our business. Security is always a balancing act: more usability means less security. That trade-off is no different whether you are running 10 servers, 1,000, or 100,000.
You are absolutely correct not to use root logins either via password or ssh keys. In fact, root login should be disabled entirely if the servers have a network cable plugged into them.
Let's talk about password reuse. In a large enterprise, is it reasonable to ask sysadmins to have a different password on each node? For a couple of nodes, perhaps, but my admins/engineers would mutiny if they had to keep different passwords on 1,000 nodes. Implementing that would be nearly impossible as well: each user would have to store their own passwords somewhere, hopefully a KeePass database, not a spreadsheet. And every time you put a password in a location where it can be pulled out in plain text, you have greatly decreased your security. I would much rather have them know, by heart, one or two really strong passwords than have to consult a KeePass file every time they needed to log in to or invoke sudo on a machine.
So password reuse and standardization are completely acceptable, and standard, even in a secure environment; otherwise LDAP, Keystone, and other directory services wouldn't need to exist.
When we move to automated users, ssh keys work great to get you in, but you still need to get through sudo. Your choices are a standardized password for the automated user (which is acceptable in many cases) or to enable NOPASSWD as you've pointed out. Most automated users only execute a few commands, so it's quite possible and certainly desirable to enable NOPASSWD, but only for pre-approved commands. I'd suggest using your configuration management (ansible in this case) to manage your sudoers file so that you can easily update the password-less commands list.
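To sketch what that can look like (the automation user name and command list below are purely illustrative, not from the question), a sudoers drop-in managed by ansible might be deployed like this, with visudo validating the file before it is installed:

```yaml
# Hypothetical example: allow an automation user to run only a short list of
# pre-approved commands without a password. Deployed as a sudoers drop-in and
# checked with visudo before being put in place.
- name: Deploy sudoers drop-in for the automation user
  copy:
    dest: /etc/sudoers.d/automation
    content: |
      Cmnd_Alias AUTOMATION_CMDS = /usr/bin/systemctl restart myapp, /usr/sbin/service nginx reload
      automation ALL=(root) NOPASSWD: AUTOMATION_CMDS
    owner: root
    group: root
    mode: "0440"
    validate: "visudo -cf %s"
```

Keeping that file under configuration management means widening or narrowing the pre-approved command list becomes a normal, reviewable change rather than a one-off edit on each box.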
Now, there are some steps you can take once you start scaling to further isolate risk. While we have 1,000 or so nodes, not all of them are 'production' servers; some are test environments, etc. Not all admins can access production servers; those that can, though, use the same SSO username/password or key as they would elsewhere. But automated users are a bit more locked down: for instance, an automated tool that non-production admins can access has a user and credentials that cannot be used in production. If you want to launch ansible on all nodes, you'd have to do it in two batches, once for non-production and once for production.
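For example, assuming the inventory defines groups named nonproduction and production (the group names here are illustrative), the two batches are just two limited runs of the same play:

```
# Illustrative: run the same play against each environment separately
ansible-playbook site.yml --limit nonproduction
ansible-playbook site.yml --limit production
```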
We also use puppet, though; since it's an enforcing configuration management tool, most changes to all environments get pushed out through it.
Obviously, if that feature request you cited gets reopened/completed, what you're looking to do would be entirely supported. Even then though, security is a process of risk assessment and compromise. If you only have a few nodes that you can remember the passwords for without resorting to a post-it note, separate passwords would be slightly more secure. But for most of us, it's not a feasible option.
From Ansible 1.5 onwards, it is possible to use an encrypted vault for host_vars and other variables. This does at least enable you to store a per-host (or per-group) ansible_sudo_pass variable securely. Unfortunately, --ask-vault-pass will only prompt for a single vault password per ansible invocation, so you are still constrained to a single vault password for all the hosts that you'll use together.
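As a minimal sketch of that layout (the host name and password below are placeholders), the per-host file is created with ansible-vault so it is encrypted at rest, and plays are then run with --ask-vault-pass:

```yaml
# host_vars/web01.example.com -- created/edited via `ansible-vault create` or
# `ansible-vault edit`, so only the ciphertext is stored in the repository.
# The host name and password here are placeholders.
ansible_sudo_pass: "this-hosts-sudo-password"
```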
Nevertheless, for some uses this may be an improvement over having a single sudo password across multiple hosts, as an attacker without access to your encrypted host_vars would still need a separate sudo password for every machine (or machine group) that he or she attacks.
With Ansible 1.5, it is possible to set the ansible_sudo_pass variable using lookup('password', …):
ansible_sudo_pass: "{{ lookup('password', 'passwords/' + inventory_hostname) }}"
I find this more convenient than using files in host_vars/ for several reasons:
- I actually use with_password: "passwords/{{ inventory_hostname }} encrypt=sha256_crypt" to provision the passwords for the deploy remote user (which is then needed for sudo), so they are already present in the files (although doing these plaintext lookups loses the salt value stored in the file when the hashed value is generated); a fuller task sketch follows this list.
- This keeps just the passwords in the file (no ansible_sudo_pass: known plaintext) for some epsilon increase in cryptographic security. More significantly, it means that you aren't encrypting all the other host-specific variables, so they can be read without the vault password.
- Putting the passwords in a separate directory makes it easier to keep the files out of source control, or to use a tool like git-crypt to store them in encrypted form (you can use this with earlier Ansible versions that lack the vault feature). I use git-crypt, and since I only check out the repository in decrypted form on encrypted filesystems, I don't bother with the vault and thus don't need to type a vault password. (Using both would of course be more secure.)
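For completeness, here is a sketch of that provisioning step; the deploy user name and passwords/ directory match the setup above, but the task wrapping the lookup is illustrative rather than a verbatim copy of my playbook:

```yaml
# Illustrative: create/update the deploy user, generating (or reusing) the
# per-host plaintext password under passwords/ and storing only the
# sha256_crypt hash on the target host.
- name: Set deploy user's password from the per-host password file
  user:
    name: deploy
    password: "{{ item }}"
  with_password: "passwords/{{ inventory_hostname }} encrypt=sha256_crypt"
```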
You can also use the lookup function with ansible_ssh_pass; this may even be possible with earlier versions of Ansible that don't have ansible_sudo_pass.
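For example (using the same hypothetical passwords/ layout as above):

```yaml
ansible_ssh_pass: "{{ lookup('password', 'passwords/' + inventory_hostname) }}"
```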