Multiple Linux sysadmins working as root
In our team we have three seasoned Linux sysadmins who administer a few dozen Debian servers. Until now we have all worked as root using SSH public key authentication. But we had a discussion about the best practice for that scenario and couldn't agree on anything.
Everybody's SSH public key is put into ~root/.ssh/authorized_keys2
- Advantage: easy to use, SSH agent forwarding works easily, little overhead
- Disadvantage: missing auditing (you never know which "root" made a change), accidents are more likely
Using personalized accounts and sudo
That way we would log in with personalized accounts using SSH public keys and use sudo to perform single tasks with root permissions. In addition, we could add ourselves to the "adm" group, which allows us to view log files. (A rough sketch of this setup follows the list below.)
- Advantage: good auditing, sudo prevents us from doing idiotic things too easily
- Disadvantage: SSH agent forwarding breaks, it's a hassle because barely anything can be done as non-root
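As a rough sketch, on Debian that setup amounts to something like the following (the user name is a placeholder; the "sudo" group line assumes the stock %sudo rule in /etc/sudoers):
adduser userXXX          # create the personal account
adduser userXXX adm      # "adm" group members can read most logs under /var/log
adduser userXXX sudo     # "sudo" group members may run commands as root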
Using multiple UID 0 users
This is a unique proposal from one of the sysadmins. He suggests creating three users in /etc/passwd, all having UID 0 but different login names. He claims that this is not actually forbidden and would allow everyone to be UID 0 while still being auditable. (A sketch of what that looks like follows the list below.)
- Advantage: SSH agent forwarding works, auditing might work (untested), no sudo hassle
- Disadvantage: feels pretty dirty - couldn't find it documented anywhere as an allowed way
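For illustration, the proposal boils down to /etc/passwd entries along these lines (login names are hypothetical):
root:x:0:0:root:/root:/bin/bash
alice:x:0:0:Alice:/home/alice:/bin/bash
bob:x:0:0:Bob:/home/bob:/bin/bash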
What would you suggest?
Solution 1:
The second option is the best one IMHO: personal accounts, sudo access. Disable root access via SSH completely. We have a few hundred servers and half a dozen system admins; this is how we do it.
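For completeness, disabling root logins over SSH is a one-line change; a quick sketch for a Debian box (as in the question):
# in /etc/ssh/sshd_config
PermitRootLogin no
# then reload the SSH daemon:
service ssh reload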
How does agent forwarding break, exactly? Also, if prefixing every task with sudo is such a hassle, you can invoke a shell with sudo -s or switch to a root shell with sudo su -.
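In case the distinction matters to you, a quick sketch of how those two invocations differ:
sudo -s      # starts a shell with root privileges, without simulating a full login
sudo su -    # simulates a full root login: root's environment, PATH and home directory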
Solution 2:
With regard to the 3rd suggested strategy, other than perusal of the useradd -o -u userXXX options as recommended by @jlliagre, I am not familiar with running multiple users with the same uid (hence if you do go ahead with that, I would be interested if you could update the post with any issues, or successes, that arise...).
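For reference, a minimal sketch of that non-unique-UID invocation (the user name is a placeholder, and per the above this is untested on my side):
useradd -o -u 0 -g 0 -m -d /home/userXXX -s /bin/bash userXXX
# -o permits a duplicate (non-unique) UID; -u 0 makes the account a second root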
I guess my first observation regarding the first option, "Everybody's SSH public key is put into ~root/.ssh/authorized_keys2", is that unless you are absolutely never going to work on any other systems, then at least some of the time you are going to have to work with user accounts and sudo.
The second observation would be that if you work on systems that aspire to HIPAA or PCI-DSS compliance, or stuff like CAPP and EAL, then you are going to have to work around the issues of sudo, because:
- It is an industry standard to provide non-root individual user accounts that can be audited, disabled, expired, etc., typically using some centralized user database.
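On Debian, for instance, much of that audit trail comes for free, since sudo logs every invocation via syslog:
grep sudo /var/log/auth.log    # who ran what as root, and when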
So: using personalized accounts and sudo it is.
It is unfortunate that, as a sysadmin, almost everything you will need to do on a remote machine is going to require some elevated permissions; it is also annoying that most of the SSH-based tools and utilities are busted while you are in sudo. Hence I can pass on some tricks that I use to work around the annoyances of sudo that you mention. The first problem is that if root login is blocked using PermitRootLogin=no, or if root does not have your SSH key, then SCPing files becomes something of a PITA.
Problem 1: You want to scp files from the remote side, but they require root access; however, you cannot log in to the remote box as root directly.
Boring Solution: copy the files to your home directory, chown, and scp down.
ssh userXXX@remotesystem
sudo su -
cp /etc/somefiles /home/userXXX/somefiles
chown -R userXXX /home/userXXX/somefiles
# then use scp to retrieve the files from the remote host
Very boring indeed.
Less Boring Solution: sftp supports the -s sftp_server flag, hence you can do something like the following (if you have configured password-less sudo in /etc/sudoers):
sftp -s '/usr/bin/sudo /usr/libexec/openssh/sftp-server' \
userXXX@remotehost:/etc/resolv.conf
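(Note that the sftp-server path above is the Red Hat one; on Debian the binary typically lives at /usr/lib/openssh/sftp-server.) The password-less part would be a sudoers entry roughly like this, assuming the Debian path:
userXXX ALL=(root) NOPASSWD: /usr/lib/openssh/sftp-server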
(You can also use this hack-around with sshfs, but I am not sure it's recommended... ;-)
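If you do want to experiment with that, a sketch of the sshfs variant (same caveats; it assumes the Debian sftp-server path and an existing mountpoint /mnt/remote-etc):
sshfs -o sftp_server='/usr/bin/sudo /usr/lib/openssh/sftp-server' \
    userXXX@remotehost:/etc /mnt/remote-etc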
If you don't have password-less sudo rights, or the method above is broken for some configuration reason, I can suggest one more less boring file transfer method to access remote root files.
Port Forward Ninja Method:
Log in to the remote host, but specify that the remote port 3022 (can be anything free and unprivileged, i.e. >1024) is to be forwarded back to port 22 on the local side. (Note that this assumes your local machine is itself running an SSH daemon on port 22.)
[localuser@localmachine ~]$ ssh userXXX@remotehost -R 3022:localhost:22
Last login: Mon May 21 05:46:07 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
Get root in the normal fashion...
-bash-3.2$ sudo su -
[root@remotehost ~]#
Now you can scp the files in the other direction, avoiding the boring step of making an intermediate copy of the files:
[root@remotehost ~]# scp -o NoHostAuthenticationForLocalhost=yes \
-P3022 /etc/resolv.conf localuser@localhost:~
localuser@localhost's password:
resolv.conf 100%
[root@remotehost ~]#
Problem 2: SSH agent forwarding: If you load the root profile, e.g. by specifying a login shell, the necessary environment variables for SSH agent forwarding, such as SSH_AUTH_SOCK, are reset; hence SSH agent forwarding is "broken" under sudo su -.
Half-baked answer:
Anything that properly loads a root shell is going to rightfully reset the environment. However, there is a slight work-around you can use when you need BOTH root permission AND the ability to use the SSH agent, AT THE SAME TIME. This achieves a kind of chimera profile that should really not be used, because it is a nasty hack, but it is useful when you need to SCP files from the remote host, as root, to some other remote host.
Anyway, you can allow your user to preserve their environment variables by setting the following in sudoers:
Defaults:userXXX !env_reset
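A cleaner variant, if all you need is the agent socket, is to whitelist just that one variable instead of disabling env_reset wholesale:
Defaults:userXXX env_keep += "SSH_AUTH_SOCK"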
Either way, this allows you to create nasty hybrid login environments like so:
Log in as normal;
[localuser@localmachine ~]$ ssh userXXX@remotehost
Last login: Mon May 21 12:33:12 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
-bash-3.2$ env | grep SSH_AUTH
SSH_AUTH_SOCK=/tmp/ssh-qwO715/agent.1971
Create a bash shell that runs /root/.profile and /root/.bashrc, but preserves SSH_AUTH_SOCK:
-bash-3.2$ sudo -E bash -l
So this shell has root permissions and root's $PATH (but a borked home directory...):
bash-3.2# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=user_u:system_r:unconfined_t
bash-3.2# echo $PATH
/usr/kerberos/sbin:/usr/local/sbin:/usr/sbin:/sbin:/home/xtrabm/xtrabackup-manager:/usr/kerberos/bin:/opt/admin/bin:/usr/local/bin:/bin:/usr/bin:/opt/mx/bin
But you can use that invocation to do things that require remote sudo root, but also SSH agent access, like so:
bash-3.2# scp /root/.ssh/authorized_keys ssh-agent-user@some-other-remote-host:~
/root/.ssh/authorized_keys 100% 126 0.1KB/s 00:00
bash-3.2#