System updates for many servers
We have many servers and want to keep them all up to date. Currently, one of the sysadmins goes from server to server and runs aptitude update && aptitude upgrade, which is still not ideal.
I am now looking for a better, smarter solution. Can Puppet do this job? How do you do it?
Solution 1:
You can use the exec type, for example:

    exec { "upgrade_packages":
      command => "apt-get upgrade -q=2",
      path    => "/usr/local/bin/:/bin/:/usr/bin/",
      # path  => [ "/usr/local/bin/", "/bin/" ],  # alternative syntax
    }
To be honest, I have not tried it myself, but I think you just need to create a new module that includes such an exec definition.
The apt-get upgrade command is interactive. To make it run quietly, you can add the option -q=2, as shown above.
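For a fully unattended run you usually also want apt-get to answer "yes" and to suppress debconf prompts. A hedged sketch using the standard Debian environment variable and flags:

```shell
# Hedged sketch: make the upgrade fully non-interactive on Debian/Ubuntu.
export DEBIAN_FRONTEND=noninteractive  # suppress debconf prompts
apt-get update -q=2                    # refresh package lists quietly
apt-get upgrade -q=2 -y                # -y answers "yes" to the upgrade prompt
```

These are the lines you would put in the exec's command (or a wrapper script it calls).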
Solution 2:
If all your hosts are Debian, you can try the unattended-upgrades package:
http://packages.debian.org/sid/unattended-upgrades
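Installing it and turning on the periodic runs is a small amount of standard APT configuration; a minimal sketch (the paths and keys below are the stock Debian ones):

```shell
# Hedged sketch: install unattended-upgrades and enable periodic runs.
apt-get install -y unattended-upgrades
cat > /etc/apt/apt.conf.d/20auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```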
Here we have been using Puppet to manage our Debian virtual machines; with Puppet we are able to enable and manage the unattended-upgrades configuration on all servers.
Recently our team has been testing the MCollective tool for running commands on all servers, but using MCollective requires Ruby skills.
[s] Guto
Solution 3:
I would recommend going with Puppet, facter and MCollective.
MCollective is a very nice framework where you can run commands over a series of hosts (in parallel), using facter facts as filters.
Add to that a local proxy/cache and you'd be well set for server management.
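To give a flavour of fact-based filtering, something like the following (the package agent and the exact fact names are assumptions about your setup):

```shell
# Hedged sketch: target only Debian hosts via a facter fact filter.
mco ping --with-fact operatingsystem=Debian     # list the hosts that match
# With the (optional) package agent installed, update one package everywhere:
mco package update openssl --with-fact operatingsystem=Debian
```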
Solution 4:
Use a tool that is made to run a single command on multiple servers. And by that I do not mean having a kazillion terminals open with Terminator or ClusterSSH, but instead having a single terminal to a management server running a tool suitable for the job.
I would recommend func, Salt or MCollective in this context. If you already have Puppet, go for MCollective (it integrates nicely with Puppet). If you don't, and your machines run an old Python, you might enjoy func. If your Python is new, try Salt. All these tools run the command specified on the command line asynchronously, which is a lot more fun than a sequential ssh loop or running the same aptitude commands in umpteen Terminator windows on umpteen servers.
You'll definitely love Salt.
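To give a flavour of Salt in this role, a hedged sketch run from the master (assumes a salt master and minions are already set up):

```shell
# Hedged sketch: upgrade packages on all minions in parallel from the Salt master.
salt '*' pkg.upgrade                   # distribution-aware full upgrade
salt '*' cmd.run 'apt-get -s upgrade'  # or run an arbitrary command (-s = simulate)
```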
Solution 5:
So I guess there are many things which contribute to a good solution:
- Bandwidth
- Ease of administration
- Detailed logging in case something screws up.
Bandwidth: Basically, two alternatives for saving bandwidth come to mind:
- Setting up a Debian mirror and configuring all your clients to use it; see http://www.debian.org/mirror/ for more details. (I would recommend this.)
- Setting up a proxy (apt-cacher, apt-proxy or Squid) with an increased cache size, so all your clients can benefit from it
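For the proxy variant, pointing the clients at the cache is one line of APT configuration; a sketch (the proxy hostname is illustrative; 3142 is apt-cacher's default port):

```shell
# Hedged sketch: send all APT traffic through a local cache (hostname illustrative).
echo 'Acquire::http::Proxy "http://apt-cacher.example.local:3142";' \
    > /etc/apt/apt.conf.d/01proxy
```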
Administration: I would configure a parallel shell like PDSH, PSSH or GNU Parallel and issue the command on all clients, after first testing it on an example machine. Then it is not very likely to fail on all the others. Alternatively, you could consider a cron job on all clients, but then a failure happens unattended, so I would prefer the first solution.
If you are concerned about the simultaneity of upgrades, you could schedule your commands with at.
Logging: Since parallel shells let you redirect output, I would combine stderr and stdout and write them to a logfile.
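Putting the administration and logging pieces together, a hedged pdsh sketch (the hostnames and log path are illustrative):

```shell
# Hedged sketch: run the upgrade on 20 hosts in parallel and keep a full log.
pdsh -w 'server[01-20]' 'apt-get update && apt-get -y upgrade' 2>&1 \
    | tee /var/log/mass-upgrade.log
```

pdsh prefixes each output line with the host it came from, so the combined log stays readable afterwards.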