Automated "yum update" to keep server secure - pros and cons?
Solution 1:
It Depends
In my experience with CentOS, it's pretty safe as long as you're only using the CentOS base repositories.
Should you expect a failed update once in a while? Yes, on the same level that you should expect a failed hard drive or a failed CPU once in a while. You can never have too many backups. :-)
The nice thing about automated updates is that you get patched (and therefore more secure) faster than you would doing it manually.
Manual patching always seems to get pushed off or treated as "low priority" next to so many other things, so if you're going to go the manual route, SCHEDULE TIME ON YOUR CALENDAR to do it.
I've configured many machines to do automatic yum updates (via a cron job) and have rarely had an issue. In fact, I don't recall ever having an issue with the BASE repositories. Every problem I can think of (off the top of my head, in my experience) has always been a 3rd-party repository situation.
That being said... I do have several machines that I update MANUALLY. For things like database servers and other EXTREMELY critical systems, I like to have a "hands on" approach.
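For reference, that kind of cron job doesn't need to be anything fancy. A minimal sketch (the log path and mail recipient below are placeholders, not my actual setup) would be a script dropped into /etc/cron.daily along these lines:

    #!/bin/bash
    # Sketch of an unattended nightly update: apply everything, keep a log, mail the result.
    LOG=/var/log/yum-autoupdate.log          # placeholder path
    /usr/bin/yum -y update > "$LOG" 2>&1
    # Mail the log so you at least see what changed overnight ("root" is a placeholder recipient)
    mail -s "yum update on $(hostname)" root < "$LOG"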
The way I personally decide is like this... I think through the "what if" scenario, then estimate how long it would take to either rebuild or restore from a backup and what (if anything) would be lost.
In the case of multiple web servers... or servers whose content doesn't change much... I go ahead and auto-update, because the amount of time to rebuild/restore is minimal.
In the case of critical database servers, etc., I schedule time once a week to look them over and patch them manually... because a rebuild/restore there is much more time consuming.
Depending on what servers YOU have in your network and how your backup/recovery plan is implemented, your decisions may be different.
Hope this helps.
Solution 2:
Pro: Your server is always at the latest patch level, so fixes are applied about as soon as they are released, sometimes before an exploit is circulating widely.
Con: Any code running on your server that uses features removed in later versions, any configuration file whose syntax changes, and any new security "feature" that blocks previously allowed (exploitable) behaviour can break things on that server without you knowing about it until someone calls you with a problem.
Best practice: Have the server send you an email when it needs to be updated. Take backups, or at least know how to roll back updates.
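A rough sketch of that notify-only approach, assuming a stock yum and working local mail (the recipient is a placeholder), is a small cron script like this:

    #!/bin/bash
    # Sketch: report pending updates without applying anything.
    PENDING=$(/usr/bin/yum -q check-update)
    # yum check-update exits with status 100 when updates are available
    if [ $? -eq 100 ]; then
        echo "$PENDING" | mail -s "Updates pending on $(hostname)" root
    fi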
Solution 3:
On top of what most people said here, I'd highly recommend signing up for the CentOS mailing list; they always post emails about patches and their priorities right before pushing them to the repositories. It's useful to know in advance which packages need to be upgraded.
My setup lets yum automatically update the system once a day, and yum sends me a mail with the packages that were installed or upgraded right afterwards. I also receive a mail (checked every 4 hours) when yum hits a conflict and needs manual intervention.
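A rough crontab sketch of that kind of schedule (the times, file name, and mail address are placeholders, not my exact configuration) could look like this:

    # /etc/cron.d/yum-auto (sketch)
    MAILTO=admin@example.com
    # Daily unattended update; cron mails the output, which lists what was installed or upgraded
    30 3 * * * root /usr/bin/yum -y update
    # Every 4 hours, report anything still pending (e.g. something needing manual attention);
    # with -q there is no output, and therefore no mail, when nothing is pending
    0 */4 * * * root /usr/bin/yum -q check-update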
So far everything has been running smoothly (for over 4 years now). The only time I got caught off guard was when yum upgraded the regular kernel (I virtualized my server), changed the grub configuration, and made the regular kernel the default; 2 weeks later, during maintenance, my system got rebooted and all of my virtual servers were gone for a few minutes until I intervened manually.
Other than that, I haven't really had any problems.
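If you want to guard against that particular kernel/grub surprise, one option (my suggestion here, not part of the setup described above) is to keep unattended yum runs away from kernel packages and to check the boot default before planned reboots:

    # Add to the [main] section of /etc/yum.conf so unattended runs never swap the kernel:
    #   exclude=kernel*
    # Kernel updates are then applied by hand; before a planned reboot, confirm which
    # kernel grub will actually boot:
    grubby --default-kernel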