How often should I reboot Linux servers?

You must reboot after a kernel update (unless you are using KSplice); anything else is optional. Personally, I reboot on a monthly cycle during a maintenance window to make sure the server and all services come back as expected. That way I can be reasonably certain that if I have to do an out-of-schedule reboot (e.g. for a critical kernel update), the system will come back up properly. Automated monitoring of servers and services (e.g. Nagios) also goes a long way toward helping this process: reboot, watch the lights go red, and then hopefully watch them all come back to green.
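If you want a quick way to spot when the running kernel has fallen behind the newest installed one, something like this works (just a rough sketch: the rpm line assumes an RPM-based distro, and the reboot-required flag file assumes Debian/Ubuntu with update-notifier-common installed):

$ uname -r                          # kernel you are actually running
$ rpm -q --last kernel | head -1    # newest kernel package installed
# on Debian/Ubuntu a flag file is dropped when a reboot is wanted:
$ [ -f /var/run/reboot-required ] && echo "reboot needed"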

P.S. If you do reboot regularly, you'll want to tune your fsck checks (i.e. the maximum mount count between checks) appropriately; otherwise a quick 2-minute reboot might take 30 minutes if the server starts fsck'ing a couple of terabytes of data. I typically set the mount count to 0 (tune2fs -c 0) and the interval between checks to 6 months or so, and then manually force an fsck every once in a while and reset the count.
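For example, something along these lines (just a sketch for ext2/ext3; substitute your own device for /dev/sda1):

$ tune2fs -c 0 -i 6m /dev/sda1      # no mount-count checks, 6-month time interval
# later, force a check by hand on an unmounted filesystem and reset the count:
$ fsck -f /dev/sda1
$ tune2fs -C 0 /dev/sda1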


I actually reboot my servers on a fairly regular basis: any time major configuration changes are made. It's important to know that in the event of an emergency the server software will come up without a hassle. The last thing you want is to be trying to recover from an outage while also having to mess with your server configuration because you didn't thoroughly test it when you set it up.


Linux servers never need to be rebooted unless you absolutely need to change the running kernel version. Most problems can be solved by changing a configuration file and restarting a service with an init script.
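For example, on a box using SysV-style init scripts (apache2 here is just an illustration, not a recommendation):

# edit the config, then restart only the affected service
$ vi /etc/apache2/apache2.conf
$ /etc/init.d/apache2 restart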

You need to watch out for reboots, though: if you changed anything "on the fly" without reflecting the change in the service's configuration file, those changes will not be applied after a reboot.

I usually reboot after scheduled system updates, though. It's generally not necessary, but I do the updates when nobody's in the office, so why not? There's often a kernel upgrade waiting by the time I get around to updating anyway.


Not really required; Linux memory handling is excellent. But if you have uptimes of that length, you're probably running a kernel with known vulnerabilities, so you might want to watch that.


I think you should reboot if there has been a recent kernel update OR a libc update. A lot of things are linked against libc, and it's not really possible to completely unload that library from memory and replace it with the new version unless you reboot.

For example, even basic things like /bin/ls and other things in /bin use libc. If you are just running a console and using bash, you are using libc.

$ ldd /bin/bash
        linux-gate.so.1 =>  (0xffffe000)
        libtermcap.so.2 => /lib/libtermcap.so.2 (0xb8029000)
        libdl.so.2 => /lib/libdl.so.2 (0xb8025000)
        libc.so.6 => /lib/libc.so.6 (0xb7ed9000)
        /lib/ld-linux.so.2 (0xb804b000)

$ ldd /bin/ls
        linux-gate.so.1 =>  (0xffffe000)
        librt.so.1 => /lib/librt.so.1 (0xb7f3a000)
        libacl.so.1 => /lib/libacl.so.1 (0xb7f33000)
        libc.so.6 => /lib/libc.so.6 (0xb7de7000)
        libpthread.so.0 => /lib/libpthread.so.0 (0xb7dd0000)
        /lib/ld-linux.so.2 (0xb7f61000)
        libattr.so.1 => /lib/libattr.so.1 (0xb7dcc000)
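If you want to see which running processes are still holding on to the old (now deleted) copy of libc after an upgrade, a rough check is to look through /proc (run it as root so you can read every process's maps):

# prints /proc/PID/maps for each process still mapping a deleted libc
$ grep -l 'libc.*deleted' /proc/[0-9]*/maps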

And yes, if you change files in /etc/init.d that affect startup in some way, I would recommend a reboot. You don't want to find out that you made a small mistake in a startup file at the moment you need things up and running again quickly.
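A cheap way to catch that kind of mistake early is to exercise the script by hand right after editing it (myservice below is just a placeholder for whatever the script is called):

$ /etc/init.d/myservice stop
$ /etc/init.d/myservice start
$ /etc/init.d/myservice status   # if the script supports a status action
$ echo $?                        # non-zero usually means trouble

That still doesn't prove the boot ordering is right, which is why the occasional real reboot is worth doing.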

If a server has gone a long time without a reboot, there is really no way to be sure it will come up again properly: config files may well have been changed in the meantime, and nobody has rebooted it recently to confirm that it still boots cleanly. Also, if the server has a lot of pending updates and hasn't been rebooted in a long time, reboot before you apply them; otherwise, if something breaks, you can't tell whether it was caused by an old configuration change or by the new updates you just applied.

Lastly, if you reboot a critical server after a very long uptime, a forced fsck can keep it offline for a long time before it comes back up. You can use tune2fs to avoid this, but it's still a good idea to check the filesystems regularly. This is also why you shouldn't be in a position where you depend on just one server: if that one goes down, your whole website is gone. You should have another one on standby.
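To see how close a filesystem is to a forced check, you can read its counters (substitute your own device for /dev/sda1):

$ tune2fs -l /dev/sda1 | grep -iE 'mount count|last checked|check interval'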