What is it that makes Windows require constant rebooting while Linux does not?

Solution 1:

It's a consequence of engineering decisions.

On a Linux system, it's possible to delete a file that's still in use; the file system essentially uses reference counting, and an open file descriptor is simply another reference. Deleting (unlinking) a file removes its name from the directory, but the underlying data stays alive until the last reference is closed, at which point it's cleaned up. As a consequence, it's possible to replace core OS code and data files on disk without shutting down the programs that are using them and restarting (i.e., rebooting).
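A minimal sketch of this behavior, assuming Python 3 on Linux (or any POSIX system); the file contents are just illustrative:

    import os
    import tempfile

    # Create a file, keep it open, then unlink (delete) it.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"old version of a shared library\n")

    os.unlink(path)                 # directory entry is gone...
    print(os.path.exists(path))     # False: no name points to the file anymore

    # ...but the open descriptor still references the inode,
    # so the data remains readable until the last reference closes.
    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 100))         # b'old version of a shared library\n'

    os.close(fd)                    # last reference dropped; space is reclaimed

This is exactly what a package manager exploits: it can write the new version of a library under the old name while running processes keep using the old, now-nameless copy.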

On Windows, opening a file locks it in the file system by default, and it can't be deleted. This means that currently-running code can't be updated without a reboot. But it also means you always know exactly what version is running on your system; under the Linux model, it's possible to receive an important system software update, apply it successfully, and still not have it operational, because the old, un-updated version is still running in memory.
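The same sketch flipped around, assuming Python 3 on Windows (Python's file handles are opened without the FILE_SHARE_DELETE sharing mode by default, so the delete is refused):

    import os
    import sys
    import tempfile

    # Windows-only: deleting a file that another handle holds open
    # fails by default, because the handle lacks FILE_SHARE_DELETE.
    assert sys.platform == "win32", "this behavior is specific to Windows"

    fd, path = tempfile.mkstemp()
    os.write(fd, b"in-use system file\n")

    try:
        os.remove(path)             # the open handle blocks deletion
    except PermissionError as e:
        print("delete refused while file is open:", e)

    os.close(fd)
    os.remove(path)                 # succeeds once the handle is closed

This is why a Windows installer that needs to replace an in-use file typically schedules the replacement for the next boot, before the file is opened again.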

It's an engineering tradeoff, like most things in computing.

Solution 2:

It's a consequence of differing predictions about user expectations.

Linux systems are modeled after Unix systems, which ran on servers. Uptime was a bragging point in those communities, and anything that reduced uptime was bad. That attitude was a side effect of the expectation that a machine had multiple users, so downtime had to be scheduled and coordinated with all of them.

Windows was designed for the PC market. At the time it was introduced, knowing that you could quit one program and start another without rebooting was the mark of an experienced computer user. Rebooting simply wasn't seen as a cost worth designing around, so there was no reason not to use filenames as the primary identifier when designing NTFS.