Why do operating systems have an option to shut down? [duplicate]
I want to know why operating systems require us to shut down using an option. Why can't I just power off with a mechanical switch?
Will I damage the hardware or corrupt my data if I constantly shut down a computer without using the OS option?
To clarify: I want to use an Intel Compute Stick as a media player connected to a projector, so it will be controlled by an electrical switch. The computer will only have the video running and no other programs will be installed. The computer won't have internet access.
Solution 1:
It depends on what is happening with the system at the time you choose to suddenly cut the power. If the system is busy writing important data and you cut the power, you could potentially corrupt that data or the OS. A lot of things are going on that you don't really see. You mostly have to worry about something software-related breaking when you do a hard reset. As far as the hardware goes, you shouldn't encounter any problems. Knock on wood.
You do not want to make a habit of bypassing the shutdown procedure and cutting the power. It would just be a matter of time until something becomes corrupt and forces you to reinstall the OS.
In some cases, a hard reset is the only choice you have. If your computer locks up and you can't get it to do anything, what other option is there?
The "It's now safe to turn off your computer" screen originated on Windows 9x systems, where the message is displayed when Windows has successfully shut down to MS-DOS but is not configured to return to the prompt (COMMAND.COM) again. On systems with proper ACPI support and an ATX power supply, the PC may power down by itself instead.
In any situation where you're going to do a hard reset, be sure to shout "I am the lord of electricity!!!" at your PC. Show it who's boss.
Solution 2:
Computers are designed to be fast. That may include cheating. For instance, when a computer is supposed to write data, it might store the data in RAM instead of writing it to the hard drive right away. This is done because RAM is much faster.
Shutting down tells the computer to flush the buffers, meaning to write out anything it is still holding only in RAM, and to prepare for the system to lose electrical power. If you actually lose electrical power before all such data is properly written, you may lose data.
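As a rough illustration (a Python sketch; the file name is invented), this is roughly what "flushing the buffers" looks like from a single program's point of view:

```python
# Sketch showing why a sudden power cut can lose "written" data: write() may
# only put bytes into a RAM buffer; flush() + os.fsync() ask for them to
# actually reach the disk.
import os

with open("important.txt", "w") as f:
    f.write("data I care about\n")   # likely still sitting in a RAM buffer here
    # If power is cut at this point, the file on disk may still be empty.
    f.flush()                        # push the program's buffer to the OS
    os.fsync(f.fileno())             # ask the OS to push its cache to the disk
# Only after a successful fsync is it reasonable to assume the data survives a
# power loss (subject to the drive's own cache behaviour).
```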
The details of the filesystem volume's structure are handled by the operating system's filesystem-handling code (sometimes called the filesystem driver). Often, filesystems use tables. (Visually, you can think of these like charts.) Imagine if you were writing out a multiplication table, and it said: 2 4 6 8 10 12 14 1
First of all, every column has just one row. Multiplication problems are supposed to have three parts: two numbers that are multiplied (the multiplicand and multiplier) and an answer (the product). Here all we are seeing is a bunch of single numbers, so we don't even have one full example of a multiplication problem. What we have is useless.
Second, what we have is actually worse than useless. Sometimes, the only thing worse than missing information (causing you to make no progress while you figure out the information you need) is trusted misleading information that causes you to spend resources to proceed in a useless, bad direction. In this case, you have invalid data towards the end: a one instead of a 16. (The idea here is that the chart stopped being updated suddenly, before the entire number "16" was noted properly.) If you don't let the computer complete its charts correctly, then that can cause confusion. (If the computer is told to update some data, and it should write to position number sixteen... you don't want it to write to position number one!) The computer relies on tables quite similar in nature to this. Shutting down tells the operating system to try to wrap things up neatly, instead of leaving jobs half-finished.
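Here is a toy Python sketch of that "a one instead of a 16" situation. The list is just a stand-in for an on-disk table, but it shows why a partially written entry is worse than a missing one: it still looks perfectly valid when read back.

```python
# Simulate a table entry that was only partly written before power was lost.
table = ["2", "4", "6", "8", "10", "12", "14"]

new_entry = "16"
interrupted = new_entry[:1]   # pretend power died after the first character
table.append(interrupted)

print(table)           # ['2', '4', '6', '8', '10', '12', '14', '1']
print(int(table[-1]))  # 1 -- it parses fine, so nothing flags it as corrupt
```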
Another example is virtual memory. If a computer runs out of RAM, it can use space on a hard drive to keep track of details. For instance, maybe you have a fifty page document in a word processor. The computer is keeping track of the fact that the word processor is open, and keeps track of the first 12 pages, but the remaining 38 pages are stored on the hard drive, in what is called "virtual memory". When you shut down, the computer will go through the entire process of shutting down programs, which will free up some RAM, and eventually use the "virtual memory" to properly handle the word processor. If you simply lost electrical power, then the word processor stops running (because everything stops). Then, when the computer starts up, it sees the virtual memory has the data from the 38 pages of the document that was opened in a word processor. The computer doesn't even know that the data was being used by a word processor. Shutting down allows such things to be taken care of while the computer is able to keep track of these details.
Compared to Windows 95, MS-DOS was more resilient (less prone to problems) when it came to sudden power outages. (Some of Windows 95's increased vulnerability was because of its ability to multitask and handle virtual memory.) So a computer's susceptibility to invalid shutdowns, and even whether an official shutdown procedure is required at all, depends on which operating system is being used. Most modern operating systems are designed to require a proper shutdown, because operating systems are easier to design with such a requirement. There's no reason that has to be the case, though, and in fact some operating systems do allow a person to just power them off. As one example, a page on NanoBSD says "Everything is read-only at run-time — It is safe to pull the power-plug." As another example, resflash's home page has a bullet point saying "Power can be safely lost at any time." So there is no reason why a proper shutdown absolutely has to be a requirement that operating system designers impose when they design an operating system; it is simply a requirement that happens to be quite common.
Solution 3:
In the days of MS-DOS, killing power to the computer would generally cause the loss of any information which was held in RAM but not stored on disk, but would not affect information already stored on disk. The act of storing information on disk, however, will often render the old information unreadable slightly before the new version becomes readable. Loss of power between the time the old version is destroyed and the time the new version is written would leave one without any version of the information. If the information in question is something like a directory structure, that could leave large areas of the disk essentially inaccessible.
If one is using software which writes information to disk only when explicitly asked to do so, then provided one doesn't kill power immediately after asking the system to write to disk, one shouldn't accidentally clobber any information on disk. Modern systems, however, often have one or more tasks that may start writing information to disk at times the user doesn't necessarily expect. If the system happened to start writing some information just before the user killed power, that could result in disk corruption and data loss.
Part of the purpose of selecting "shutdown" is to eliminate the possibility of the system spontaneously starting any actions that write data to disk just as the user is about to kill power. Any actions which don't get triggered before the "You may now shut down your computer" message is shown can't get triggered until after the system is restarted, so there's no danger of something happening just as the user pulls the plug.
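For the kiosk-style setup in the question, here is a hedged sketch (Python on a Unix-like system; the function name is made up) of the minimum you would want to happen before an external switch cuts power:

```python
# os.sync() is the programmatic equivalent of the Unix `sync` command: it asks
# the kernel to write all buffered filesystem data out to disk.
import os

def prepare_for_power_cut():
    os.sync()
    # Note: this flushes writes that are already pending, but other running
    # programs can still start *new* writes afterwards. A real shutdown also
    # stops those programs first, which is exactly the guarantee described
    # above: nothing new can be triggered once the system has halted.

prepare_for_power_cut()
```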
Solution 4:
There are two main reasons that computer systems need an orderly shutdown:
Application state
Many applications have state that must be written to permanent storage. The obvious example is a database server, but even read-mostly applications such as Web or NTP servers may write logs or statistics which may be unintelligible if a write is interrupted.
It may be possible to alleviate this problem if the applications in question don't read or write files directly, but perform these operations via a transactional mechanism such as writing to a relational database.
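As a rough sketch of that idea (Python with the built-in sqlite3 module; the database file and table are invented for illustration), a transactional store either applies the whole update or none of it, so an interrupted write doesn't leave half-written application state behind:

```python
import sqlite3

conn = sqlite3.connect("app_state.db")
conn.execute("CREATE TABLE IF NOT EXISTS counters (name TEXT PRIMARY KEY, value INTEGER)")

with conn:   # "with" opens a transaction: commit on success, rollback on error
    conn.execute("INSERT OR REPLACE INTO counters (name, value) VALUES (?, ?)",
                 ("hits", 42))
# If power is lost mid-transaction, SQLite's journal lets it recover to the
# last committed state on the next open, instead of a half-updated table.
conn.close()
```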
Filesystem structure
As the operating system writes files on behalf of the applications, writes may be buffered until the disks catch up, meaning that applications' writes don't necessarily complete until quite some time afterwards. Power saving mechanisms tend to increase the delay here, so you have a trade-off between energy consumption and data safety.
Whilst data are being written to disk, there are points where the filesystem data are inconsistent. Modern filesystem implementations take care to minimise these periods, but they can't be eliminated entirely. For example, when a block is taken from the free list, there is a short window where it is neither allocated nor free. This consistency problem is why, after an unclean shutdown, an OS needs to perform a filesystem check on the next boot, examining all blocks to ensure they are correctly accounted for.
Journalling filesystems alleviate this to some extent, by recording intended changes into a log before actually performing them. Then the filesystem check can run much faster, by replaying all the complete log entries and discarding incomplete ones.
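Here is a deliberately simplified toy sketch of the journalling idea in Python (in-memory stand-ins; real filesystems such as ext4 or NTFS lay out their journals very differently): record the intended change first, then apply it, and on recovery replay entries that were fully journalled while discarding ones that were cut off mid-write.

```python
import json

journal = []                              # stand-in for the on-disk journal
data = {"free": [1, 2, 3], "used": []}    # stand-in for filesystem tables

def allocate_block(block):
    # 1. Write the intention to the journal *before* touching the real tables.
    journal.append(json.dumps({"op": "allocate", "block": block}))
    # 2. Perform the actual change. Power loss here is fine: the journal entry
    #    is complete, so recovery can redo it.
    data["free"].remove(block)
    data["used"].append(block)

def recover():
    for entry in journal:
        try:
            op = json.loads(entry)        # incomplete entries fail to parse...
        except ValueError:
            continue                      # ...and are simply discarded
        if op["op"] == "allocate" and op["block"] in data["free"]:
            data["free"].remove(op["block"])
            data["used"].append(op["block"])

allocate_block(2)
recover()                                 # replaying is harmless (idempotent)
print(data)                               # {'free': [1, 3], 'used': [2]}
```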
Filesystem consistency issues can be avoided by not having local disks and NFS-mounting the root filesystem, but the loss of cached writes is still a problem for these systems. The only systems I'm willing to hard power-off without shutdown are those that have their disks mounted read-only (mostly embedded systems such as my Empeg Car music player, but also a couple of diskless web-browsing terminals I have lying around for visitors).
TL;DR
Data writes to permanent storage must be completed before power-off. If you have no writeable storage, then removing the power is low risk.