Why is "Everything is a file" unique to the Unix operating systems?

I often hear people say "Unix's unique philosophy is that it treats everything as a file" or "In Unix, everything is a file". But I've never heard anyone explain why it is unique to Unix.

So, why is this unique to Unix? Do other operating systems such as Windows and macOS not operate on files?

And is it unique compared to other operating systems?


Solution 1:

So, why is this unique to Unix?

Typical operating systems, prior to Unix, treated files one way and treated each peripheral device according to the characteristics of that device. That is, if the output of a program was written to a file on disk, that was the only place the output could go; you could not send it to the printer or the tape drive. Each program had to be aware of each device used for input and output, and have command options to deal with alternate I/O devices.

Unix treats all devices as files, but with special attributes. To simplify programs, standard input and standard output are the default input and output devices of a program. So program output normally intended for the console screen could go anywhere, to a disk file or a printer or a serial port. This is called I/O redirection.
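The mechanics can be sketched in the shell (the file name here is illustrative):

```shell
# A program simply writes to standard output; the shell decides where
# that output actually goes.
echo "report line" > report.txt     # to a disk file
echo "report line" > /dev/null      # to a device file (discarded)
echo "report line" | wc -c          # into another program via a pipe
```

The program producing the output is the same in all three cases; only the redirection changes.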

Do other operating systems such as Windows and macOS not operate on files?

Of course all modern OSes support various filesystems and can "operate on files", but the distinction is in how devices are handled. I don't know about the Mac, but Windows does offer some I/O redirection.

And is it unique compared to other operating systems?

Not any more, really. Linux, for instance, has the same feature. Of course, once an OS adopts I/O redirection it tends to pick up other Unix features too, and ends up Unix-like in the end.

Solution 2:

The idea that "everything is a file" came from Multics. The designers of Unix based a lot of their work on predecessors, especially Multics. Indeed, a lot of things in computing are based on predecessors.

You can read up on the late Dennis Ritchie's work in the design of Unix for more. He referenced things that they "copied" from Multics, such as the tree-like file system, the command shell, and non-structuring of files. I'm not implying that the Unix folks stole from the Multics folks. For all intents and purposes, it was the same folks.

Solution 3:

Unique? No. Defining? Absolutely.

Having everything as a file or a device in a known hierarchy means you can use the same set of tools for everything. Plan 9 from Bell Labs takes this further with even hardware devices as files.

More importantly, this allows for two very simple and powerful concepts. Basic utilities that do One Thing Well (tm), which can be strung together with pipes as needed. Want to find something in a text file? Use cat to show it, pass it through grep, and you're cooking with gas. That's the real power of the 'Unix' way - specialised applications working together for massive amounts of flexibility.
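A minimal illustration of that pipeline (the log file and its contents are made up for the example):

```shell
# Create a small log, then string simple single-purpose tools together.
printf 'ok\nERROR: disk full\nok\n' > app.log
cat app.log | grep ERROR       # show only the interesting lines
cat app.log | grep -c ERROR    # a different tool on the end counts them
```

Neither cat nor grep knows anything about the other; the pipe is what composes them.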

Mac OS X also follows the Unix philosophy, but it's better hidden (an 'application' bundle is really a directory full of files), and in fact is a proper, certified Unix, descended from NeXT, which used bits of FreeBSD.

Windows, by contrast, keeps some components in binary form, such as the Event Viewer logs and the Registry; there are some speed advantages to that in those particular scenarios, but the files are not usable with general-purpose tools.

Solution 4:

Because of the special files. When people say "everything is a file in Unix", ordinary files and directories are not what they have in mind. Special files are a feature of Unix-like OSes, of which there are many, so strictly speaking the idea is not unique to Unix itself.

Special files serve many purposes. There are, for example, pipes, sockets and, most notably, device files. Pipes and sockets are communication streams between processes. Much of the functionality of the kernel's subsystems is made available to user space through device files.

Pipes and Sockets

Programs use them just as they'd use ordinary files. In fact, most of the time they don't even care what type of file they are using. That's why Unix commands can be combined in so many ways to form powerful new systems. (See I/O redirection in sawdust's answer.)
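A named pipe makes this concrete: it lives in the filesystem like any file, and an ordinary file tool reads from it without knowing it is a pipe (the name myfifo is arbitrary):

```shell
mkfifo myfifo                             # create a named pipe (FIFO)
echo "hello through a pipe" > myfifo &    # writer blocks until a reader opens
cat myfifo                                # cat reads it like a regular file
rm myfifo
```

cat needs no special option here; open() and read() on a FIFO look exactly like open() and read() on a file.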

Device Files

As previously mentioned, these act as interfaces to user space. For example, to eject the CD tray, a programmer would first open the corresponding device file. Another example: if you want your program to switch the virtual terminal, open /dev/console first.

What happens next is not sending mere characters to those files, but issuing ioctl()'s on them. Which ioctl's you can issue depends on the device; the console, for example, is documented in console_ioctl(4).
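Even without ioctl()'s, the "file" half of a device file is visible from the shell; these paths assume a typical Linux /dev layout:

```shell
# Device files answer to the same read/write calls as regular files.
head -c 16 /dev/urandom | wc -c    # read 16 bytes from the RNG device
echo "discarded" > /dev/null       # write to the bit-bucket device
wc -c < /dev/null                  # it reads back as an empty file: 0 bytes
```

The device-specific behaviour (ejecting trays, switching terminals) is what needs ioctl(); plain reads and writes come for free from the file abstraction.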

Solution 5:

I am probably going to get reamed for saying this, but I think that saying everything is a file in Unix is in fact a fallacy. What it really amounts to is two things.

  1. Files and devices (and a lot of other stuff) are objects that can be modeled by an interface comprising open, close, read, write, and control (ioctl) functions.
  2. The namespace for these objects is hierarchical, i.e. these objects are organized in a hierarchy.
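That shared interface is easy to demonstrate from the shell: one tool, three quite different kinds of object (the file name is illustrative):

```shell
# wc only ever calls read(); it neither knows nor cares what it reads from.
printf 'abc' > plain.txt
wc -c < plain.txt        # a regular file: 3 bytes
printf 'abc' | wc -c     # an anonymous pipe: 3 bytes
wc -c < /dev/null        # a device file: 0 bytes
```

This is the polymorphism described in point 1: the same open/read/close calls dispatch to whatever object sits behind the descriptor.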

A filesystem implements this namespace, along with the framework that dispatches the interface functions to these objects. The filesystem was first conceptualized to house files, but was then co-opted to organize other objects in the namespace hierarchy. It is an example of polymorphism from before object orientation was a thing.

There is no harm in just calling everything files. But in reality, they are these more generic objects (a file being one such object). From this perspective, this idea is not unique to Unix at all. A lot of other OSes implement such hierarchies of polymorphic objects.