Before operating systems existed, what concept was used to make computers work? [closed]

Operating systems have always been tightly related to computer architecture. An OS takes care of all input and output in a computer system; it manages users, processes, memory, printing, telecommunications, networking, and so on. It sends data to the disk, the printer, the screen, and other peripherals connected to the computer.

Prior to the introduction of the operating system:

What was used in computer systems to make them work?

What concept played the role of the operating system earlier in the evolution of the computer?


Solution 1:

Early computers ran one program at a time.

Programs were directly loaded from (for example) paper tape with holes punched in it.

You'd program the earliest computers by setting a large set of on-off switches.

(Images: Colossus, Atlas, and the Manchester machines.)


I am using the word "Computer" to mean the sort of device that exists nowadays in the billions. Of this vast number of computers, all but an insignificantly tiny number are digital electronic programmable computers with stored programs. I'm sure the original question is not about how people with the job title "Computer" spent their working day. In between those two types of computer, there is a progression of interesting devices not covered in this answer.

Solution 2:

Source: History of Operating Systems

Operating systems have evolved through a number of distinct phases or generations which correspond roughly to the decades.

The 1940s - First Generation

The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plugboards). Programming languages were unknown (not even assembly languages). Operating systems were unheard of.

The 1950s - Second Generation

By the early 1950s, the routine had improved somewhat with the introduction of punched cards. The General Motors Research Laboratories implemented the first operating system in the early 1950s for their IBM 701. The systems of the 1950s generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups, or batches.

Source: http://en.wikipedia.org/wiki/History_of_operating_systems

The earliest computers were mainframes that lacked any form of operating system.

Each user had sole use of the machine for a scheduled period of time and would arrive at the computer with program and data, often on punched paper cards and magnetic or paper tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed.

Programs could generally be debugged via a control panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.

Solution 3:

Going right back to the start of computing, you didn't have individual computer systems; instead you had mainframes.


These mainframes would run on punched cards, which would contain your program (and often your data). People would get allocated time on these systems, bring their cards along, and feed them into the machine to be processed. The machine would run the program until it finished, and then the next user would come along with their tape and cards.

Basically that is how it worked.

Solution 4:

1890-1950 - Operation inherent to the system

The very earliest computers had the equivalent of what an OS does now built into them, and you (the operator) were part of the operating system as well. You flipped the register switches (or used a punched card) and physically swapped bus wires (think of an old-fashioned telephone operator's switchboard), and memory was linked via physical wires directly to light bulbs (the monitor of the day) and printers (the long-term storage of the day), in such a way that program output would light up and print directly on the device as it was placed into the output memory buffer. No driver was needed for these things because, due to the way those physical wires were run, they 'just worked'. There was also no such thing as a monitor in those days; in fact, it would be a few decades into this era before digital numeric displays were invented, so that you could finally see, as decimal numbers, the values you had entered into the registers and the output. Printers ruled this entire era until monitors arrived. Devices were wired exactly as they needed to be to work correctly.

None of this really changed with the switch from mechanical (1890s) to electric analogue (1910s) to digital (1930s) machines. This 'plug and play' architecture was replaced with the interrupt system during this time and would not resurface again until the late nineties; of course, by then there would be a lot less plugging. With interrupts, devices were allowed to take CPU time, which allowed architectures that weren't directly tied to the hardware, but it took several generations for this to become the streamlined process we see in x86 (and newer) architectures; early systems often ran into horrible race conditions, hardware compatibility and delay problems, and other odd behaviours where interrupts were concerned. Because each machine used a radically different (and experimental) architecture in this period, nearly all devices were custom made for the machine they worked on.

1950-1973 - Operation within a system

This era saw the advent of most of the features we think of when we talk about a true operating system: debugging, programming languages, multiple users, multitasking, terminals, disk-type drives, networking, standardization of components, and so on. It brought a giant leap towards standardization, which meant more standardized devices, but each OS was still hand-crafted for each machine, so OS functionality was severely limited by whatever the engineers who designed that particular system decided they needed.

During this time there was a substantial grey area in what an operating system actually was, because different architectures handled things very differently, and a more general-purpose machine needs a lot more OS than a machine that includes dedicated hardware for the same jobs. The fact is that hardware is always going to be faster than software, and practically anything done in software can in theory be done in hardware (it is cost, flexibility, size, time, and so on that keep us from building almost pure hardware versions of everything to this day).

An OS was made for a particular computer or type of computer; it would not work elsewhere. Each new computer design needed all of its low-level OS software rewritten from scratch to work with that particular machine model. Near the end of this period a new OS emerged which would soon change this paradigm: UNIX, written at Bell Labs by Ken Thompson and Dennis Ritchie.

1973 - Operation between systems

A single program changed all of this, and it wasn't UNIX: it was the C compiler, written at Bell Labs by Dennis Ritchie alongside Ken Thompson's work on UNIX. Until this point, whenever you wrote code it was either machine code (code that the machine understands directly but which is not portable) or it was written in a language that compiled your code to byte code (code which is interpreted by another program as it runs). The huge difference C brought for operating systems was the ability to do what is known as cross-compiling into machine code. This meant that code could be written once and compiled to run natively across many different machine types, as long as a compiler had been written for that machine. Operating systems must ultimately run as machine code, because machine code is literally the only code that the machine knows.
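To make that portability concrete, here is a minimal sketch (my own illustration, not from the original answer): the same C source can be fed to a native compiler or to a cross-compiler targeting another architecture, and only the generated machine code differs. The compiler invocations in the comment are just examples.

```c
/* hello_portable.c - one C source, many machines.
 *
 * Compile natively:            cc hello_portable.c -o hello
 * Or cross-compile (example):  aarch64-linux-gnu-gcc hello_portable.c -o hello
 *
 * The source never changes; only the compiler's code generator does.
 */
#include <stdio.h>

int main(void)
{
    printf("Same C source, compiled to native machine code for each target.\n");
    return 0;
}
```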

I would say that it wasn't until Ken and Dennis first compiled the UNIX kernel using a C compiler that a true OS in the modern sense was born. Before that, an OS was either a physical object or simply a pre-initialized chunk of memory space designed specifically for a particular machine. Adding new devices to the system literally required the 'kernel' code to be rewritten. Now the UNIX OS that they had designed for one particular machine could be recompiled and run on other machines without rewriting everything: as long as a machine could bootstrap a C compiler, the rest of the OS could be written in relatively high-level C code.

Solution 5:

In the beginning, programs were hardwired into the computer, which would start running the program from a particular location immediately at power-up.

Then various forms of offline storage were invented: punched cards, tape, drums, even disks. These were much more flexible, but not directly accessible from the CPU: the program has to be loaded into memory before it can run. So you write a program whose job is to load your program. This is known as a loader, or bootstrap (from the expression "to pull yourself up by your bootstraps").
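As a rough sketch of what a loader does (my own toy example, not from the original answer): copy a program image from storage into memory at a known address, then transfer control to it. The tiny "instruction set" here, where 0x00 halts and any other byte is printed, is purely an assumption for illustration.

```c
/* toy_loader.c - conceptual sketch of a loader: read a program image
 * from "offline storage" (a file) into main memory, then hand control
 * to it. The file format and instruction set are invented for this toy.
 */
#include <stdio.h>
#include <stdlib.h>

#define MEM_SIZE 4096

static unsigned char memory[MEM_SIZE];   /* the machine's main memory */

/* "Execute" the loaded image: byte 0x00 halts, any other byte prints. */
static void run(size_t entry, size_t len)
{
    for (size_t pc = entry; pc < entry + len; pc++) {
        if (memory[pc] == 0x00)
            return;                      /* halt instruction */
        putchar(memory[pc]);             /* output instruction */
    }
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <program-image>\n", argv[0]);
        return 1;
    }

    FILE *tape = fopen(argv[1], "rb");   /* stand-in for paper tape */
    if (!tape) { perror("fopen"); return 1; }

    /* The loader's whole job: copy the image into memory at a known
     * address, then jump to it. */
    size_t loaded = fread(memory, 1, MEM_SIZE, tape);
    fclose(tape);

    run(0, loaded);
    return 0;
}
```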

As the system gets more complicated, you may have a simple loader load a more complex loader. Microcomputers are a good example: the normal tape loader was slow, so you would load a decompressor and fast-load the rest of the tape. Or disk speed-loaders, which doubled as copy-protection systems by doing non-standard things with the disk.

Or the pre-UEFI PC boot process: the processor starts executing in the BIOS, which loads the first sector off the disk and jumps to it. That code looks for an active partition and loads a bootloader from there, which in turn loads the operating system. Originally that operating system was MS-DOS, whose boot chain loaded IO.SYS and then COMMAND.COM as the shell; on the Windows NT family it was NTLDR, and on current versions of Windows it is BOOTMGR.
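As a small illustration of that first step (my own sketch, not part of the original answer): the BIOS only treats a 512-byte boot sector as bootable if its last two bytes are the signature 0x55 0xAA. The sketch below reads the first sector of a disk-image file whose path you supply and checks for that signature.

```c
/* mbr_check.c - read the first 512-byte sector of a disk image and
 * check the classic BIOS boot signature (0x55 0xAA at offsets 510-511),
 * which is what the BIOS looks for before jumping to the sector.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <disk-image>\n", argv[0]);
        return 1;
    }

    unsigned char sector[512];
    FILE *disk = fopen(argv[1], "rb");
    if (!disk) { perror("fopen"); return 1; }

    if (fread(sector, 1, sizeof sector, disk) != sizeof sector) {
        fprintf(stderr, "could not read a full 512-byte sector\n");
        fclose(disk);
        return 1;
    }
    fclose(disk);

    if (sector[510] == 0x55 && sector[511] == 0xAA)
        printf("Boot signature present: BIOS would load and jump to this sector.\n");
    else
        printf("No 0x55AA boot signature: BIOS would not treat this as bootable.\n");

    return 0;
}
```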