Why are the operating system and kernel treated separately in Linux? [closed]
The whole GNU/Linux system is built using a modular approach. You can usually upgrade (or, more generally, replace) a single module without touching the others. The module in question can be a bootloader, kernel, shell, command, desktop environment, GUI application, whatever…
Of course, this holds only as long as dependencies are managed correctly. In Ubuntu and the distributions around it, APT resolves dependencies automatically.
You can install another kernel version using the command:
sudo apt install linux-image-<version>
As long as APT allows it, you should be able to reboot into the selected kernel, be it generic, lowlatency, etc. Or you can build a kernel yourself, e.g. a Real-Time Linux kernel, and use it with your current system.
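For example, to see which kernel images are already installed side by side (several versions and flavours can coexist), a quick, non-destructive check looks like this:

dpkg --list 'linux-image-*' | grep '^ii'   # kernel image packages currently installed
ls /boot/vmlinuz-*                         # the corresponding boot files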
As you know, the kernel is an essential part of an OS; in GNU/Linux distributions you can easily update the kernel without touching other parts of the OS. Even so, you are still simply updating a part of your OS.
An operating system is made of two parts: kernel space and user space.
So yes, you can update your kernel space without touching your user space, provided the new version is compatible with your current user space.
And as for updating user space tools on their own: that is another yes.
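As a rough illustration of that split (outputs will of course differ on your system): the running kernel reports its own version, while user space components carry their own, independent versions:

uname -r                    # kernel space: version of the running kernel
lsb_release -d              # user space: the distribution release
bash --version | head -1    # user space: version of one individual tool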
When you run:
sudo apt-get upgrade
If an update is available for the kernel, you will get:
The following packages have been kept back:
linux-generic linux-headers-generic linux-image-generic
So you are only updating your user space. When you run something like:
sudo apt-get dist-upgrade
you are updating everything, including the kernel.
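If you just want to preview that difference without changing anything, APT can simulate both operations (the -s switch does a dry run, so no sudo is needed):

apt-get -s upgrade        # kernel packages show up under "kept back"
apt-get -s dist-upgrade   # the same packages are now scheduled for installation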
To upgrade only your kernel to a newer version, use something like:
$ apt-cache search "linux-image-[0-9]+.*-generic" | awk '{print $1}' | head -4
linux-image-4.4.0-21-generic
linux-image-4.10.0-14-generic
linux-image-4.10.0-19-generic
linux-image-4.10.0-20-generic
to find a list of available kernel images, then install the one you want as a new package, for example:
sudo apt install linux-image-4.10.0-14-generic
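After a reboot (picking the new entry in the GRUB menu if it is not the default), you can confirm that you are actually running the freshly installed kernel:

uname -r    # should now print 4.10.0-14-generic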
First, a few clarifications, because I sense you don't understand how GNU/Linux systems came into existence. Bear with me if this is nothing new for you:
"Kernel" is not just another program that runs, but it is the part of the OS providing the base functions: if you want to start a program (say, you type "ls" at the command line) the binary has to be loaded from disk (that includes some filesystem operations to locate it and some file handling to´read it), then a "process environment" is created: memory gets assigned, a process number is issued, etc., etc.. All the former activities (FS, reading from file, ...) are handled by system libraries, but the latter ones are kernel functions. In some sense the kernel "is the OS" and everything else is just decoration around it.
"Linux" is in fact (only!) a kernel with no other parts of an OS around. Linus Torvalds started writing it by taking Andrew Tanenbaums MINIX template OS kernel and completing it so that it was a fullblown and real workable kernel. To this day there is Linus (and many others who contribute/have contributed) who develop this kernel. This kernel is still very similar to UNIX, but NOT a UNIX kernel.
"GNU" started as an initiative to "make better" many common UNIX commands. I won't discuss if they succeeded or not, but they definitely wrote a lot of software and at one point had a collection of utility programs. They even started to develop a OS kernel of their own (HURD), which was based largely on UNIX, but was definitely different. But to this day HURD is in its early development and hardly a working solution. "GNU" btw. is short for "GNU (is) Not UNIX" - they tried to overcome some (perceived or real) limitations of UNIX with the intention of creating a successor to UNIX (again: i don't want to enter the discussion if they succeeded or not - i don't care if it is "better" or "worse", but it is definitely different!).
So, with a set of tools lacking a kernel and a kernel lacking a toolset, putting the two together was a natural development: GNU/Linux was created.
Still, to have a working (and workable) OS you need more than just a kernel and a toolset: you need a package management system, you need installation procedures, you need template configurations, you need ....
Several different people (or groups of them) came to this conclusion and used the GNU/Linux combination to create a GNU/Linux system of their liking, by adding exactly the things I spoke about above: they created a package manager, a packaging system, installation procedures and more. These different groups (or rather, the results of their efforts) are what the different distributions are. Today three major package managers are in use (apt for Debian and derived systems like *ubuntu, rpm for Red Hat and derived systems like Fedora, CentOS and more, pacman for Arch Linux), but all of them just manage packages of software that is (essentially) the same: what runs when you issue "ls" or "df" on a Debian system or a RHEL system comes from different packages, but essentially it is the GNU version of the "ls" (or "df") program, just differently packaged.
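You can check this yourself by asking the respective package manager which package a command belongs to; on both families the answer points at GNU coreutils (the exact path, /bin/ls or /usr/bin/ls, depends on the release):

dpkg -S /bin/ls        # Debian/Ubuntu: reports the coreutils package
rpm -qf /usr/bin/ls    # RHEL/Fedora/CentOS: also reports coreutils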
So, "in principle" you can update the kernel alone, like the people who created a distribution from various versions of all the software i spoke above did.
But, and this is a really big BUT: there is not only the kernel and some additional software, but a lot of other things to keep in mind, like system configuration tools (systemd, which some distributions use and some do not), network management tools like NetworkManager (which in turn depends on certain versions of the GNOME libraries), etc. A "distribution" is a rather complex thing, and chances are that if you try to update the kernel you will end up updating a lot of other things because of the many interdependencies.
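You can get a feeling for those interdependencies with APT itself; purely as an illustration (the exact output depends on your release):

apt-cache depends network-manager        # libraries and services NetworkManager pulls in
apt-cache rdepends linux-image-generic   # packages that in turn depend on the generic kernel meta package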
Still, and also "in principle" as above: you can create your own distribution by downloading all the sources, compiling them, finding a working set of version combinations, putting some packaging system into place (or using one of the existing ones), and so on, until you have a distributable, installable and configurable system. This is what the creators of distributions like Ubuntu do, and it's not a miracle, just a lot of complex work; so in reality most users shy away from that and use something they can get ready-made.
I hope this answers your question.