How can Docker run distros with different kernels?

How can Docker on a Debian host run, say, OpenSUSE in a container? OpenSUSE uses a different kernel, with separate modules. Also, older Debian releases shipped older kernels, so how can they run on a kernel version 3.10+? Older kernels have only older built-in functions, so how can an old distro use new features? What is "the trick" here?


Solution 1:

Docker never uses a different kernel: the kernel is always your host kernel.

If your host kernel is "compatible enough" with the software in the container you want to run it will work; otherwise it won't.
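A quick way to see this for yourself (a minimal check; the opensuse/leap image is one example, and the version strings shown are illustrative):

    $ uname -r                                # kernel on the Debian host
    4.19.0-21-amd64
    $ docker run --rm opensuse/leap uname -r  # "OpenSUSE" container
    4.19.0-21-amd64                           # reports the very same host kernel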

"Containers" Are Just Process Configuration

The key thing to understand is that a Docker container is not a virtual machine: it doesn't create a new virtual computer on which to run the software. Instead, Docker just runs processes in your existing OS, in a similar way to you just starting a process from the command line.
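You can observe this from the host side: a containerized process shows up in the host's process table like any other process. A minimal sketch (the container name demo is arbitrary):

    # start a long-lived process in a container...
    $ docker run -d --name demo alpine sleep 1000
    # ...and find it running as an ordinary process on the host
    $ ps aux | grep 'sleep 1000'
    # clean up afterwards
    $ docker rm -f demo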

The difference between a containerized process and an ordinary process is the restrictions put on the containerized process and changes to how it sees the environment around it. (These are passed on to any child processes started by the containerized process.) Typical restrictions and changes include:

  • Instead of using the host's root filesystem, mount a different filesystem on / (usually one supplied with the container's image). Parts of the host filesystem may be mounted underneath the new process's root filesystem, e.g. by using docker run -v /u/myprogram-data:/var/data/myprogram so that when the containerized process reads or writes /var/data/myprogram/file, it actually reads or writes /u/myprogram-data/file in the host filesystem.
  • Create a separate process space for the containerized process so that it can see only itself and its children (with ps or similar commands), but cannot see other processes running on the host.
  • Create a separate user namespace so that the users in the container are different from those in the host: e.g., UID 1234 in the containerized process will not be the same as UID 1234 in a non-containerized process.
  • Create a separate set of network interfaces with their own IP addresses, often using a "virtual router" and address translation between those and the host network interfaces. (E.g., the host, when it receives a packet on port 8080, forwards it to port 80 on the container processes' virtual network interface.)

All of this is done by facilities built into the kernel; you can do any of it yourself without Docker if you write a program to do the appropriate setup and set the appropriate parameters when it starts a new process.
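For instance, here is a minimal sketch of doing one piece of this by hand with util-linux's unshare tool (needs root; the exact flags available vary somewhat across versions):

    # start a shell in new PID and mount namespaces, with /proc remounted
    # so that process listings reflect the new namespace
    $ sudo unshare --fork --pid --mount-proc sh -c 'ps aux'
    # only the new process tree is visible: PID 1 is the sh we just
    # started, not the host's init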

Compatibility

So what does "compatible enough" mean? It depends on what requests the program makes of the kernel (system calls) and what features it expects the kernel to support. Some programs make requests that will break things; others don't. For example, on an Ubuntu 18.04 (kernel 4.19) or similar host:

  • docker run centos:7 bash works fine.
  • docker run centos:6 bash fails with exit code 139, meaning it terminated with a segmentation violation (SIGSEGV) signal; this is because the 4.19 kernel doesn't support something that this build of bash tried to do (reportedly the legacy vsyscall page that CentOS 6's old glibc relies on, which newer kernels can be built or booted without). The exit-code arithmetic is shown below.
  • docker run centos:6 ls works fine, because it's not making a request the kernel can't handle, as bash was.

If you try docker run centos:6 bash on an older kernel, say 4.9 or earlier, you'll find that it works fine. (At least as far as I tested it.)
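As for the 139 above: shells report a fatal signal as 128 plus the signal number, and SIGSEGV is signal 11. You can check it like this (whether it actually fails depends on your host kernel):

    $ docker run centos:6 bash ; echo "exit status: $?"
    exit status: 139    # 128 + 11 (SIGSEGV), on a kernel where this fails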

Solution 2:

How can Docker on a Debian host run, say, OpenSUSE in a container?

Because the kernel is the same one, and it supports the Docker engine running all of those container images: the host kernel must be version 3.10 or newer, and its list of system calls is fairly stable.
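You can check which kernel Docker is actually using via its Go-template output (the version string shown is just an example):

    $ docker info --format '{{.KernelVersion}}'
    4.19.0-21-amd64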

See "Architecting Containers: Why Understanding User Space vs. Kernel Space Matters":

  1. Applications contain business logic, but rely on system calls.
  2. Once an application is compiled, the set of system calls that an application uses (i.e. relies upon) is embedded in the binary (in higher level languages, this is the interpreter or JVM).
  3. Containers don’t abstract the need for the user space and kernel space to share a common set of system calls.
  4. In a containerized world, this user space is bundled up and shipped around to different hosts, ranging from laptops to production servers.
  5. Over the coming years, this will create challenges.

(Diagram from the article, "user space vs. kernel space, simple container": https://rhelblog.files.wordpress.com/2015/07/user-space-vs-kernel-space-simple-container.png?w=584&h=231)

From time to time new system calls are added, and old system calls are deprecated; this should be considered when thinking about the lifecycle of your container infrastructure and the applications that will run within it.
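If you're curious which system calls a given program actually makes on your machine, one rough way to look is with strace (assuming it is installed; the exact set varies with the binary, the libc, and the kernel):

    # summarize the system calls made by one run of ls; the summary table
    # (printed to stderr) lists each syscall and how often it was invoked
    $ strace -c ls /tmp > /dev/null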

See also "Why kernel version doesn't match Ubuntu version in a Docker container?":

There's no kernel inside a container. Even if you install a kernel, it won't be loaded when the container starts. The very purpose of a container is to isolate processes without the need to run a new kernel.