Why is a 21.10-built binary not compatible with a 21.04 install?

Solution 1:

If libc.so.6 from 21.04 is not compatible with libc.so.6 from 21.10, then why isn't the libc on 21.10 called libc.so.7 instead?

libc.so is as core a library as they come; nearly everything depends on it. One of glibc's goals is to provide backwards compatibility: a program that could run with an older libc.so.6 should (usually) also work fine with a newer release. However, if you bumped the soname to libc.so.7 just because you added some new function, then all of those previously-built programs would need a rebuild for no good reason. There hasn't yet been a break in glibc's ABI major enough to warrant this.
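You can see both halves of this in practice: the soname has stayed libc.so.6, and the library still carries the symbol-version tags of every older release so old binaries keep resolving. A hedged sketch (the library path below is the usual one on x86_64 Ubuntu; adjust for your architecture):

```shell
# Confirm the soname is still libc.so.6
objdump -p /lib/x86_64-linux-gnu/libc.so.6 | grep SONAME

# List the symbol-version tags the library provides - old tags
# remain present, which is what keeps old binaries working
objdump -T /lib/x86_64-linux-gnu/libc.so.6 \
  | grep -o 'GLIBC_2\.[0-9.]*' | sort -u -V
```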

I don't understand why a binary built on 21.10 is not compatible with a 21.04 system.

Nobody guarantees forwards compatibility (which is what expecting 21.04 to run something built on 21.10 amounts to) - why would you expect it to work if you take no precautions to ensure it?

Solution 2:

According to packages.ubuntu.com, 21.04 uses glibc 2.33, whereas 21.10 uses glibc 2.34; a binary linked against 2.34 may require symbol versions that 2.33 does not provide.

However, you should be able to build binaries for Ubuntu 21.04 from the source code.

Unless the program is written in an interpreted language, you usually need to build binary packages separately for each Ubuntu release. Launchpad can automate that for you.

why isn't the libc on 21.10 called libc.so.7 instead?

That is a decision only the developers of glibc can make.

Solution 3:

The term to google for is "glibc symbol versioning".

As this introduction explains, glibc contains multiple versions of each symbol that has changed over time, so libc.so.6 carries the symbols of every glibc release from 2.0 up to whatever version it reports.
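You can observe this for a single symbol. `memcpy` is a classic example on x86_64: its behaviour was tightened in glibc 2.14, so the library typically exports both an old and a new version of it (library path assumed for x86_64 Ubuntu):

```shell
# The same function name appears under more than one version tag;
# on x86_64 this typically shows memcpy@@GLIBC_2.14 (the default)
# alongside the compatibility entry memcpy@GLIBC_2.2.5
objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep ' memcpy$'
```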

When you link a new library or binary against it, you're using the .h files and exported symbols for the newest versions of the symbols.

As for accessing the older symbols, there's a question over on Stack Overflow named How can I link to a specific glibc version?, but because all your other dependencies are likely linking against the newest symbols too, it's much easier to use Docker or a chroot to target older system versions - otherwise you'll probably wind up building one from scratch anyway.
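A hedged sketch of the container route: build inside the older release so the toolchain links against that release's glibc, then check what the binary ended up requiring (the source and output file names are illustrative):

```shell
# Build inside a 21.04 container; the binary then requires at most
# glibc 2.33 and will also run on a 21.04 host
docker run --rm -v "$PWD":/src -w /src ubuntu:21.04 \
  sh -c 'apt-get update && apt-get install -y gcc && gcc -O2 -o myprog myprog.c'

# Verify which glibc symbol versions the binary actually needs
objdump -T myprog | grep -o 'GLIBC_2\.[0-9.]*' | sort -u -V
```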

The Python devs actually maintain Docker containers named manylinux... specifically for establishing a reliable baseline for building wheels (redistributable binary packages) for Python packages with compiled components.
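A hedged sketch of that workflow (the image tag and Python interpreter path are illustrative choices; the manylinux images live under quay.io/pypa and ship auditwheel preinstalled):

```shell
# Build a wheel against the manylinux2014 baseline, then let
# auditwheel bundle external libraries and set the platform tag
docker run --rm -v "$PWD":/io quay.io/pypa/manylinux2014_x86_64 sh -c '
  /opt/python/cp310-cp310/bin/pip wheel /io -w /io/dist &&
  auditwheel repair /io/dist/*.whl -w /io/wheelhouse
'
```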

I believe the Windows approach is closer to defining a set of clearly specified profiles and urging all authors of precompiled libraries to offer builds targeting the older ones. (With the caveat that you have to assume memory must be freed by the same compilation unit that malloc'd it, because PE doesn't have global symbols and different libraries may depend on different versions of the allocator, each with its own static state and semantic differences.)