
It's really just glibc




It's really just not. GTK is on its fourth major version. Wayland broke backwards compatibility with tons of apps.

Multiple versions of GTK or Qt can coexist on the same system. GTK2 is still packaged on most distros; for example, I think GIMP only switched to GTK3 last year or so.

GTK's update schedule is very slow, and you can run multiple major versions of GTK on the same computer, so that's not the right argument. When people say GTK's backwards compatibility is bad, they are referring in particular to its breaking changes between minor versions. It was common for themes and apps to break (or work differently) between minor versions of GTK+ 3, as deprecations were sometimes accompanied by outright breakage of the deprecated code. (Anyway, before Wayland support became important, people stuck to GTK+ 2, which was simple, stable, and still supported at the time; and everyone had it installed on their computer alongside GTK+ 3.)

Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it has only happened twice since 2000.


The difference is that you can statically link GTK+, and it'll work. You can't statically link glibc if you want to be able to resolve hostnames or users, because of NSS modules.
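For example, statically linking a toy program that calls gethostbyname() still "works", but the toolchain warns you (wording roughly, from memory) that the NSS modules loaded at runtime must still come from the glibc version you linked against:

  $ gcc -static lookup.c -o lookup    # file name illustrative
  warning: Using 'gethostbyname' in statically linked applications
  requires at runtime the shared libraries from the glibc version
  used for linking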

Static linking itself doesn't prevent modules. There's https://github.com/pikhq/musl-nscd, for example.

Not inherently, but static linking to glibc will not get you there without substantial additional effort, and static linking to a non-glibc C library will by default get you an absence of NSS.

Can't we just freeze glibc, at least from an API version perspective?

The problem is not the APIs, it's symbol versions. You will routinely get loader errors when running software compiled against a newer glibc than what a system provides, even if the caller does not use any "new" APIs.
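The error itself looks something like this, even when the program only calls decades-old functions:

  ./app: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34'
  not found (required by ./app)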

glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older macOS from a newer toolchain.
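The closest glibc equivalent today is pinning individual symbols by hand with .symver. A minimal sketch (GLIBC_2.2.5 is just an illustrative choice for x86-64; the available compat versions differ per symbol and architecture):

  /* ask for the old memcpy rather than the newest version the
     build machine's glibc would otherwise bind to */
  __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

  #include <string.h>

  int main(void) {
      char dst[4];
      memcpy(dst, "abc", 4);
      return 0;
  }

Doing that for every versioned symbol in a real program is exactly the tedium a single deployment-target flag would remove.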


Yes, so that's why freezing the glibc symbol versions would help. If everybody uses the same version, you cannot get conflicts (at least after it has rippled through and everybody is on the same version). The downside is that we can't add anything new to glibc, but I'd say given all the trouble it produces, that's worth accepting. We can still add bugfixes and security fixes to glibc, we just don't change the APIs of the symbols.

It should not be necessary to freeze it. glibc is already extremely backwards compatible. The problem is people distributing programs that request the newest symbol versions even though they do not really require them, which then fails on systems with an older glibc. At least this is my understanding.

The actual practical problem is not glibc but the constant GUI / desktop API changes.


Making an executable “request” older symbol versions is incredibly painful in practice. Basically every substantial piece of binary software either compiles against an ancient Debian sysroot (that has to have workarounds for the ancient part) or somehow uses a separate glibc copy from the base system (Flatpak, etc.). The first greatly complicates building the software, the second is recreating Windows’ infamous DLL hell.

Both are way more annoying than anything the platforms without symbol versioning suffer from because of its lack. I’ve never encountered anyone who has packaged binaries for both Linux and Windows (or macOS, or the BSDs) that missed anything about Linux userspace ABIs when working with another platform.


Is it painful? Why? You need a build environment that has the old libraries. It does not have to be ancient, just exactly what you need.

It has to be as ancient as the oldest glibc you want to support, usually a Red Hat release with a very old version and manual security backports. These can have nearly decade-old glibc versions, especially if you care about extended support contracts.

You generally have difficulty actually running contemporary build tools on such a thing, so the workaround is to use --sysroot against what is basically a chroot of the old distro, as if cross-compiling. But there are still workarounds needed if the version is old enough. Chrome has a shorter support window than some Linux binaries, but you can see the gymnastics they do to create their sysroot in some Python scripts in the chromium repo.
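In its simplest form it looks something like this (path illustrative), assuming you've unpacked the old distro's root filesystem somewhere:

  # resolve headers and libraries from the old distro's tree
  # instead of the build machine's
  gcc --sysroot=/opt/oldstable-root -o app app.c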

On Windows, you install the latest SDK and pass a target version flag when setting up the compiler environment. That’s it. macOS is similar.


In principle you can patch your binary to accept the old local version, though I don't remember ever getting it to work right. Anyway, for the brave or foolhardy, here's the gist:

  # point the binary at the system's dynamic loader
  patchelf --set-interpreter /lib/ld-linux-x86-64.so.2 "$APP"
  # have the binary search /lib for its shared libraries
  patchelf --set-rpath /lib "$APP"

> [...] brave or foolhardy, [...]

Heed the above warning, as down this rpath madness surely lies!

Exhibit A: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Exhibit B: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Exhibit C: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

Oh, sure, rpath/runpath shenanigans will work in some situations but then you'll be tempted to make such shenanigans work in all situations and then the madness will get you...

To save everyone a click here are the first two bullet points from Exhibit A:

* If an executable has `RPATH` (a.k.a. `DT_RPATH`) set but a shared library that is a (direct or indirect(?)) dependency of that executable has `RUNPATH` (a.k.a. `DT_RUNPATH`) set then the executable's `RPATH` is ignored!

* This means a shared library dependency can "force" loading of an incompatible [(for the executable)] dependency version in certain situations. [...]

Further nuances regarding LD_LIBRARY_PATH can be found in Exhibit B, but I can feel the madness clawing at me again, so I will stop here. :)
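For anyone who wants to check which flavor a given binary or library carries before wading in, the dynamic section shows it:

  # DT_RPATH prints as RPATH, DT_RUNPATH as RUNPATH
  readelf -d "$APP" | grep -E 'RPATH|RUNPATH'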


Yes, you can do this. Thanks for mentioning it; I was interested and checked how you would go about it.

1. Delete the shared symbol versioning as per https://stackoverflow.com/a/73388939 (patchelf --clear-symbol-version exp mybinary)

2. Replace libc.so with a fake library that defines the right version symbol, using a version script, e.g. version.map:

  GLIBC_2.29 { global: *; };

With an empty fake_libc.c:

  gcc -shared -fPIC -Wl,--version-script=version.map,-soname,libc.so.6 \
      -o libc.so.6 fake_libc.c

3. Hope that you can still point the symbols back to the real libc (either by writing a giant pile of dlsym C code, or some other way; I'm unclear on this part).
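For what it's worth, a rough sketch of one such dlsym forwarder, assuming the real libc is still reachable at a known absolute path (untested; a real shim would need one of these for every symbol the binary imports, which is the "giant pile" part):

  /* add to fake_libc.c: forward one call through to the real libc */
  #include <netdb.h>
  #include <dlfcn.h>

  static struct hostent *(*real_gethostbyname)(const char *);

  struct hostent *gethostbyname(const char *name) {
      if (!real_gethostbyname) {
          /* the path is an assumption; it varies by distro and arch */
          void *h = dlopen("/lib/x86_64-linux-gnu/libc.so.6", RTLD_NOW);
          if (h)
              real_gethostbyname = (struct hostent *(*)(const char *))
                  dlsym(h, "gethostbyname");
      }
      return real_gethostbyname ? real_gethostbyname(name) : NULL;
  }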

Ideally glibc would stop checking the version if it's not actually marked as needed by any symbol, not sure why it doesn't (technically it's the same thing normally, so performance?).


Ah, you can use https://github.com/NixOS/patchelf/pull/564

So you can do e.g. `patchelf --remove-needed-version libm.so.6 GLIBC_2.29 ./mybinary` instead of replacing glibc wholesale (steps 2 and 3), and assuming everything the executable uses from glibc is ABI compatible, this will just work (it worked for a small binary for me, YMMV).
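You can verify what the binary actually declares before and after with readelf:

  # list the versioned symbol requirements recorded in the binary
  readelf -V ./mybinary | grep GLIBC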


That's exactly what apgcc from Autopackage provided (20 years ago). https://github.com/DeaDBeeF-Player/apbuild

But compiling in a container is easier and also solves other problems.


We definitely can, because almost every other POSIX libc doesn't have symbol versioning (or MSVC-style multi-version support). It's not like the behavior of "open" changes so radically all the time that you need to know exactly which symbol version it linked against. It's really just an artifact of decisions from decades ago, and the cure is way worse than the disease.

Or just pre-install all the versions on each distro and pick the right one at load-time



