Yeah, as soon as you don't need children to help with your work, they don't make much sense in a capitalist, individualistic society. That women still choose to have them anyway is, honestly, something I see as a triumph of the human spirit.
Who knows? Maybe they are. I’m not from Chad myself (and sounds like you aren’t either), so we’re really not in a position to speculate on that. I do know that it’s quite common for one culture to have values or think in ways that are unintuitive to another culture.
Yeah, I grew up poor in the 3rd world (not quite Chad level though) and even the upper-class culture of my own country was almost alien to us, and vice versa... Imagine the 1st world.
Pretty sure the poor women in Chad with no access to healthcare and quality nutrition have quite a bit to lose, and they don't have a choice not to risk it.
Who do you think is their perceived neighborhood in time and space?
(edit) And moreover, they still need their children to help with their work... So honestly, any analysis that doesn't take this huge confounding variable into account is just silly
That's not how it works. Two bodies are in thermal equilibrium if there's no net heat transfer between them; the zeroth law of thermodynamics is what lets you define a temperature from that. If you're colder than 2.73 K in deep space, you will absorb heat from the Cosmic Microwave Background. If you're hotter, you will radiate heat away. So deep space does have a temperature.
Does this mean that if Earth stays a fixed distance from the sun then its equilibrium temperature is fixed? I remember people saying things like the albedo of the ice caps affects the Earth's temperature.
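For what it's worth, the usual back-of-the-envelope radiative balance (a sketch: treat the Earth as a gray body with Bond albedo A, ignore the greenhouse effect) is exactly where the albedo shows up:

    \frac{S}{4}\,(1 - A) = \sigma T_{\mathrm{eq}}^{4}
    \quad\Longrightarrow\quad
    T_{\mathrm{eq}} = \left( \frac{S\,(1 - A)}{4\sigma} \right)^{1/4}
    \approx 255\ \mathrm{K}
    \qquad (S \approx 1361\ \mathrm{W/m^2},\ A \approx 0.3)

So a fixed distance pins down the solar constant S, but the equilibrium temperature still moves with the albedo (and the actual surface temperature moves further with greenhouse forcing), which is why melting ice caps feed back on temperature.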
I find it amazing how much the mess that building C/C++ code has been for so many decades seems to have influenced the direction that technology, the economy and even politics have taken.
Really, what would the world look like if this problem had been properly solved? Would the centralization and monetization of the Internet have followed the same path? Would Windows be so dominant? Would social media have evolved to the current status? Would we have had a chance to fight against the technofeudalism we're headed for?
What I find amazing is why people continuously claim glibc is the problem here. I have a commercial software binary from 1996 that _still works_ to this day. It even links with X11, and works under Xwayland.
The trick? It's not statically linked, but dynamically linked. And it doesn't link with anything other than glibc, X11 ... and bdb.
At this point I think people just do not know how binary compatibility works at all. Or they refer to a different problem that I am not familiar with.
We (small HPC system) just upgraded our OS from RHEL 7 to RHEL 9. Most user apps are dynamically linked, too.
You wouldn't believe how many old binaries broke. Lots of ABI bumps: libpng, ncurses, heck, even stuff like readline and libtiff all changed just enough for linker errors to occur.
Ironically, all the statically compiled stuff was fine. Some small things that, like you mention, only link to glibc and X11 were fine too. Funnily enough grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.
But yeah, now that I'm writing this out, glibc was never the problem in terms of forwards compatibility. Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...
> Funnily enough grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.
Why "better than expected"? I can run the entire userspace from Debian Etch on a kernel built two days ago... some kernel settings need to be changed (because of the old glibc! but it's not glibc's fault: it's the kernel who broke things), but it works.
> Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...
But this is a different problem, and no one makes promises here (not the kernel, not musl). So all the talk of statically linking with musl to get this type of compatibility is bullshit (at some point, you're going to hit a syscall/instruction/whatever that the newer musl uses and that the older kernel/hardware does not support).
Better than expected because it's mixing userlands. We didn't put the entire /usr/lib of the old system in LD_LIBRARY_PATH, just some stuff like the old libpng, libjpeg and the like. Taking an image of an old compute node still on RHEL 7 and then dumping it in a container naturally worked, but at that point it's only the kernel interface you have to worry about, not different glibc, gtk, qt and that kind of stuff.
I remember this from a heated LKML exchange 13 years ago; look how the tables have turned:
> Are you saying that pulseaudio is entering on some weird loop if the
> returned value is not -EINVAL? That seems a bug at pulseaudio.
Mauro, SHUT THE FUCK UP!
It's a bug alright - in the kernel. How long have you been a
maintainer? And you still haven't learnt the first rule of kernel
maintenance?
If a change results in user programs breaking, it's a bug in the
kernel. We never EVER blame the user programs. How hard can this be to
understand?
To make matters worse, commit f0ed2ce840b3 is clearly total and utter
CRAP even if it didn't break applications. ENOENT is not a valid error
return from an ioctl. Never has been, never will be. ENOENT means "No
such file and directory", and is for path operations. ioctl's are done
on files that have already been opened, there's no way in hell that
ENOENT would ever be valid.
> So, on a first glance, this doesn't sound like a regression,
> but, instead, it looks tha pulseaudio/tumbleweed has some serious
> bugs and/or regressions.
Shut up, Mauro. And I don't _ever_ want to hear that kind of obvious
garbage and idiocy from a kernel maintainer again. Seriously.
I'd wait for Rafael's patch to go through you, but I have another
error report in my mailbox of all KDE media applications being broken
by v3.8-rc1, and I bet it's the same kernel bug. And you've shown
yourself to not be competent in this issue, so I'll apply it directly
and immediately myself.
WE DO NOT BREAK USERSPACE!
Seriously. How hard is this rule to understand? We particularly don't
break user space with TOTAL CRAP. I'm angry, because your whole email
was so _horribly_ wrong, and the patch that broke things was so
obviously crap. The whole patch is incredibly broken shit. It adds an
insane error code (ENOENT), and then because it's so insane, it adds a
few places to fix it up ("ret == -ENOENT ? -EINVAL : ret").
The fact that you then try to make excuses for breaking user space,
and blaming some external program that used to work, is just
shameful. It's not how we work.
Fix your f*cking "compliance tool", because it is obviously broken.
And fix your approach to kernel programming.
The problem of modern libc (newer than ~2004, I have no idea what that 1996 one is doing) isn't that old software stops working. It's that you can't compile software on your up-to-date desktop and have it run on your "security updates only" server. Or your clients' "couple of years out of date" computers.
And that doesn't require using newer functionality.
But this is not "backwards compatibility". No one promises this type of "forward compatibility" that you are asking for . Even win32 only does it exceptionally... maybe today you can still build a win10 binary with a win11 toolchain, but you cannot build a win98 binary with it for sure.
And this has nothing to do with 1996, or 2004 glibc at all.
In fact, glibc makes this otherwise impossible task actually possible: you can force linking against older symbols, but that solves only a fraction of what you're trying to achieve. Statically linking / musl does not solve this either: at some point musl is going to use a newer syscall, or some other newer feature, and you're broken again.
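For the curious, the "force older symbols" trick looks roughly like this. A sketch, not a recipe: memcpy@GLIBC_2.2.5 is the classic x86-64 case (glibc 2.14 introduced a new memcpy version); the exact version strings depend on your architecture and on which symbols your code actually pulls in.

    /* Ask the linker to bind memcpy to the old versioned symbol instead of
       the default on the build machine (memcpy@@GLIBC_2.14 on newer
       x86-64 glibc). */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char dst[16];
        memcpy(dst, "hello", 6);  /* reference resolves to memcpy@GLIBC_2.2.5 */
        return 0;
    }

It works, but you have to repeat it for every versioned symbol you end up referencing, which is exactly why it only covers a fraction of the problem.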
Also, what is so hard about building your software on your "security updates only" server? Or in a chroot of it, at least? As I was saying below, I have a Debian 2006-ish chroot for this purpose...
> maybe today you can still build a win10 binary with a win11 toolchain, but you cannot build a win98 binary with it for sure.
In my experience, that's not quite accurate. I'm working on a GUI program that targets Windows NT 4.0, built using a Win11 toolchain. With a few tweaks here and there, it works flawlessly. Microsoft goes to great lengths to keep system DLLs and the CRT forward- and backward-compatible. It's even possible to get libc++ working: https://building.enlyze.com/posts/targeting-25-years-of-wind...
What does "a Win11 toolchain" mean here? In the article you link, the guy is filling missing functions, rewriting the runtime, and overall doing even more work than what I need to do to build binaries on a Linux system from 2026 that would work on a Linux from the 90s : a simple chroot. Even building gcc is a walk in the park compared to reimplementing OS threading functions...
Windows dlls are forward compatible in that sense. If you use the Linux kernel directly, it is forward compatible in that sense. And, of course, there is no issue at all with statically linked code.
The problem is with the Linux dynamic linking, and the idea that you must not statically link the glibc code. And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.
> Windows dlls are forward compatible in that sense.
If you want to go to that level, ELF is also forward compatible in that sense.
This is completely irrelevant, because what the developer is going to see is that the binaries he builds on XP SP3 no longer work on XP SP2 because of a link error: the _statically linked_ runtime is going to call symbols that are not in XP SP2 DLLs (e.g. the DecodePointer debacle).
> If you use the Linux kernel directly, it is forward compatible in that sense.
Or not, because there will be a note in the ELF headers with the minimum kernel version required, which is going to be set to a recent version even if you do not use any newer feature (unless you play with the toolchain). (PE has a similar field too, leading to the "not a valid win32 executable" messages.)
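Concretely, that note is the NT_GNU_ABI_TAG note in .note.ABI-tag: its descriptor holds an OS id plus the minimum kernel version. A rough sketch of reading it from a 64-bit ELF binary (assumption-heavy: 64-bit little-endian only, minimal error handling; the constants come from elf.h):

    #include <elf.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Walk the PT_NOTE segments of a 64-bit ELF file and print the GNU ABI
       tag: descriptor word 0 is the OS (0 = Linux), words 1-3 are the
       minimum kernel version. */
    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <elf64-binary>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) { fclose(f); return 1; }

        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1 || ph.p_type != PT_NOTE)
                continue;

            unsigned char *buf = malloc(ph.p_filesz);
            fseek(f, (long)ph.p_offset, SEEK_SET);
            if (!buf || fread(buf, 1, ph.p_filesz, f) != ph.p_filesz) {
                free(buf);
                continue;
            }

            /* Each note: an Nhdr, then the name, then the descriptor, with
               name and descriptor padded to 4-byte boundaries. */
            size_t off = 0;
            while (off + sizeof(Elf64_Nhdr) <= ph.p_filesz) {
                Elf64_Nhdr *nh = (Elf64_Nhdr *)(buf + off);
                char *name = (char *)(nh + 1);
                uint32_t *desc = (uint32_t *)(name + ((nh->n_namesz + 3) & ~3u));

                if (nh->n_type == NT_GNU_ABI_TAG && nh->n_namesz == 4 &&
                    memcmp(name, "GNU", 4) == 0 && nh->n_descsz >= 16)
                    printf("OS %u, minimum kernel %u.%u.%u\n",
                           desc[0], desc[1], desc[2], desc[3]);

                off += sizeof(Elf64_Nhdr) + ((nh->n_namesz + 3) & ~3u)
                                          + ((nh->n_descsz + 3) & ~3u);
            }
            free(buf);
        }
        fclose(f);
        return 0;
    }

It's the same information readelf -n prints, and it's what produces the famous "FATAL: kernel too old" error even when no newer kernel feature is actually used.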
> And, of course, there is no issue at all with statically linked code.
I would say statically linked code is precisely the root of all these problems.
In addition to bringing more problems of its own. E.g. games that dynamically link with SDL can be patched to use any other SDL version, including one with bugfixes for X support, audio, etc. Games that statically link with SDL? Sorry...
> And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.
Funnily enough, I think that is exactly the same as the solution I'm proposing for this conundrum: just (dynamically) link with the older glibc! Voila: your binary now works with glibc from 1996 and glibc from 2026.
Frankly, glibc is already the project with the best binary compatibility of the entire Linux desktop, if not the only one with a binary compatibility story at all. The kernel is _not_ better in this regard (e.g. /dev/dsp).
If you use only features available on the older version, for sure, you can compile your software in Win-7 and have it run in Win-2000. Without following any special procedure.
I know, I've done that.
> just (dynamically) link with the older glibc!
Except that the older glibc is unmaintained and very hard to get a hold of and use. If you solve that, yeah, it's the same.
> If you use only features available on the older version, for sure, you can compile your software in Win-7 and have it run in Win-2000. Without following any special procedure.
No, you can't. When you use a 7-era toolchain (e.g. VS 2012), it sets the minimum client version in the PE header to Vista, not XP, much less 2k.
If you use VC++ 6 on Windows 7, then yes, you can; but that's not really that different from me using a Debian Etch chroot to build.
Even within the XP era this happens, since there are VS versions that target XP _SP2_ and produce binaries that are not compatible with XP _SP1_. That's the "DecodePointer" debacle I was mentioning. _Even_ if you do not use any "SP2" feature (as few as they are), the runtime (the statically linked part; not MSVCRT) is going to call DecodePointer, so even the smallest hello world will catastrophically fail on the older win32 version.
Just Google around for hundreds of confused developers.
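For anyone landing here from that googling: the workaround boils down to a local shim that looks the API up at runtime and falls back to the identity mapping when it isn't there. A hypothetical sketch (Compat_DecodePointer is a made-up name; actually getting the statically linked CRT to call it instead of importing DecodePointer from kernel32.dll takes extra link-level plumbing, which the enlyze article linked upthread goes into):

    #include <windows.h>

    /* Resolve DecodePointer at runtime. Systems that predate it never
       encoded pointers in the first place, so returning the pointer
       unchanged is the correct fallback there. */
    typedef PVOID (WINAPI *DecodePointerFn)(PVOID);

    PVOID WINAPI Compat_DecodePointer(PVOID p)
    {
        static DecodePointerFn real;
        static BOOL looked_up;

        if (!looked_up) {
            real = (DecodePointerFn)GetProcAddress(
                GetModuleHandleA("kernel32.dll"), "DecodePointer");
            looked_up = TRUE;
        }
        return real ? real(p) : p;
    }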
> Except that the older glibc is unmaintained and very hard to get a hold of and use.
"unmaintained" is another way of saying "frozen" or "security updates only" I guess. But ... hard to get a hold of ? You are literally running it on your the "security updates only" server that you wanted to target in the first place!
"Without following any special procedure". I know you can install older toolchains and then build using those, but I can do as much on any platform (e.g. by using a chroot). The default on VS2012 is Vista-only binaries.
> The trick? It's not statically linked, but dynamically linked. And it doesn't link with anything other than glibc, X11 ... and bdb.
How would that work, given that glibc has gone through a soname change since then? If it's from 1996, are you sure the secret isn't that it uses a non-GNU libc?
It suggests someone went into the details of how it was linked and was careful about what it was and wasn't linked to, and perhaps even intervened directly in the low-level parts of the linking process.
Or rather, it simply suggests you built for two versions of the major distributions of the time, or even for the two major distributions...
Why is my entire argument so hard to understand? To build for a different glibc you do not have to do _any_ type of arcane magic or whatever you claim. You just build on a different system... or in a chroot... I have been doing that _myself_ for at least 15 years, and I know of other Linux desktop commercial shops that have been doing it for much, much longer. Chroots are _trivial_.
Well, you could just look at things from an interoperability and standards viewpoint.
Lots of tech companies and organizations have created artificial barriers to entry.
For example, most people own a computer (their phone) that they cannot control. It will play media under the control of other organizations.
The whole top-to-bottom infrastructure of DRM was put into place by Hollywood, and then is used by every other program to control/restrict what people do.
I guess we'd have to see the graph with the evolution of paying customers: I don't see the number of potential-but-not-yet-paying clients being that high, certainly not an order of magnitude higher. And everyone already knows OpenAI; they don't get the benefit of additional exposure when they go viral: the only benefit seems to be hyping up investors.
And there's something else about the diminishing returns of going viral... AI kind of breaks the usual assumptions in software: that building it is the hard part and that scaling is basically free. In that sense, AI looks more like regular commodities or physical products, in that you can't just Ctrl-C/Ctrl-V: resources are O(N) in the number of users, not O(log N) like regular software.
Yeah, for instance: even if Trump's bullying works for now, he has made sure that most governments in Latin America, including right-wing ones, will prioritize uncoupling their countries from the US economy. Even if they won't say that quiet part out loud.