For comparison, AmigaOS was built on the assumption of binary compatibility and people still replace and update libraries today, 35 years later.
It's a cultural issue, not a technical one - in the Amiga world breaking ABI compatibility is seen as a bug.
If you need to, you add a new function. If you really can't maintain backwards compatibility, you write a new library rather than break an old one.
As a result, 35-year-old binaries still occasionally get updates because the libraries they use are updated.
And using libraries as an extension point is well established (e.g. datatypes to allow you to load new file formats, or xpk, which lets any application that supports it handle any compression algorithm there's a library for).
Oh man, that brings back memories. It's so sad that things like datatypes or xpk didn't make it into modern OSes (well, there's just a fraction of it; I guess video codecs are the closest thing, but that only targets one area).
I also wanted to point out that this standardization made it possible to "pimp" your AmigaOS and make individual desktops somewhat unique. There were basically libraries that substituted for system libraries and changed how the UI looked or even how it worked. I kind of miss that. Now the only personalization I see is what the terminal prompt looks like :)
It's a side effect of abstraction. Even a language like C makes it extremely hard to figure out the binary interface of the compiled program. There's no way to know for sure what effects any given change will have on the output.
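For a taste of how opaque it is, here's a minimal C sketch (the struct is made up): the second version only "adds a field", the source-level API is identical, yet the size and the offset of everything after the new field change, so binaries built against the old layout read the wrong memory until they're recompiled.

    /* Hypothetical example: same source-level API, two different ABIs. */
    #include <stdio.h>
    #include <stddef.h>

    struct widget_v1 { int id; int flags; };                   /* version 1 */
    struct widget_v2 { int id; long long stamp; int flags; };  /* "just" adds a field */

    int main(void) {
        printf("v1: size=%zu, flags at offset %zu\n",
               sizeof(struct widget_v1), offsetof(struct widget_v1, flags));
        printf("v2: size=%zu, flags at offset %zu\n",
               sizeof(struct widget_v2), offsetof(struct widget_v2, flags));
        return 0;
    }

Nothing in the diff tells you this is a break; you only see it by looking at the layouts.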
The best binary interface I know is the Linux kernel system call interface. It is stable and clearly documented. It's so good I think compilers should add support for the calling convention. I wish every single library was like this.
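For illustration, a tiny C sketch of what that stability buys you: calling the kernel through the raw syscall interface (via glibc's syscall() wrapper here). write(2) has kept its number and argument layout on a given architecture for decades, so this keeps working across kernel and libc upgrades.

    /* Sketch: talking to the stable Linux syscall ABI directly. */
    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello via raw syscall\n";
        /* SYS_write: fd, buffer, length -- same contract for decades */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }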
We have an entire language-on-top-of-a-language in the C++ preprocessor, yet we couldn't figure out a way to tell the compiler what we want the ABI to be?
I think an abstraction is when a tool takes care of something for you, this situation is just neglect.
I have maintained some mini projects which try to have strong API stability.
And even though keeping API stability is much easier than ABI stability, I already ran into gotchas.
And that was simple stuff compared to what some of the more complex libraries do.
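To give a flavor of the kind of gotcha I mean (names made up): turning a function into a "compatible" macro keeps every call site compiling, but arguments with side effects now get evaluated twice, and nobody can take the function's address anymore.

    #include <stdio.h>

    /* v1 shipped util_max() as a real function; v2 "kept the API" as a macro. */
    #define util_max(a, b) ((a) > (b) ? (a) : (b))

    int main(void) {
        int i = 5;
        int r = util_max(i++, 0);       /* as a function: r == 5, i == 6 */
        printf("r=%d i=%d\n", r, i);    /* as a macro:    r == 6, i == 7 */
        return 0;
    }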
So I don't think ABI/FFI stability ever had a good chance (outside of some very well-maintained big system libraries where a lot of work is put into making it work).
I think the failure was not realizing this earlier and not moving instead to a more "message passing" + "error kernel" based approach for libraries where that's possible (which are surprisingly many), using API stability only for the rest (except system libraries).
EDIT: Like using pipes to interconnect libraries, with well-defined (but potentially binary) message passing between them. Being able to reload libraries and reset all their global state, or run multiple versions of them at the same time, etc. But without question this isn't nice for small utility libraries or language frameworks and similar.
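Very rough sketch of what I mean, in C (everything here is hypothetical and unrealistically simplified, no supervision or error handling): the "library" lives in its own process behind a pipe, and callers exchange length-prefixed messages instead of sharing an in-process ABI, so it can crash, be restarted, or be swapped for another version without taking the caller down.

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static void send_msg(int fd, const void *buf, uint32_t len) {
        write(fd, &len, sizeof len);        /* 4-byte length prefix */
        write(fd, buf, len);
    }

    static uint32_t recv_msg(int fd, void *buf, uint32_t cap) {
        uint32_t len = 0;
        if (read(fd, &len, sizeof len) != sizeof len || len > cap) return 0;
        return (uint32_t)read(fd, buf, len);
    }

    static void library_host(int in_fd, int out_fd) {
        char req[256];
        uint32_t n = recv_msg(in_fd, req, sizeof req);
        /* The "ABI" is just the message format: uppercase the payload. */
        for (uint32_t i = 0; i < n; i++)
            if (req[i] >= 'a' && req[i] <= 'z') req[i] -= 32;
        send_msg(out_fd, req, n);
    }

    int main(void) {
        int to_lib[2], from_lib[2];
        pipe(to_lib); pipe(from_lib);

        if (fork() == 0) {                   /* child: the out-of-process "library" */
            close(to_lib[1]); close(from_lib[0]);
            library_host(to_lib[0], from_lib[1]);
            _exit(0);
        }
        close(to_lib[0]); close(from_lib[1]);

        const char req[] = "hello library";
        send_msg(to_lib[1], req, sizeof req - 1);

        char resp[256];
        uint32_t n = recv_msg(from_lib[0], resp, sizeof resp);
        printf("reply: %.*s\n", (int)n, resp);

        wait(NULL);
        return 0;
    }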
> I think the failure was not realizing this earlier and not moving instead to a more "message passing" + "error kernel" based approach for libraries where that's possible (which are surprisingly many), using API stability only for the rest (except system libraries).
Sounds pretty sweet as far as composability is concerned, but there is the overhead caused by serialization and the loss of passing parameters in registers.
I think the mess we created in ABI space is one of the failures of our industry.