Dynamic libraries as currently implemented are a dumpster fire, and I'd really prefer everything to be statically linked. But ideally, I'd like to see exploration of a hybrid solution, where library code is tagged inside a binary, so that if the OS detects that multiple applications are using the same version of a library, it isn't duplicated in RAM. Such a design would also allow libraries to be updated if absolutely necessary, either by the runtime or by some kind of package manager.
OSes already typically look for duplicated pages as opportunities to dedupe. It doesn’t need to be a special case for code pages, because the same scan also finds runtime heap duplicates that appear to be read-only (e.g. your JIT-compiled JS code pages shared between sites).
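On Linux the heap/JIT side of this is opt-in rather than automatic: a process marks a region with madvise(MADV_MERGEABLE) and the KSM scanner merges byte-identical pages across processes. A minimal sketch of that opt-in, assuming a kernel built with CONFIG_KSM and the scanner enabled via /sys/kernel/mm/ksm/run:

    /* Sketch: opt an anonymous mapping into kernel same-page merging (KSM)
     * so byte-identical pages can be deduplicated across processes.
     * Assumes Linux with CONFIG_KSM and /sys/kernel/mm/ksm/run set to 1. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 4096;  /* 64 pages */

        /* Anonymous, writable mapping filled with identical content. */
        unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        memset(buf, 0xAB, len);

        /* Ask KSM to consider these pages for merging; the kernel only
         * merges pages whose contents are byte-identical. */
        if (madvise(buf, len, MADV_MERGEABLE) != 0) {
            perror("madvise(MADV_MERGEABLE)");
            return 1;
        }

        /* Merged pages show up in /sys/kernel/mm/ksm/pages_sharing. */
        getchar();  /* keep the mapping alive while you inspect the counters */
        return 0;
    }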
One challenge is that two random binaries are unlikely to end up with identical code pages for a given source library (even if they pin the exact same source), because linker and compiler options change the generated code (e.g. dead-code stripping, different optimization settings, LTO, PGO, etc.).
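A rough way to see how little overlaps in practice is to hash the 4 KiB pages of two binaries and count the byte-identical ones. This is a hypothetical sketch, not an existing tool, and comparing file pages is only an approximation of what the kernel could actually share at runtime:

    /* Sketch: count how many 4 KiB pages of one file are byte-identical
     * to some page of another file. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define PAGE 4096

    /* FNV-1a hash of one page. */
    static uint64_t hash_page(const unsigned char *p)
    {
        uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < PAGE; i++) { h ^= p[i]; h *= 1099511628211ULL; }
        return h;
    }

    /* Map a whole file read-only and return its size via *size. */
    static unsigned char *map_file(const char *path, size_t *size)
    {
        int fd = open(path, O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) != 0) { perror(path); exit(1); }
        *size = (size_t)st.st_size;
        unsigned char *p = mmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (p == MAP_FAILED) { perror("mmap"); exit(1); }
        return p;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: %s a.bin b.bin\n", argv[0]); return 2; }

        size_t asize, bsize;
        unsigned char *a = map_file(argv[1], &asize);
        unsigned char *b = map_file(argv[2], &bsize);
        size_t apages = asize / PAGE, bpages = bsize / PAGE, shared = 0;

        /* Pre-hash every page of the second file once. */
        uint64_t *hb = malloc(bpages * sizeof *hb);
        for (size_t j = 0; j < bpages; j++) hb[j] = hash_page(b + j * PAGE);

        /* For each page of the first file, look for a byte-identical page in
         * the second; confirm with memcmp so hash collisions can't inflate
         * the count. */
        for (size_t i = 0; i < apages; i++) {
            uint64_t ha = hash_page(a + i * PAGE);
            for (size_t j = 0; j < bpages; j++) {
                if (ha == hb[j] && memcmp(a + i * PAGE, b + j * PAGE, PAGE) == 0) {
                    shared++;
                    break;
                }
            }
        }
        printf("%zu of %zu pages of %s are byte-identical to some page of %s\n",
               shared, apages, argv[1], argv[2]);
        return 0;
    }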
The benefit of sharing libraries is generally limited unless the library is one that nearly every binary ends up linking, and that has become less likely as the software ecosystem has grown more varied and complex.
I believe NixOS-like "build-time binding" is the answer, especially with Rust's "if it compiles, it works". Software still shares code in the form of libraries, but any set of installed software built against a concrete version of a library it depends on will use that concrete version forever (until an update replaces it with new builds linked against a different concrete version of the lib).
The system you’re proposing wouldn’t work, because without additional effort in the compiler and linker (which AFAIK doesn’t exist) you won’t get perfectly identical pages for the same static library linked into different executables. And once you can update those libraries independently, you have all the drawbacks of dynamic libraries again.
Outside of embedded, this kind of reuse yields very marginal memory savings for the overall system to begin with. The key benefit of dynamic libraries on a system with gigabytes of RAM is that you can update a common dependency (e.g. OpenSSL) without redownloading every binary on your system.
I wish the standard way of using shared libraries were to ship the .so files a program wants to dynamically link against alongside the program binary (using RUNPATH), instead of expecting them to exist globally (yes, I mean all shared libraries, even glibc; first and foremost glibc, actually).
This way we'd have no portability issues and get the same benefit as with static linking, except it works with glibc out of the box instead of requiring musl, and we could benefit from filesystem-level deduplication (with btrfs) to save disk space and memory.
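To check that the bundled copy actually wins over a system-wide one at runtime, you can walk the loaded objects with glibc's dl_iterate_phdr. A small sketch; the $ORIGIN link line in the comment is a hypothetical example, adjust it for your toolchain:

    /* Sketch: print every ELF object this process actually loaded, so you
     * can verify that the .so shipped next to the binary (found via an
     * $ORIGIN-relative RUNPATH) won over any system-wide copy.
     *
     * Hypothetical link line (assumption, adjust for your toolchain):
     *   gcc main.c -o app -Wl,-rpath,'$ORIGIN/lib' -Wl,--enable-new-dtags -lfoo
     * then ship ./app together with ./lib/libfoo.so.
     */
    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    static int print_module(struct dl_phdr_info *info, size_t size, void *data)
    {
        (void)size; (void)data;
        printf("loaded: %s\n",
               info->dlpi_name && info->dlpi_name[0] ? info->dlpi_name
                                                     : "(main executable)");
        return 0;  /* keep iterating */
    }

    int main(void)
    {
        /* glibc walks the list of loaded objects and calls us for each one. */
        dl_iterate_phdr(print_module, NULL);
        return 0;
    }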