I agree 100% with Linus. I can run a WinXP exe on Win10 or 11 almost every time, but on Linux I often have to chase down versions that still work with the latest Mint or Ubuntu distros. Stuff that worked before just breaks, especially if the app isn’t in the repo.
Yes, and even the package format situation is a hell of its own. Even on Ubuntu you have multiple package formats, and sometimes there are even multiple app stores (a GNOME one and an Ubuntu-specific one, if I remember correctly).
Ultimately this boils down to a lack of clear technical and community leadership from Canonical. They're too unwilling to say "no" to vanity/pet projects that end up costing all of us, because they turn the resulting distribution into a moving target that's too difficult to support in the enterprise - at least not with the skill set of the average desktop support hire these days.
I want to go to the alternate timeline where they just stuck with a set of technologies... ideally KDE... and just matured them up until they were the idealized version of their original plan instead of always throwing things away to rewrite them for ideological or technical purity of design.
You can also run a WinXP exe on any Linux distribution almost every time. That's the point of the project and of Linus' quip: the only stable ABI around, on both MS Windows and Linux, is Win32. (BTW, I do not agree with this.)
I think it's not unlikely that we reach a point in a couple of decades where we are all developing Win32 apps while most people are running some form of Linux.
We already have an entire platform like that (the Steam Deck), and it's the best Linux development experience around, in my opinion.
So every Linux distribution should compile and distribute packages for every single piece of open source software in existence, both the very newest stuff that was only released last week, and also everything from 30+ years ago, no matter how obscure.
Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.
And have an ever-decreasing market share in the desktop, hypervisor, and server space. API/ABI stability is probably the only thing stemming the customer leakage at all. It's not the be-all and end-all.
Those users will either check the source code and compile it themselves, with all the proper options to match their system, or rely on a software distribution to do it for them.
People who are complaining would prefer a world of isolated apps downloaded from signed stores, but Linux was born at an optimistic time when the goal was software that cooperates and forms a system, and whose distribution does not depend on a central trusted platform.
I do not believe that there is any real technical issue discussed here, just drastically different goals.
Even if we ship as source, even if the user has the skills to build it, even if the makefile supports every version of the kernel, plus all the other material variation, plus who knows how many dependencies, what exactly am I supposed to do when a user reports:
"I followed your instructions and it doesn't run".
Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.
Distribution as source fails because there are too many unknown and interdependent parts.
Distribution as binary containers (Docker et al.) is popular because it gives the app a fighting chance, while at the same time being a really ugly hack.
Yep. But docker doesn’t help you with desktop apps. And everything becomes so big!
I think Rob Pike has the right idea with Go: just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.
People don’t seem to mind downloading a 30 MB executable, so long as it actually works.
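For what it's worth, the same trick works in plain C too. A minimal sketch (the file name and the musl option are just illustrative assumptions):

    /* hello.c - a fully static build, so nothing on the user's system
     * can break it at runtime.
     *
     * glibc:  gcc -static -o hello hello.c
     *         (note: with glibc, programs that do name lookups via NSS
     *          still want the matching shared libc at runtime)
     * musl:   musl-gcc -static -o hello hello.c   (assumes musl-tools)
     *
     * Check:  file ./hello reports a statically linked executable.
     */
    #include <stdio.h>

    int main(void) {
        puts("no shared libraries required at runtime");
        return 0;
    }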
What do you mean docker doesn’t help you with desktop apps? I run complicated desktop apps like Firefox inside containers all the time. There are also apps like Citrix Workspace that need such specific dependency versions that I’ve given up on running them outside containers.
If you don’t want to configure this manually, use distrobox, which is a nice shell script wrapper that helps you set things up so graphical desktop apps just work.
And being 100 things is completely unavoidable when freedom is involved. You can force everyone to use the same 1 thing if you make it proprietary. If people have the freedom to customize it, of course another 99 people will come along and do that. We should probably just accept this as the price of freedom. It's not as bad as it seems, because you also have the freedom to make your system present itself like some other system in order to run programs made for that system.
Your tone makes it sound like this is a bad thing. But from a user’s perspective, I do want a distro to package as much software as possible. And it has nothing to do with user freedom. It’s all about being entitled as a user to have the world’s software conveniently packaged.
What if you want to use a newer or older version of just one package without having to update or downgrade the entire goddamn universe? What if you need to use proprietary software?
I've had so much trouble with package managers that I'm not even sure they are a good idea to begin with.
That is the point of Flatpak or AppImage, but even before those existed you could do it by shipping the libraries with your software and using LD_LIBRARY_PATH to point your software at them.
That is what most well-packaged proprietary software used to do when installing into /opt.
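A rough sketch of that /opt pattern, with all paths and names invented for illustration: a tiny launcher that points the dynamic linker at the bundled libraries and then execs the real binary. Most vendors did the same thing with a shell script, or by linking with an rpath of $ORIGIN/../lib instead.

    /* launcher.c - hypothetical wrapper installed as /opt/myapp/bin/myapp.
     * Prepends the bundled library directory to LD_LIBRARY_PATH, then
     * execs the real binary, which inherits the environment, so its
     * dependencies resolve against the copies shipped in /opt/myapp/lib
     * before anything from the distro. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        (void)argc;
        const char *bundled = "/opt/myapp/lib";        /* hypothetical path */
        const char *old = getenv("LD_LIBRARY_PATH");

        char path[4096];
        if (old && *old)
            snprintf(path, sizeof path, "%s:%s", bundled, old);
        else
            snprintf(path, sizeof path, "%s", bundled);
        setenv("LD_LIBRARY_PATH", path, 1);

        execv("/opt/myapp/bin/myapp.real", argv);      /* hypothetical path */
        perror("execv");                               /* only reached on failure */
        return 1;
    }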
Software installed from your package manager is almost certainly provided as a binary already. You could package a .exe file and that should work everywhere WINE is installed.
That's not my point. My point is that if executable A depends on library B, and library B does not provide any stable ABI, then the package manager will take care of updating A whenever it updates B. Windows has a fanatical commitment to ABI stability, so the situation above does not even occur. As a user, all the hard work of dealing with ABI breakages on Linux is done by the people managing the software repos, not by the user or by the developer. I'm personally very appreciative of this fact.
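To make that concrete, the repos do the bookkeeping through sonames (library and package names below are made up): the soname baked into the shared object shows up as a NEEDED entry in every dependent binary, distro tooling turns that into package dependencies, and a soname bump triggers rebuilds of everything linked against it.

    /* foo.c - a shared library whose ABI the distro tracks via its soname.
     * Build:  gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.c
     * Any app linked against it records "NEEDED libfoo.so.1"
     * (visible with: objdump -p app | grep NEEDED).
     * When upstream breaks the ABI, the soname becomes libfoo.so.2 and the
     * distro rebuilds every dependent package so its NEEDED entry matches. */
    int foo_answer(void) { return 42; }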
Sure, it's better than nothing, but it's certainly not ideal. How much time and energy is being wasted by libraries like that? Wouldn't it be better if library B had a stable ABI or was versioned? Is there any reason it needs to work like this?
And you can also argue how much time and energy is being wasted by committing to a stable ABI such that the library cannot meaningfully improve. Remember that even struct sizes are part of the ABI, so you either cannot add new fields to a struct, or you expose only pointers and have to resort to dynamic allocation rather than stack allocation most of the time.
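A rough C sketch of that tradeoff (all names invented): expose the struct and callers get stack allocation and direct field access, but the layout becomes ABI; hide it behind an opaque pointer and the ABI stays stable, at the cost of a heap allocation and a function call per access.

    #include <stdlib.h>

    /* Option 1: layout exposed in the public header. Fast, but adding a
     * field later changes sizeof(struct point) and breaks every binary
     * compiled against the old header. */
    struct point { int x, y; };

    /* Option 2: opaque handle. The layout stays private to the library,
     * so fields can be added freely without breaking the ABI. */
    typedef struct point_impl point_t;       /* incomplete type in the header */
    struct point_impl { int x, y; };         /* private: free to grow later   */

    point_t *point_create(void)             { return calloc(1, sizeof(point_t)); }
    void     point_set_x(point_t *p, int x) { p->x = x; }
    int      point_get_x(const point_t *p)  { return p->x; }
    void     point_destroy(point_t *p)      { free(p); }

    int main(void) {
        struct point a = { 1, 2 };           /* option 1: stack, ABI-fragile */
        point_t *b = point_create();         /* option 2: heap, stable ABI   */
        if (!b) return 1;
        point_set_x(b, a.x);
        int x = point_get_x(b);
        point_destroy(b);
        return x == 1 ? 0 : 1;
    }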
Opinions may differ, but personally I think a stable ABI wastes more time and energy than an unstable one because it forces code to be inefficient. Code is run more often than it is compiled. It’s better to let code run faster than to avoid extra compilations.
Actually, the solution to this on Windows is for programs to package all their dependencies except for Windows itself. When you install a game, it includes a copy of every library the game uses, except for Windows.
Even open-source software has to deal with the moving target that is ABI and API compatibility on Linux. OpenSSL’s API versioning is a nightmare, for example, and it’s the most critical piece of software to dynamically link (and almost everything needs a crypto/SSL library).
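For anyone who hasn’t had the pleasure, this is the kind of guard OpenSSL’s churn forces on downstream code. The 1.1.0 rename of EVP_MD_CTX_create to EVP_MD_CTX_new is a real example; the wrapper macro names here are my own invention.

    /* digest_compat.c - coping with the EVP_MD_CTX rename in OpenSSL 1.1.0.
     * 1.0.x:  EVP_MD_CTX_create() / EVP_MD_CTX_destroy()
     * 1.1.0+: EVP_MD_CTX_new()    / EVP_MD_CTX_free()
     * Build:  gcc digest_compat.c -lcrypto */
    #include <openssl/evp.h>
    #include <openssl/opensslv.h>

    #if OPENSSL_VERSION_NUMBER >= 0x10100000L
    # define MD_CTX_NEW()    EVP_MD_CTX_new()
    # define MD_CTX_FREE(c)  EVP_MD_CTX_free(c)
    #else
    # define MD_CTX_NEW()    EVP_MD_CTX_create()
    # define MD_CTX_FREE(c)  EVP_MD_CTX_destroy(c)
    #endif

    int main(void) {
        EVP_MD_CTX *ctx = MD_CTX_NEW();
        int ok = (ctx != NULL);
        if (ctx)
            MD_CTX_FREE(ctx);
        return ok ? 0 : 1;
    }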
Stable ABIs for certain critical pieces of independently updatable software (libc, OpenSSL, etc.) are not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc because it doesn’t version the symbol for fopen like glibc does. It just requires commitment and forethought.
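And the flip side on Linux: because glibc versions its symbols, a binary built on a new distro can fail to start on an older one with a "GLIBC_2.xx not found" error, and the common workaround is to pin references to an older symbol version at build time. A heavily hedged sketch - memcpy/GLIBC_2.2.5 is the classic x86_64 case (memcpy got a new symbol version in glibc 2.14), the version tag differs per architecture, and this relies on the GNU toolchain:

    /* pin_memcpy.c - build on a new glibc, run on an older one.
     * The .symver directive binds our memcpy references to the old compat
     * symbol instead of memcpy@GLIBC_2.14. GLIBC_2.2.5 is the x86_64
     * baseline version - an assumption that does not hold elsewhere. */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "old but stable", 15);
        return dst[0] == 'o' ? 0 : 1;
    }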
The reason you're getting downvoted is that what you're saying implies a shit-ton of work for the distros -- that's expensive work that someone has to pay for (but nobody wants to, and think of the opportunity cost).
But you're not entirely wrong -- as long as you have API compatibility then it's just a rebuild, right? Well, no, because something always breaks and requires attention. The fact is that in the world of open source the devs/maintainers can't be as disciplined about API compat as you want them to be, and sometimes they have to break backwards compatibility for reasons (security, or just too much tech debt and maint load for obsolete APIs). Because every upstream evolves at a different rate, keeping a distro updated is just hard.
I'm not saying that statically linking things and continuing to run the binaries for decades is a good answer though. I'm merely explaining why I think your comment got downvoted.