I don't think anyone seriously believes AI will disappear without a trace. At the very least, LLMs will remain as the state of the art in high-level language processing (editing, translation, chat interfaces, etc.)
The real problem is the massive over-promises of transforming every industry, replacing most human labor, and eventually reaching super-intelligence based on current models.
I hope we can agree that these are all wholly unattainable, even from a purely technological perspective. However, we are investing as if there were no tomorrow without these outcomes, building massive data centers filled with "GPUs" that, contrary to investor copium, will quickly become obsolete and are increasingly useless for general-purpose data center applications (Blackwell Ultra has NO FP64 hardware, for crying out loud...).
We can agree that the bubble deflating, one way or another, is the best outcome long term. That said, the longer we fuel these delusions, the worse the fallout will be when it does. And what I fear is that one day, a bubble (perhaps this one, perhaps another) will grow so large that it wipes out globalized free-market trade as we know it.
Bubbles bursting aren't bad unless you were overinvested in the bubble. Consider that you'll be wiping your ass with DIMMs once this one bursts; I can always put more memory to good use.
> Bubbles bursting aren't bad unless you were overinvested in the bubble.
That's what I am trying to say: every big technology player, every industry, every government is all in on AI. That means you and I are along for the ride, whether we like it or not.
> Consider that you'll be wiping your ass with DIMMs once this one bursts; I can always put more memory to good use.
Except you can't, because DRAM makers have almost entirely pivoted from making (G)DDR chips to making HBM instead. HBM must be co-integrated at the interposer level and 3D-stacked, resulting in terrible yield. This makes it extremely pricy and impossible to package separately (no DIMMs).
So when I say the world is all in on this, I mean it. With every passing minute, there is less and less we can salvage once this is over; for consumer DRAM, it's already too late.
Games tend to avoid FP64 compute as Nvidia has always gimped it in consumer GPUs, so you are somewhat lucky there. "Lucky" as in, you get to enjoy the broken-ass, glitchy FP32 physics that we've all grown to love so much.
However, if you actually need the much higher precision of FP64 for scientific computing (like most non-AI data center users do) and extremely slow emulation is not an option, consider yourself fucked.
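To make the FP32/FP64 gap concrete, here's a minimal sketch (values chosen purely for illustration, using only the standard library to emulate single-precision storage): FP32 has roughly 7 significant decimal digits, so a small update to a large accumulator can silently vanish, while FP64 represents it exactly.

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (which is FP64) through single-precision
    # storage and back, emulating FP32 arithmetic results.
    return struct.unpack('f', struct.pack('f', x))[0]

acc32 = to_f32(1e8)                    # 1e8 is exactly representable in FP32
print(to_f32(acc32 + 1.0) == acc32)    # True: the +1.0 falls below FP32's
                                       # rounding granularity (ulp = 8 here)
print((1e8 + 1.0) == 1e8)              # False: FP64 keeps the update
```

This lost-update behavior is exactly why long-running accumulations and iterative solvers in scientific codes lean on FP64, and why emulating it in software on FP32-only hardware costs an order of magnitude or more in throughput.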
Well, unfortunately for you, it has a precisely defined and well-understood meaning for all those not covering their eyes and ears in denial. Quoting Merriam-Webster:
> Digital content of low quality that is produced usually in quantity by means of artificial intelligence.
Chosen by the editors as word of the year, by the way.
"Slop" in English means "liquid junk, rubbish, tripe". No need to call for Merriam-Webster's help.
The point is that AI can produce slop (as people do, too), but it's just silly to imply that everything it can produce is slop. That's just lazy, sloppy thinking.
Sure. I'm fully aware that AI can be useful, especially once we move past LLMs.
However, I do think that the majority (or mainstream) use of GenAI today is indeed not very useful, or even harmful. And I do think that something like railroads is more useful by orders of magnitude.
Well, there are big differences in how aggressively things are patched. Arch Linux makes a point to strictly minimize patches and avoid them entirely whenever possible. That's a good thing, because otherwise, nonsense like the XScreenSaver situation ensues, where the original developers aggressively reject distro packages for mutilating their work and/or forcing old and buggy versions on unsuspecting users.
There seems to be a serious issue with Debian (and by extension, the tens of distros based on it) having no respect whatsoever for the developers of the software their OS is based on, which ends up hurting users the most. Not sure why they cannot just be respectful, but I am afraid they are digging Debian's grave, as people are abandoning stale and broken Debian-based distros in droves.
Needless to say, Zawinski was more than a little frustrated with how the Debian maintainers do things.
But honestly, this took 30 seconds to Google and was highly publicized at the time. This whole "I never heard of this, link??" approach to defend a lost argument when the point made is easily verifiable serves to do nothing but detract from discussion. Which, you know, is what this place is for.
I wasn't defending anything; searching for "xscreensaver debian debacle" yielded links that might or might not have been what you were referring to. They did not, however, yield a link to the JWZ site.
How about that time in 2020 the Swiss voted in favor of an immigration restriction proposal that was so fundamentally incompatible with existing EU treaties, the government was forced to bullshit their way out of implementing it entirely, because doing so would have basically ended Switzerland as a nation? This is the kinda thing that really cannot happen in a working system. The only reason the government is not sued into following through is because the courts have conspired with other branches to shut down any attempt at doing so. Real democratic.
Generally speaking, people are stupid. Really REALLY f-cking stupid. Giving the average Joe this kind of unmoderated power in a modern world that almost entirely eludes his understanding is no different from handing him a loaded gun; eventually, someone will get hurt real bad. As someone living in Switzerland, the main reason things are as stable as they are is because:
* Changing anything significant requires a referendum, which is a huge pain in the ass. So politicians just kinda avoid important changes that require referenda, finding other ways to enrich themselves and leaving society stagnating. This means that actually important changes come about very slowly or not at all. Read up on how long it took for women's suffrage to become universal – and the outright threats of internal military action the federal government resorted to...
* Whether the Swiss like it or not, Switzerland is mostly a loud, spoilt economic annex of the EU. It will remain stable for as long as the EU is, and well off for as long as the EU wants to be seen as a peaceful and magnanimous partner in international relations. After all, "bullying" tiny and surrounded Switzerland into agreeing to anything – which the Swiss will cry about at any opportunity you give them – is a bad look.
So yeah, Swiss direct democracy is not all it's made out to be, and really not all that great up close. Admirers remind me a lot of Weaboos, strangely shortsighted in their admiration of a system they know little about.
"Our" politicians? What makes you think I am American? Of course Switzerland is less dysfunctional than the US, that wasn't the point of discussion at all (and an extremely low bar to begin with). It was to show that direct democracy has led to undesirable outcomes.
> Calling people too dumb to handle democracy sounds a tad facist.
Well, it's a good thing I did not do that. I said the average person cannot begin to comprehend every facet of the modern world they live in. Your limited reading comprehension is not making a very compelling counterpoint here.
In my many years of living here, I have never seen a Swiss person treat their own illnesses, design their own trains, hunt their own meat, and do their own plumbing all at once – they delegate tasks they understand themselves to be incompetent in to specialists. Yet somehow, at the ballot box, their otherwise healthy and productive understanding of their own limited competence makes way for a strange form of celebratory group hubris, landing them in a constitutional crisis like a drunk driver in a ditch.
Maybe instead of trying to make decisions they have no hope of truly understanding in an incredibly slow and inefficient process, they can just elect people they trust to be specialists in narrow fields to make these decisions for them for a fixed time span? Wow, exciting, we just fixed a flaw in democracy the Greeks knew about 3000 years ago by resorting to the same god damn system everyone else already uses: representative democracy. Which also sucks in its own, different ways, but at least it is much less likely to set the entire f-cking country on fire overnight. That's kinda neat.
I am going to assume your question is genuine and not rhetorical hyperbole.
Every sovereign nation has legal supremacy over its own territory. Any company doing business in the EU, no matter its origin, must follow EU laws inside the EU. However, these laws do not apply anywhere else (unless specified by some sort of treaty), so they are not forced to comply with them in the US when dealing with US customers.
If they still abide by EU law elsewhere, that is their choice, just like you can choose to abide by Chinese law in the US — so long as it does not conflict with US law. If these rules do conflict with the First Amendment, enforcing them in the US is simply not legal, and it's up to the company to figure out how to resolve this. In the worst case, they will have to give up business in the EU, or in this case, prohibit chat between US and EU customers, segregating their platform.
I mean this (mostly) as a joke, but I kinda wish US businesses would just firewall off the EU at this point (yes, I know this would mean losing some customers/marketshare and thus would never happen).
But the near-daily proposals getting tossed out in their desperate attempt to turn their countries into daycare centers are just annoying to people trying to build things for other adults.
> I kinda wish US businesses would just firewall off the EU at this point (yes, I know this would mean losing some customers/marketshare and thus would never happen).
This would involve them taking about a 30% hit to revenue (or more, depending on the company), so yeah, entirely implausible.
But it's also worth noting that the US constantly does stuff like this. Like, the entire financial services panopticon of tracking is driven almost entirely by the US, and has been around since the 70s. Should the EU then wall off the US?
Personally (as an EU citizen), it would really hurt if they did, but getting completely off the dollar-based financial system would remove a lot of the US's control (and as a bonus/detriment, reveal to the US how much of their vaunted market is propped up by EU money).
Most governments are bad, and these kinds of laws are international, so I'm not sure walling off the EU would make your life much better.
And let's be honest, you should expect the tech industry to end up as regulated as the financial industry over time; the only difference will be how long it takes to get there.
That's a downright insane comparison. The whole problem with generative AI is how extremely unreliable it is. You cannot really trust it with anything because irrespective of its average performance, it has absolutely zero guarantees on its worst-case behavior.
Aviation autopilot systems are the complete opposite. They are arguably the most reliable computer-based systems ever created. They cannot fly a plane alone, but pilots can trust them blindly to do specific, known tasks consistently well in over 99.99999% of cases, and to provide clear diagnostics when they cannot.
If gen AI agents were this consistently good at anything, this discussion would not be happening.
* Gen AI never disagrees with or objects to boss's ideas, even if they are bad or harmful to the company or others. In fact, it always praises them no matter what. Brenda, being a well-intentioned human being, might object to bad or immoral ideas to prevent harm. Since boss's ego is too fragile to accept criticism, he prefers gen AI.
* Boss is usually not qualified, willing, or free to do Brenda's job to the same quality standard as Brenda. This compels him to pay Brenda and treat her with basic decency, which is a nuisance. Gen AI does not demand fair or decent treatment and (at least for now) is cheaper than Brenda. It can work at any time and under conditions Brenda refuses to. So boss prefers gen AI.
* Brenda takes accountability for and pride in her work, making sure it is of high quality and as free of errors as she can manage. This is wasteful: boss only needs output that is good enough to make it someone else's problem, and as fast as possible. This is exactly what gen AI gives him, so boss prefers gen AI.
> Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to.
And that's a problem. When you type out the code yourself, your brain has time to process its implications and reflect on important implementation details, something you lose out on almost entirely when letting an LLM generate it.
Obviously, your high-level intentions and architectural planning are not tied to typing. However, I find that an entire class of nasty implementation bugs (memory and lifetime management, initialization, off-by-one errors, overflows, null handling, etc.) are easiest to spot and avoid right as you type them out. As a human capable of nonlinear cognition, I can catch many of these mid-typing and fix them immediately, saving a significant amount of time compared to if I did not. It doesn't help that LLMs are highly prone to generate these exact bugs, and no amount of agentic duct tape will make debugging these issues worthwhile.
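As a concrete (and entirely hypothetical) illustration of that bug class, here is the kind of off-by-one error that is trivial to catch mid-typing but tedious to hunt down after generation:

```python
def window_sums(data, k):
    """Sum of every length-k sliding window over data."""
    # Classic off-by-one: writing range(len(data) - k) here would stop
    # one window early and silently drop the last window. The correct
    # upper bound is len(data) - k + 1.
    return [sum(data[i:i + k]) for i in range(len(data) - k + 1)]

print(window_sums([1, 2, 3, 4], 2))  # [3, 5, 7] — three windows, not two
```

The buggy variant passes most casual tests (the results it does return are all correct), which is exactly why this class of error survives a cursory review of generated code.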
The only two ways I see LLM code generation bring any value to you is if:
* Much of what you write is straight-up boilerplate. In this case, unless you are forced by your project or language to do this, you should stop. You are actively making the world a worse place.
* You simply want to complete your task and do not care about who else has to review, debug, or extend your code, and the massive costs in capital and human life quality your shitty code will incur downstream of you. In this case, you should also stop, as you are actively making the world a worse place.
So what about all these huge codebases you are expected to understand but you have not written? You can definitely understand code without writing it yourself.
> The only two ways I see LLM code generation bring any value to you is if
That is just an opinion.
I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way.
> So what about all these huge codebases you are expected to understand but you have not written?
You do not need to fully understand large codebases to use them; this is what APIs are for. If you are adventurous, you might hunt a bug in some part of a large codebase, which usually leads you from the manifestation to the source of the bug on a fairly narrow path. None of this requires "understanding all these huge codebases". Your statement implies a significant lack of experience on your part, which makes your use of LLMs for code generation a bit alarming, to be honest.
The only people expected to truly understand huge codebases are those who maintain them. And that is exactly why AI PRs are so insulting: you are asking a maintainer to vet code you did not properly vet yourself. Because no, you do not understand the generated code as well as if you wrote it yourself. By PRing code you have a subpar understanding of, you come across as entitled and disrespectful, even with the best of intentions.
> That is just an opinion.
As opposed to yours? If you don't want to engage meaningfully with a comment, then there is no need to reply.
> I have projects I wrote with some help from the LLMs, and I understand ALL parts of it. In fact, it is written the way it is because I wanted it to be that way.
See, I could hit you with "That is just an opinion" here, especially as your statement is entirely anecdotal. But I won't, because that would be lame and cowardly.
When you say "because I wanted it to be that way", what exactly does that mean? You told an extremely complex, probabilistic, and uninterpretable automaton what you want to write, and it wrote it not approximately, but exactly as you wanted it? I don't think this is possible from a mathematical point of view.
You further insist that you "understand ALL parts" of the output. This actually is possible, but seems way too time-inefficient to be plausible. It is very hard to exhaustively analyze all possible failure modes of code, whether you wrote it yourself or not. There is a reason why certifying safety-critical embedded code is hell, and why investigating isolated autopilot malfunctions in aircraft takes experts years. That is before we consider that those systems are carefully designed to be highly predictable, unlike an LLM.
Unfortunately, your reasoning has an enormous hole in it. A huge part of a product's quality is how it fares over time, i.e. how many years it lasts and how much it costs to maintain. Sadly, this takes either time or a realistic assessment to determine, neither of which can be part of a market bubble.
The $10 shirt becomes a much shittier proposition once, in addition to its worse looks, fit, and comfort, you factor in its significantly lower durability and lifespan. That's why the $100 shirt still exists, after all. Never mind that the example is a bad one to begin with, because low-price commodities like T-shirts are never worth fixing when they break, but code with a paid maintainer clearly is.
In a market bubble like the one we find ourselves in, longevity is simply not relevant, because the financial opportunity lies precisely in getting off the train right before it crashes. For investors and managers, that is. Developers may be allowed to change cars, but they are stuck on the train.
It's sad how some of the doomed are so desperate to avoid their fate that they fall prey to promises they know to be bullshit. The argument for Wish and TEMU products is exactly the same, yet we can all see it for what it is in those cases: a particularly short-lived lie.
Just addressing the comparison: a T-shirt can be replaced for the cost of a new T-shirt. Replacing a software product costs not only the new one (being AI, very cheap) but also the cost of integrating it with processes and people — and that can kill you.