I would argue performance-noexcept-move-constructor should always be on. Move constructors should almost always be noexcept, since they typically just move pointers around and don't normally allocate.
The thing that drives me crazy is that it isn't even clear whether AI is providing economic value yet (am I missing something there?). Trillions of dollars are being spent right now on a speculative technology that isn't benefitting anyone.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
Are people still in denial about the daily usage of AI?
It's interesting people from the old technological sphere viciously revolt against the emerging new thing.
Actually I think this is the clearest indication of a new technology emerging.
If people are viciously attacking some new technology, you can be guaranteed that it's important, because what's actually happening is that the new thing is a direct threat to the people who are against it.
>Because leaded gas is the same thing as people using a new technology like AI.
It's not the same, but it's not necessarily any good. I've observed the following, after ~2 weeks of free ChatGPT Plus access (as an artist who is trying to give the technology a chance, despite the vociferous (not vicious, geez) objections of many of my peers):
It's addictive (possibly on purpose). AI systems frequently return imperfect outputs. Users are trained to repeat until the desired output comes. Obviously, this can be abused by sophisticated-enough systems, pushing outputs that are JUST outside the user's desire so that they have to continue using it. This could conceivably happen independent of obvious incentives like ads or pay credits; even free systems are incentivized to use this dark pattern, as it keeps the user coming back, building a habit that can be monetized later.
Which leads into: it's gambling. It's a crapshoot whether the output will be what the user desires. As a result, every prompt is like a slot pull, exacerbated by the wait to generate an answer. (This is also why the generation is shown being typed/developed; the information in those preliminary outputs is not high-enough fidelity or presented in a readable way; instead, they're bits of visual stimuli meant to inure your reward system to the task, similar to how Robinhood's stock prices don't simply change second-to-second, but "roll" to them with a stimulating animation).
That's just a small subset of the possible effects on a user over time. Far from freeing users to create, my experience has been one of having to fight ChatGPT and its Images model, as well as the undesirable behaviors it seems to be trying to draw out of me.
I don't think there is anything that can be said to actually change people's minds here. Because people that are against it aren't interested in actually engaging with this new technology.
People that are interested in it and are using it on a daily basis see value in it. There are now hundreds of millions of active users that find a lot of value in using it.
The other factor here is the speed of adoption, which I think has seriously taken a lot of people by surprise. Especially those attempting this wholesale boycott campaign against AI. For that reason, people boycotting this new technology are imo deluded.
If they were advocating for open-source models instead, it would be far more reasonable.
>People that are interested in it and are using it on a daily basis see value in it.
I'm one of them. I've got plenty of image gens to prove it (and I'd have more if OpenAI hadn't killed Dall-E labs with almost no heads-up). I'm telling you that I still think contemporary implementations of the technology are just this side of vile, and that I hope that the industry collapses soon, so that grassroots start-ups with actual moral scruples, and a desire to enable rather than control their customers, have the chance to emerge and compete. Also: for said customers, such a collapse wouldn't even be THAT different from the way in which tech companies currently snatch away tools on a whim.
> Because people that are against it aren't interested in actually engaging with this new technology.
How do you know that? Are you just assuming anyone who has something negative to say just hasn't used it?
In my case it's absolutely not true. I've used it near daily for coding tasks and a handful of times for other random writing or research tasks. In a few cases I've actively encouraged a few others to try it.
From direct experience I can say it's definitely not ready for prime time. And I like the way most companies are trying to deploy it even less.
There is something there with LLMs, but the way they're being productized and commercialized does not seem healthy. I would rather see more research, slow testing and trials, and a clear understanding of the potential negatives for society before we simply dump it into the public sphere.
The only mind I see not willing to be changed is yours when you characterize any push back against AI as simply ignorant haters. You are clearly wrong about that.
>There is something there with LLMs, but the way they're being productized and commercialized does not seem healthy. I would rather see more research, slow testing and trials, and a clear understanding of the potential negatives for society before we simply dump it into the public sphere.
There's something incredibly harmful about this kind of mentality.
It's a weird kind of paternalizing, diminutive, and degrading view of common people.
> NFTs are still being used. Along with a lot of the crypto ecosystem. In fact we're increasingly finding legitimate use cases for it.
Look at this. I think people need to realize that it's the same kind of folks migrating from gold rush to gold rush. Whether it's complete bullshit or somewhat useful doesn't really matter to them.
In fact I would make a converse statement to yours - you can be certain that a product is grift, if the slightest criticism or skepticism of it is seen as a "vicious attack" and shouted down.
Did you even click the link? It's a rant I would get banned for repeating here. Actually even the title here says "nuclear".
So yes. Vicious.
Your problem is actually with my point, which you didn't really address; instead you resort to petty remarks that try to discredit what's being said.
Yep. I hear that "vicious attack" phrase from plenty of people with narcissistic personality disorders in the tech industry in an attempt to try and shift the narrative. It's sick, really.
I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company. Ironically it's the very MIT report that "found AI to be a flop" (remember the "MIT study finds almost every AI initiative fails"), that also found that virtually every single worker is using AI (just not company AI, hence the flop part).
At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gear head grandpa that manual transmissions aren't relevant anymore.
Firstly, it's not really good enough to say "our employees use it" and therefore it's providing us significant value as a business. It's also not good enough to say "our programmers now write 10x the number of lines of code and therefore that's providing us value" (lines of code have never been a good indicator of output). Significant value comes from new innovations.
Secondly, the scale of investment in AI isn't so that people can use it to generate a powerpoint or a one off python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason why you would cover a huge percent of the country in datacenters.
The proof that significant value has been provided would be value being passed on to the consumer. For example if AI replaces lawyers you would expect a drop in the cost of legal fees (despite the harm that it also causes to people losing their jobs). Nothing like that has happened yet.
When I can replace a CAD license that costs $250/usr/mo with an applet written by gemini in an hour, that's a hard tangible gain.
Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.
Someone correct me if I'm mistaken but don't CAD programs rely on a geometric modeling kernel? From what I understand this part is incredibly hard to get right and the best implementations are proprietary. No LLM is going to be able to get to that level anytime soon.
Sounds like GP is just in need of a G-Code to DXF converter when they mention "fringe stuff, cnc machine files from the 80's/90's" as an answer to a sibling comment, though.
There are great FOSS CAD tools available nowadays (LibreCAD, FreeCAD, OpenSCAD etc.), especially for people who only need 2% of a feature set. But then again, I doubt that GP is really in need of a CAD program, or even writing one with the help of Gemini.
I agree, the applet which Google plagiarized through its Gemini tool saves you money. Why keep the middleman, though? At this point, just pirate a copy.
I don't think it's plagiarized, nor would I pirate a copy. The workflow through the Gemini-made app is way better (it's customized exactly for our inputs) and totally different from how the CAD program did it. So I wouldn't pirate a copy, not only because our business runs above board, but also because the CAD version is actually worse for our use. This is also pretty fringe stuff, cnc machine files from the 80's/90's.
Part of the magic of LLMs is getting the exact bespoke tools you need, tailored specifically to your individual needs.
Of course it's plagiarized. Perhaps not directly from the CAD software in question, but when you use an LLM, you are by definition plagiarizing by way of the data it was trained on.
You’re attacking one or two examples mentioned in their comment, when we could step back and see that in reality you’re pushing against the general scientific consensus. Which you’re free to do, but I suspect an ideological motivation behind it.
To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.
All arguments that are not incorrect and that sound totally reasonable in the moment, but in 10 years everyone is using typewriters and there are known efficiency gains for doing so.
I'm not saying LLMs are useless. But the value they have provided so far does not justify covering the country in datacenters and the scale of investment overall (not even close!).
The only justification for that would be "superintelligence," but we don't know if this is even the right way to achieve it.
(Also I suspect the only reason why they are as cheap as they are is because of all the insane amount of money they've been given. They're going to have to increase their prices.)
Quite a large bubble. The burden of proof for demonstrating the enormous economic value LLMs are providing really is yours. Sure, there are anecdotal benefits to using LLMs, but we haven't seen any evidence that in aggregate businesses across America are benefitting. Other than AI companies, the stock market isn't even doing well. You would think that with massive expected efficiency gains companies would be doing better across the board. Are businesses that use AI generating significantly higher profits? I haven't seen any evidence of it yet (and I'm really looking for it, and would love to see it!). It's pure speculation so far.
Careful not to assume I’m more bullish than I am. I said there’s value, I didn’t say there’s enormous value equal to the investment bubble. I see this as similar to the dot com boom. Websites were and are valuable things, even if people got too excited in 2002 about it.
The scale and stakes of the investment are much, much higher now than in the dot com era. Likewise, don't assume I'm more bearish than I am. But enormous investment requires more benefit than has been realized.
Uh, I must have missed the “consensus” here, especially when many studies are showing a productivity decrease from AI use. I think you’ve just conjured the idea of this “scientific consensus” out of thin air to deflect criticism.
It's been good at enabling the clueless to get to the performance of a junior developer, and at saving a few % of the time for the mid- to senior-level developer (at best). Also amazing at automating stuff for scammers...
The cost is just not worth the benefit. If it was just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because the government wants to hide a recession it created itself, since on paper it makes the line go up.
> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.
Well then congratulations on being in the 5%. That doesn't really change the point.
If it's so great and such a benefit: why scream it from the rooftops? Why force it? Why this crazy rhetoric labeling others as ideological? This makes no sense. If you found gold, just use it and get ahead of the curve. For some reason that never happens.
I kinda agree. We've been told for years it's a "massive productivity multiplier", and not just an iterative improvement.
So you expect to see the results of that. The AAA games being released faster, of higher quality, and at a lower cost to develop. You expect Microsoft (one of the major investors and proponents) to be releasing higher quality updates. You expect new AI-developed competitors for entrenched high-value software products.
If all that was true, it doesn't matter what people do or don't argue on the internet, it doesn't matter if people whine, you don't need to proselytize LLMs on the internet, in that world people not using is just an advantage to your own relative productivity in the market.
LLMs are indeed currently an iterative improvement. I've found a few good use-cases for them. They're not nothing.
But at the moment, they are nowhere near the "massive productivity multiplier" they're advertised to be. Just as adding more lanes doesn't make traffic any better, perhaps they never will.
Or perhaps all the promises will come true -- and that, of course, is what is actually meant when the productivity gains are screamed from the rooftops. It was the same with computers, and it was the same with the internet: the proposed massive changes were going to come at some vague point in the future. Plenty of people saw those changes coming even decades in advance; reason from first principles and extrapolate the results of x scale and y investment and you couldn't not see where it was headed, at least generally.
The future potential is being sold in much the same way here. That'd be all fine and good except for the fact that the capex required to bring this potential future into being compared to any conceivable revenue model is so completely absurd that, even putting aside the disruptive-at-best nature of the technology, making up for the literal trillions of dollars of investment will have to twist our economic model to the point of breaking in order to make the math math. Add in the fact that this technology is tailor-made to not just disrupt or transform our jobs but to replace workers should this future potential arrive, and suddenly it looks nothing like computers in the 70s or networks in the 80s. It's no wonder not everyone is excited about it -- the dynamic is, at its very core, adversarial; its very existence states the quiet part of class warfare out loud.
Which brings us to so many people being forced to use it. I really, really hate this. Just as I don't want to be told which editor/IDE to use, I don't want to be told how to program. I deeply care about and understand my workflow quite well, thank you very much -- I've been diligently working on refining it for a good while now. And to state the obvious: if it were as good as they say it is, I'd be using it the way they want me to. I don't, because they just aren't that good (thankfully I have a choice in this matter -- for now). I also just don't like using them while programming, as I find them noisy and oddly extraverting, which tires me out. They are antithetical to flow. No one ever got into a flow state while pair programming, or managing a junior developer, and I doubt anyone ever got into a flow state while chatting with an LLM. It's just the wrong interface. The "better autocomplete" model is a better interface, but in practice I just haven't seen it do better than a good LSP or my own brain. At best it saves me a few key strokes, which I'd hardly call revolutionary. Again, not nothing, but far from the promise. We're still a very long way off.
To get there, LLM developers need cash, and they need data. Companies are forcing LLMs into every nook and cranny of so many employees' workflows so that they can provide training data, and bring that potential future one step closer to reality. The more we use LLMs, the more likely we are to being replaced. Simple as that.
I for one would welcome our new robot overlords if I had any faith that our society could navigate this disruption with grace and humanity. I'd be ecstatic and totally bullish on the tech if I felt it were ushering in a Star Trek-like future. But, ha, nope -- any faith I had in that sort of response died with how so many handled Covid, and especially when Trump was elected for a second time. These two events destroyed my estimation of humanity as a cooperative organism.
No, I now expect humanity at large -- or at least the USA -- to look at the stupidest, most short-sighted, meanest option possible and enthusiastically say "let's do that!" Which, coincidentally, is another way of describing what is currently happening with LLMs: the act of forcing mediocre tools down our throats while cynically exploiting our "language = intelligence" psychological blind-spot, raising utilities prices (how is a company's electric bill my problem again?), killing personal computing, accelerating climate change at the worst possible time, all in the name of destroying both my vocation and avocation.
I have never seen a counter-argument to this. Why is it being forced on the world? Let's hear some execs from these companies answer that. My bet is on silence every time. Microsoft is forcing AI chat applications into the OS and preventing people from removing it.
You could easily have a side application that people could enable by choice, yet it's not happening. We have to roll with this new technology, knowing that it's going to make the world a worse place to live in when we are not able to choose how and when we get our information.
It's not just about feeling threatened. It's also about feeling like I am going to get cut off from the method I want to use to find information. I don't want a chat bot to do it for me; I want to find and discern information for myself.
Are you a boss or a worker? That's the real divide, for the most part. Bosses love AI - when your job is just sending emails and attending remote meetings, letting LLM write emails for you and summarize meetings is a godsend. Now you can go from doing 4 hours of work a week to 0 hours! And they let you fantasize about finally killing off those annoying workers and replace them with robots that never stop working and never say no.
Workers hate AI, not just because the output is middling slop forced on them from the top but because the message from the top is clear - the goal is mass unemployment and concentration of wealth by the elite unseen by humanity since the year 1789 in France.
Same here, I just limit my use of genAI to writing functions (and general brainstorming).
I only use the standard "chat" web interface, no agents.
I still glue everything else together myself. LLMs enhance my experience tremendously and I still know what's going on in the code.
I think the move to agents is where people are becoming disconnected from what they're creating and then that becomes the source of all this controversy.
Sure, but that honestly isn't the part into which trillions of imaginary dollars are being pumped. Science AI, in the best of cases, is getting the scraps, I would say.
Yeah, comparing this with research investments into fusion power, I expect fusion power to yield far more benefit (although I could be wrong), and sooner.
You talk to an AI that goes incredibly slow and tries to get you to add extras to your order. I would say it has made the experience more annoying for me personally. Not a huge issue in the grand scheme of things but just another small step in the direction of making things worse. Although you could break the whole thing by ordering 18000 waters which is funny.
AI Darwin Awards 2025 Nominee: Taco Bell Corporation for deploying voice AI ordering systems at 500+ drive-throughs and discovering that artificial intelligence meets its match at “extra sauce, no cilantro, and make it weird."
Andrej talked about this in a podcast with dwarkesh: the same is true for the internet. You will not find a massive spike when LLMs were released. It becomes embedded in the economy and you’ll see a gradual rise. Further, the kind of impact that the internet had took decades, the same will be true for LLMs.
You could argue that if I started marketing dog shit too though. The trick is only applying your argument to the things that will go on to be good. No one’s quite there yet. Probably just around the corner though.
It's the Red Queen hypothesis in action - AI is a relative and compounding capability with influence across broad sectors; the cost of losing out for the parties involved is severely more than the cost of over-investing. It's collective rational panic.
It’s definitely providing some value but it’s incredibly overvalued. Much like the dot com bust didn’t mean that online websites were bad or useless technology, only that people over invested into a bubble.
Are you waiting for things to get cheaper? Have you been around the last 20 years or so? Nothing gets cheaper for consumers in a capitalist society.
I remember in Canada, in 2001 right when americans were at war with the entire middle east and gas prices for the first time went over a dollar a litre. People kept saying that it was understandable that it affected gas prices because the supply chain got more expensive. It never went below a dollar since. Why would it? You got people to accept a higher price, you're just gonna walk that back when problems go away? Or would you maybe take the difference as profits? Since then it seems the industry has learned to have its supply exclusively in war zones, we're at 1.70$ now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.
There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.
>You got people to accept a higher price, you're just gonna walk that back when problems go away?
The thing about capitalism that is seemingly never taught, but quickly learned (when you join even the lowest rung of the capitalist class, i.e. even having an etsy shop), is that competition lowers prices and kills greed, while being a tool of greed itself.
The conspiracy to get around this cognitive dissonance is "price fixing", but in order to price fix you cannot be greedy, because if you are greedy and price fix, your greed will drive you to undercut everyone else in the agreement. So price fixing never really works, except in those like 3 cases out of the hundreds of billions of products sold daily, which people have repeated incessantly for 20 years now.
Money flows to the one with the best price, not the highest price. The best price is what makes people rich. When the best price is out of reach though, people will drum up conspiracy about it, which I guess should be expected.
On average yes, that’s why it’s a bad example. There are many excellent examples of things that can be used to show the massive cost of living issue, wage stagnation, etc. it’s just petrol isn’t a great one.
Everyone. That’s what “the price is lower” means. Don’t paint me as someone who doesn’t understand wage stagnation or cost of living crisis, I fully understand and am on board with those issues. My point is simply that petrol is a bad example the way OP used it.
Actually things have gotten massively cheaper under capitalism. Unfortunately at the same time, governments have been inflating the currency year over year and as the decline of prices slows down as innovation matures, inflation finally catches up and starts raising prices.
Reminder: Prices regularly drop in capitalist economies. Food used to be 25% of household spending. Clothing was also pretty high. More recently, electronics have dropped dramatically. TVs used to be big ticket items. I have unlimited cell data for $30 a month. My dad bought his first computer for around $3000 in 1982 dollars.
Prices for LLM tokens have also dramatically dropped. Anyone spending more is either using it a ton more or (more likely) using a much more capable model.
These have all fallen massively in price, too. Many billions more afford education than was possible before. Economies of scale have brought manufacturing costs for housing down, and now people live in larger, better structures than ever before.
Then you have the US, which artificially constrains the supply of new doctors, makes it illegal to open new hospitals without explicit government approval, massively subsidizes loans for education, causing waste, inefficiency, and skyrocketing prices in one specific market…
Zero incorporation of externalities. Food is less nutritious and raises healthcare costs. Clothing is less durable and has to be re-bought more often, and also sheds microplastics, which raises healthcare costs. Decent TVs are still big-ticket items, and you have to buy a separate sound system to meet the same sonic fidelity as old CRT TVs, and you HAVE to pay for internet (if not for content, often just to set up the device), AND everything you do on the device is sent to the manufacturer to sell (this is the actual subsidy driving down prices), which contributes to tech/social media engagement-driven, addiction-oriented, psychology-destroying panopticon, which... raises healthcare costs.
>Prices for LLM tokens have also dramatically dropped.
buzzer sound is an incredibly obnoxious way to start a comment and all you did after that is present yourself with exactly as much dignity as you deserve in return.
"Reminder" is just as patronizing and probably the cue I was responding to. I don't regret it, because on top of meeting his "obnoxious" framing with my own, the substance of my reply was also more correct. Your busy-body response was even less necessary and I hope that my refusal to take a conciliatory tone vexes you further. Have a nice day.
>When you see a device like this does the term 'sonic fidelity' come to mind?
Your straw man is funny, because yes, actually. Certainly when it was new. Vintage speakers are sought-after; well-maintained, and driven by modern sound processing, they sound great. Let alone that I was personally speaking of the types of sets that flat-panel TVs supplanted, the late 90s/early 2000s CRTs.
You are correct that the AI industry has produced no value for the economy, but the speculation on AI is the only thing keeping the U.S. economy from dropping into an economic cataclysm. The US economy has been dependent on the idea of infinite growth through innovation since 2008, and the tech industry is all out of innovation. So the only thing they can do is keep building datacenters and pray that an AGI somehow wakes up when they hit the magic number of GPUs. Then the elites can finally kill off all the proles like they've been itching to since the Communist Manifesto was first written.
LSM-trees do need a WAL. The entire idea of LSM-trees is that writes are buffered in memory and written out all at once. But a particular write doesn't wait for the memtable to be flushed. For that reason you still need a WAL (there is committed state in memory).
Those implementations use a WAL, but it seems to be only as a performance optimization to decrease the size of the in-memory index; is there a theoretical reason one is needed? It looks equivalent to a WAL-less write path combined with an almost immediate compaction. If you remove the compaction and don’t delete the WAL it seems like you can eliminate that write amplification (at least temporarily).
The original purpose of an LSM-tree is to take I/O off the critical path of a write (there are other reasons to use them though, for example reducing space amplification).
I would argue that by definition an LSM-tree buffers committed writes in memory, and that means you need a WAL for recovery.
If you are going to immediately flush the memtable then IO is on the critical path. And if you have fine grained updates you'll end up with lots of small files, which seems like a bad thing. It could be reasonable if you only receive batch updates.
Any durable commit is going to have I/O in the critical path unless you're Paxos/Raft replicating in-memory across failure domains (which we're not discussing here), but I think you mean it takes random I/O out of the critical path. You can get that without a WAL, though; just have the LSM keep appending out of order to a growing file and keep the in-memory index. That's the exact same I/O pattern that the WAL would generate, there just isn't an immediate compaction. The in-memory index will stay fragmented for longer, though (which is why I call the WAL a performance optimization above). I suppose the WAL-less design lets you defer compaction for longer, which might be an advantage if you have lots of disk and lots of RAM, but don't want two-thirds of your throughput (read + write) taken away at critical moments.
> I would argue that by definition an LSM-tree buffers committed writes in memory, and that means you need a WAL for recovery.
This is true, but note that the WAL does not need to be in the database. You can use an event stream like Kafka and replay blocks of events in the event of a failure. ClickHouse has a feature to deduplicate blocks it has seen before, even if they land on a separate server in a cluster. You still need to store checksums of the previously seen blocks, which is what ClickHouse does. It does put the onus on users to regenerate blocks accurately but the overhead is far lower.