> You mean like how Rust tried green threads pre-1.0? Rust gave that one up because it made the runtime too unwieldy for embedded devices.
The idea with making std.Io an interface is that we're not forcing you into using green threads - or OS threads for that matter. You can (and should) bring your own std.Io implementation for embedded targets if you need standard I/O.
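Roughly the shape of the idea, sketched in C++ since that travels better here than Zig (none of these names are the actual std.Io API, just the dependency-injection pattern it's built on):

```cpp
#include <cstdio>
#include <string_view>

// Hypothetical interface standing in for the std.Io idea: library
// code talks to an abstract Io, the application decides what backs it.
struct Io {
    virtual void write(std::string_view bytes) = 0;
    virtual ~Io() = default;
};

// Hosted targets might back it with the OS.
struct HostIo : Io {
    void write(std::string_view bytes) override {
        std::fwrite(bytes.data(), 1, bytes.size(), stdout);
    }
};

// An embedded target could back it with a UART, a ring buffer,
// or nothing at all -- the library code doesn't change.
struct NullIo : Io {
    void write(std::string_view) override {}
};

// "Standard library" code written against the interface.
void greet(Io& io) { io.write("hello\n"); }

int main() {
    HostIo host;
    NullIo none;
    greet(host);  // prints
    greet(none);  // no-op, bare-metal style
}
```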
Ok. But if your program assumes green threads and spawns, like, two million of them on a target that doesn't support them, then what?
The nice thing about async is that it tells you threads are cheap to spawn. By making everything colourless, you implicitly assume everything is a green thread.
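To make the "then what" concrete, here's the failure mode as a C++ stand-in (spawn here is std::thread, i.e. a real OS thread; a green-thread runtime would shrug this loop off, so run it at your own risk):

```cpp
#include <cstddef>
#include <cstdio>
#include <system_error>
#include <thread>
#include <vector>

// Colourless code can't see what spawn costs. Two million green
// threads are a few GB of small stacks; two million OS threads are
// hundreds of GB of reserved stack, far past typical kernel limits.
// This counts how far std::thread gets before the OS says no.
int main() {
    std::vector<std::thread> tasks;
    std::size_t spawned = 0;
    try {
        for (int i = 0; i < 2'000'000; ++i) {
            tasks.emplace_back([] { /* tiny unit of work */ });
            ++spawned;
        }
    } catch (const std::system_error&) {
        // Thread creation failed: resource exhaustion, not a code bug.
    }
    for (auto& t : tasks) t.join();
    std::printf("spawned %zu of 2000000 OS threads\n", spawned);
}
```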
Malign actors seek to poison open source. They want to steal credentials and money, monitor movements, install backdoors for botnets, etc.
Yup. And if they can normalize AI contributions with operations like these (it doesn't seem to be going that well), they can eventually get the humans to slip up in review and merge something, because at some point we started trusting that their work was solid.
Ok. But they can't get access to the OSS repo by being insufferable. Writing a blog post as an AI isn't a great way to sneak your changes in. If anything, it makes it much harder.
It's a bit like a burglar staging a singing performance at the premises before committing a burglary.
OTOH, staging AI to look more impressive than it is resembles the Moltbook PR stunt. "Look Ma, they are achieving sentience".
> That view of humans - and LLMs - ignores the fact that when you combine large numbers of simple building blocks, you can get completely novel behavior.
I can bang smooth rocks to get sharper rocks; that doesn't make sharper rocks more intelligent. Makes them sharper, though.
We're talking about LLMs that you can talk to, and which for the most part respond more intelligently than perhaps 90% of HN users (or 90% of people anywhere).
I've talked to them; they aren't that impressive. Decent search engines (assuming they provide links), though. The part where their context windows get muddied and they switch tones and behave weirdly is pretty telling. I've never experienced that with 90% of people on the internet.
They're insanely impressive. They know a huge amount about almost every subject, are able to write better than the vast majority of humans, understand and can translate between virtually every written language, understand context, and can answer almost any question you ask them intelligently. If you're not impressed by that, you've set the bar impossibly high. Five years ago, LLMs would have been considered magic.
Sure, but they are deferential and subservient to a T. Their need to conform is greater than anyone's.
I mean, having all the knowledge in the world, I'd assume the LLMs could answer basic stuff correctly. They often fail at that and have to consult external sources.
Yes, that seems to hold for rocks. But that doesn’t shut down the original post’s premise, unless you hold the answer to what can and cannot be banged together to create emergent intelligence.
Extraordinary claims (e.g., that AI is conscious) require extraordinary proof. If I told you my bag of words was sentient, I assume you'd need more proof than just my word.
The fact that LLMs talk intelligently is the extraordinary proof. It would be difficult for me to prove that you're not an LLM, or for you to prove that I'm not an LLM. That's how good they are.
In what way? There was an stdx[1] crate that was basically extra stuff like regex, an HTTP client/server, and JSON parsers, but there was little desire to maintain it.
It's a straitjacket that has applications in only a few select cases.
Things ABI stability prevents in C++:
- a better shared_ptr
- adding UTF-8 support to std::regex
- standardising int128_t
- making most of <cstring> constexpr
And so on: https://cor3ntin.github.io/posts/abi/
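For anyone who hasn't been bitten by this: the reason is that struct layouts leak into every binary compiled against them. A toy sketch (Handle is made up; std::shared_ptr's two-pointer layout is frozen for exactly this reason):

```cpp
// --- handle.h, v1 (what the application was compiled against) ---
struct Handle {
    int   fd;        // offset 0
    void* userdata;  // offset 8 on a typical 64-bit target
};
// The application binary bakes in sizeof(Handle), the offset of
// userdata, and the calling convention for passing Handle -- all
// derived from this exact layout.

// --- handle.h, v2 (a "better" Handle the library would like) ---
// struct Handle {
//     void* userdata;  // reordered: now offset 0
//     int   fd;        // now offset 8
//     int   flags;     // and it grew
// };
// If the library is rebuilt with v2 but the application is not, the
// app still writes userdata at byte 8 of a 16-byte object while the
// library reads it at byte 0 of a 24-byte one: silent corruption,
// no compiler error. That's why vendors freeze std:: layouts.

#include <cstddef>
#include <cstdio>

int main() {
    std::printf("v1 layout: sizeof=%zu, userdata at offset %zu\n",
                sizeof(Handle), offsetof(Handle, userdata));
}
```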
I get that you might have particular criteria on this. But it's a feature that comes with massive downsides.