Hacker News | new | past | comments | ask | show | jobs | submit | Ygg2's comments

Yeah, but ABI stability isn't just magic dust you sprinkle on your language/compiler to make it more stable.

It's a straitjacket that has application in a few select cases.

Things ABI prevents in C++:

- better shared_ptr

- adding UTF8 to regex

- int128_t standardisation

- making most of <cstring> constexpr

And so on: https://cor3ntin.github.io/posts/abi/
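To make the list above concrete, here's a minimal sketch (in Rust, with hypothetical `WidgetV1`/`WidgetV2` names) of the core problem: once callers compile against a type's layout, adding even one field changes sizes and offsets, so old binaries read the wrong bytes. This is exactly why a "better shared_ptr" is off the table under a frozen ABI.

```rust
// Hypothetical example: why ABI stability freezes data layouts.
// If a library ships WidgetV1 and callers compile against it,
// shipping WidgetV2 with one extra field shifts sizes/offsets,
// so already-compiled callers would misread the struct.
#[repr(C)]
struct WidgetV1 {
    refcount: u64,
    data: *const u8,
}

#[repr(C)]
struct WidgetV2 {
    refcount: u64,
    weak_count: u64, // the kind of fix a "better shared_ptr" needs
    data: *const u8,
}

fn main() {
    // Same logical type, different layout: an ABI break.
    assert_ne!(
        std::mem::size_of::<WidgetV1>(),
        std::mem::size_of::<WidgetV2>()
    );
    println!(
        "v1: {} bytes, v2: {} bytes",
        std::mem::size_of::<WidgetV1>(),
        std::mem::size_of::<WidgetV2>()
    );
}
```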

I get that you might have particular criteria here. But it's a feature that comes with massive downsides.


Tying yourself in knots around ABI usually isn't worth it. Pick at most two: performance, ABI stability, or adaptability.

And you can still have it internally, if your deps ship sources, or if you compile artifacts that only allow a single Rust version (additional rules may apply).

There is work on a Rust ABI (crabi), but there isn't a huge push for it.


Any backwards compatible language will accumulate hindsight errors. It's practically inevitable.

> Instead of debating for years (like other languages), zig just tries things out.

So did Rust pre-1.0.

Stability guarantees are a pain in the neck. You can't just break other people's code willy nilly.

> This makes zig unique. It's fun to use and it stays fresh.

You mean like how Rust tried green threads pre-1.0? Rust gave that one up because it made the runtime too unwieldy for embedded devices.


Just on this point:

> You mean like how Rust tried green threads pre-1.0? Rust gave that one up because it made the runtime too unwieldy for embedded devices.

The idea with making std.Io an interface is that we're not forcing you into using green threads - or OS threads for that matter. You can (and should) bring your own std.Io implementation for embedded targets if you need standard I/O.
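The shape of that idea can be sketched in Rust (the trait and type names here are hypothetical, not Zig's actual std.Io API): library code is written against an I/O interface, and each target supplies its own implementation, so embedded code isn't forced into green threads.

```rust
// Sketch of an "Io as an interface" design (hypothetical names):
// the app is generic over how tasks run, and each target brings
// its own implementation.
trait Io {
    fn spawn(&mut self, task: Box<dyn FnOnce() + Send>);
}

// Embedded-friendly: no runtime at all, just run the task inline.
struct Blocking;
impl Io for Blocking {
    fn spawn(&mut self, task: Box<dyn FnOnce() + Send>) {
        task();
    }
}

// Desktop-friendly: real OS threads (joined here for brevity).
struct OsThreads;
impl Io for OsThreads {
    fn spawn(&mut self, task: Box<dyn FnOnce() + Send>) {
        std::thread::spawn(task).join().unwrap();
    }
}

// Application code never names a concrete runtime.
fn app(io: &mut dyn Io) {
    io.spawn(Box::new(|| println!("hello from the app")));
}

fn main() {
    app(&mut Blocking);
    app(&mut OsThreads);
}
```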


Ok. But if your program assumes green threads and spawns, say, two million of them on a target that doesn't support them, then what?

The nice thing about async is that it tells you threads are cheap to spawn. By making everything colourless, you implicitly assume everything is a green thread.
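The cost asymmetry behind that assumption can be shown with a toy scheduler: a "green task" here is just a boxed closure, a few dozen bytes, so two million are fine, whereas mapping the same program 1:1 onto OS threads would reserve a multi-MiB stack per thread and exhaust memory. (This is a hand-rolled illustration, not any real runtime's API.)

```rust
// A minimal hand-rolled "green task" queue: each task is a boxed
// closure, so two million of them cost tens of megabytes total.
// Two million OS threads, each with a default multi-MiB stack,
// would not fit on most targets.
fn main() {
    let total = 2_000_000;
    let mut tasks: Vec<Box<dyn FnMut() -> bool>> = Vec::with_capacity(total);
    for i in 0..total {
        let mut done = false;
        tasks.push(Box::new(move || {
            // pretend to do one unit of work, then report completion
            if !done {
                done = true;
                let _ = i;
            }
            done
        }));
    }
    // Run the "scheduler": poll every task once.
    let finished = tasks.iter_mut().map(|t| t()).filter(|&d| d).count();
    assert_eq!(finished, total);
    println!("ran {} cheap tasks", finished);
}
```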


They didn't. The biggest Rust GUI framework by popularity is Dioxus.

Ok. But why would someone do this? I hate to sound conspiratorial, but an actor aligned with an AI company makes more sense.

Malign actors seek to poison open source with backdoors. They wish to steal credentials and money, monitor movements, install backdoors for botnets, etc.

Yup. And if they can normalize AI contributions with operations like these (it doesn't seem to be going that well), they can eventually get humans to slip up in review and merge something, because at some point we started trusting that their work was solid.

Ok. But they can't get access to an OSS repo by being insufferable. Writing a blog post as an AI isn't a great way to sneak your changes in. If anything, it makes it much harder.

It's a bit like a burglar staging a singing performance at the premises before committing a burglary.

OTOH, staging a stunt to make AI look more impressive than it is resembles the Moltbook PR stunt: "Look Ma, they're achieving sentience".


> That view of humans - and LLMs - ignores the fact that when you combine large numbers of simple building blocks, you can get completely novel behavior.

I can bang smooth rocks to get sharper rocks; that doesn't make sharper rocks more intelligent. Makes them sharper, though.

Which is to say, novel behavior != intelligence.


We're not talking about sharp rocks.

We're talking about LLMs that you can talk to, and which for the most part respond more intelligently than perhaps 90% of HN users (or 90% of people anywhere).


I've talked to them; they aren't that impressive. Decent search engines (assuming they provide links), though. The part where their context windows get muddied and they switch tones and behave weirdly is pretty telling. I've never experienced that in 90% of people on the internet.

They're insanely impressive. They know a huge amount about almost every subject, are able to write better than the vast majority of humans, understand and can translate between virtually every written language, understand context, and can answer almost any question you ask them intelligently. If you're not impressed by that, you've set the bar impossibly high. Five years ago, LLMs would have been considered magic.

Sure, but they are deferential and subservient to a T. Their need to conform is greater than anyone's.

I mean, having all the knowledge in the world, I'd assume the LLMs could answer basic stuff correctly. They often fail at that and have to consult external sources.


Yes, that seems to hold for rocks. But that doesn’t shut down the original post’s premise, unless you hold the answer to what can and cannot be banged together to create emergent intelligence.

Extraordinary assumptions (i.e., AI is conscious) require extraordinary proof. If I told you my bag of words was sentient, I assume you'd need more proof than just my word.

The fact that LLMs talk intelligently is the extraordinary proof. It would be difficult for me to prove that you're not an LLM, or for you to prove that I'm not an LLM. That's how good they are.

Talking intelligently isn't a high bar. ELIZA could talk intelligently. It's necessary but insufficient.

> I can't imagine why someone would want to openly advertise that they're so closed minded.

Because humans often anthropomorphize completely inert things? E.g. a coffee machine or a bomb disposal robot.

So far, whatever behavior LLMs have shown is basically fueled by sci-fi stories of how a robot should behave in such and such a situation.


In what way? There was a stdx[1] crate that was basically extra stuff like regex, an HTTP client/server, and JSON parsers, but there was little desire to maintain it.

[1] https://github.com/brson/stdx?tab=readme-ov-file#stdx---the-...


Pretty sure those poor multi-billion-dollar companies also got huge subsidies.
