zapperdulchen's comments

That is your POV. I fear that democracy erodes when there are insults, belittling, and the like instead of an exchange of arguments and a contest of ideas, because at some point insults turn into ugly actions. Whether it's Charlie Kirk or Melissa Hortman.

There is a reason the founding fathers made freedom of speech the First Amendment.

Insults should absolutely be protected speech.

In countries that make insulting politicians illegal, all a politician has to do to become a dictator is declare that speech criticizing them or their behavior is insulting and therefore illegal.

Would you like it if Trump arrested anybody who insulted him?


If you speak French to Mistral, it gets it right every time: "Je veux laver ma voiture. La station de lavage est à 50 mètres. J'y vais à pied ou en voiture ?" ("I want to wash my car. The car wash is 50 meters away. Should I walk or drive?")

I've been gone from France too long. I've never heard "station de lavage" before.

Very awkward and formal. Anyone would call it "lavage auto", "lave-auto", or simply "lavage" if the context is clear.

Maybe I'm too old or my family was weird. We called it "le carwash" with a beautifully French "carouache" pronunciation. But yeah, "lave-auto" sounds more familiar.

Honestly, if anyone asked me "T'as fait quoi ?" ("What did you do?"), I'd blurt out "J'ai amené ma voiture chez le lavage" ("I took my car to the wash"). Background: I stopped speaking French when I was ten and my family isn't native, but it feels more conversational than "station de lavage".

Great observation. Seems like we're back to prompt abracadabra.

My little experiment gave me:

No added hint: 0/3

Hint added at the end: 1.5/3

Hint added at the beginning: 3/3

The .5 is because it stated "Walk" and then convinced itself that "Drive" is the better answer.
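The tally above can be sketched as a tiny scoring harness. The trial transcripts here are hypothetical placeholders standing in for the actual model answers; the scoring rule just encodes the convention from the experiment (half credit when the model starts with "Walk" and then reverses itself):

```python
# Tally the three-trials-per-condition experiment described above.
# Each trial is recorded as (first answer, final answer); a trial scores
# 1.0 if the final answer is correct and 0.5 if the model answered
# correctly at first but then talked itself out of it.
def score(trials, correct="Walk"):
    total = 0.0
    for first, final in trials:
        if final == correct:
            total += 1.0
        elif first == correct:
            total += 0.5  # started right, then reversed itself
    return total

# Hypothetical transcripts matching the reported scores:
no_hint = [("Drive", "Drive")] * 3
hint_at_end = [("Walk", "Walk"), ("Walk", "Drive"), ("Drive", "Drive")]
hint_at_start = [("Walk", "Walk")] * 3

print(score(no_hint), score(hint_at_end), score(hint_at_start))  # 0.0 1.5 3.0
```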


If you change the order of the sentences, Sonnet gets it right 3/3: The car wash is 50 meters away. I want to wash my car. Should I walk or drive?

That trick didn't help Mistral Le Chat.


I don't think the trick generalizes, though. If the prompter has to realize that the LLM will get confused and reorder the prompt so Sonnet can figure it out, they're solving a harder problem than answering the original question.

Sure, manually written API docs are a thing of the past, but that was true even before the era of LLMs. I'm not so sure the argument holds for all kinds of software, though. Depending on the abstraction gap between your source code and the things your users want to achieve, the expert view of a technical communicator might be necessary to come up with instructions (how-tos) that meet the needs of the person seeking help, instead of just summarizing the software code in natural language.
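A toy illustration of that gap, using a hypothetical `transfer()` function: a reference can be generated mechanically from the source (which is the part tools and LLMs handle well), but the task-oriented how-to depends on knowing the user's goal, which isn't in the code at all:

```python
import inspect

# Hypothetical application function standing in for real source code.
def transfer(amount, source, target):
    """Move *amount* from the *source* account to the *target* account."""

# An auto-generated reference just restates what the source already says:
reference = f"transfer{inspect.signature(transfer)} -- {inspect.getdoc(transfer)}"
print(reference)

# A task-oriented how-to answers the user's actual question instead,
# and cannot be derived from the code alone:
how_to = ("To pay your rent, call transfer() with your rent amount, "
          "your checking account, and your landlord's account.")
```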


History offers a similar experiment on a much larger scale. More than 35 years after reunification, sociologists still find mentality differences between former East and West Germans.


> Also, one might argue that universe/laws of physics are computational.

Maybe we need to define "computational" before moving on. To me this echoes the clockwork universe of the Enlightenment. Insights of quantum physics have shattered this idea.


> Insights of quantum physics have shattered this idea.

Not at all. Quantum mechanics is fully deterministic, if you stay away from bonkers interpretations like Copenhagen.

And, of course, you can simulate random processes just fine even on a deterministic system by using a pseudo-random number generator, or you can connect a physical hardware random number generator to your otherwise deterministic machine. Compared to all the hardware used in our LLMs so far, random number cards are cheap kit.

Though I doubt a hardware random number generator will make the difference between dumb and intelligent systems: pseudo-random number generators are just too good and, generalising a bit, you'd need P=NP to be true for your system to behave differently with a good PRNG versus real random numbers.
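Both halves of that point fit in a few lines of Python: a seeded PRNG is fully deterministic and replayable, while the OS entropy pool (hardware-backed on many systems, though not guaranteed everywhere) plays the role of the cheap "random number card":

```python
import os
import random

# A seeded PRNG makes an apparently random process fully reproducible:
a = random.Random(42)
b = random.Random(42)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
assert seq_a == seq_b  # same seed -> identical "randomness", every run

# Drawing from the OS entropy source is not reproducible; two draws
# will almost certainly differ:
print(os.urandom(8).hex())
```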


You can simulate a nondeterministic process. There's just no way to consistently get a matching outcome. It's no different from running the process itself multiple times and getting different outputs for the same inputs.


That reaction is very different from the Marathon crater one, though it uses the same pattern. I don't think OP's reasoning that there is a naive commitment bias holds. But seeing almost all LLMs fall into the ambiguity trap is important for any real-world use.


I am also trying to understand the fuzzy limits of LLMs, but your example doesn't produce incorrect answers in ChatGPT 4o, Sonnet 3.5, or DeepSeek V2.


To me this is a simplification. The CEO applauded a hiring decision. His social media team amplified it.

Here’s the tweet: https://x.com/andyyen/status/1864436449942110660

Some have interpreted this as a political signal. The Intercept article provides additional context and takes a more critical view: https://theintercept.com/2025/01/28/proton-mail-andy-yen-tru...


> Diverse democracies never succeed

Maybe I'm getting this wrong, but that sounds like a bold claim. Switzerland, Canada, and Belgium, but also India, are multilingual or even multiethnic democracies. They know tensions, but they have not failed.

