
does there have to be intent for libel?

I doubt it



If the subject of the libel is a public figure, then you must show that the defendant acted with actual malice: that is, you must show that the defendant knew the information was false, or acted with reckless disregard for whether the information was false despite knowing it would cause harm.

If the subject is not a public figure then it isn't necessary to demonstrate intent.


> acted with reckless disregard for whether the information was false despite knowing it would cause harm.

That does seem like something that can be proved. If you release a model that 1) is prone to hallucination, 2) won't reject a priori discussing topics that are prone to producing libelous text, but may reject other sensitive topics, and 3) produces text that sounds convincing even when hallucinating, could that be considered reckless disregard for the possibility of creating/sharing false information?
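
To make point 2 concrete, here's a toy sketch of the kind of pre-answer guardrail a provider could run. It is entirely hypothetical: the function name, patterns, and refusal logic are made up for illustration, not anything any vendor actually ships.

    import re

    # Toy guardrail (hypothetical): refuse prompts that pair a personal name
    # with an allegation of serious wrongdoing, the category most likely to
    # yield libelous hallucinations. Patterns here are illustrative only.
    ALLEGATION = re.compile(
        r"\b(assault|fraud|embezzle\w*|harass\w*|bribe\w*)\b", re.IGNORECASE)
    # Naive proper-name heuristic: two consecutive capitalized words.
    PERSON = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

    def should_refuse(prompt: str) -> bool:
        """True if the prompt asks about a named person and wrongdoing."""
        return bool(PERSON.search(prompt) and ALLEGATION.search(prompt))

    print(should_refuse("Did John Smith commit fraud?"))   # True  -> refuse
    print(should_refuse("Explain how wire fraud works."))  # False -> answer

A real system would obviously need far more than regexes, but failing to attempt even this much for libel-prone topics, while filtering other sensitive ones, is what point 2 is getting at.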

See also https://reason.com/volokh/2023/03/24/large-libel-models-an-a...


No. If it insults everyone equally, the only imaginable motivation of its creator would be satire, which ought to be protected as free speech.

If it's trained to insult a particular group of people, then the input must have been curated, and the warranty waiver ("erroneous") would be a straight-up lie, unless it is just bad training data that, for example, doesn't recognize dark skin as human, or does recognize abortion as good practice, in which case it is exceedingly unlikely that it could be directed at public figures. It's not too difficult to imagine that it would eventually start using a euphemism for the n-word, but then, eh, how did your name end up in the training data if it isn't public?


Accusing someone of sexual assault is not "insulting", and it is definitely not satire, even if everyone is equally likely to be accused.

Having a public and unique name is something you can't really avoid if you have any sort of public presence; not having it associated with serious felonies is something you can control. It is not something that a desirable AI should hallucinate about, nor something for which a disclaimer is enough.

In other words, ChatGPT had better learn when it should keep its mouth shut.


It is infuriating how you substitute "ChatGPT" for the entirely hypothetical slander machine of your own devising.

It would absolutely have humorous value, appeal to emotion notwithstanding.


I love how people are blindly defending OpenAI without wondering how it would feel if they were the ones being accused of sexual assault.


Honestly, if it hallucinated a story every time someone asked about <some shady activity> and hallucinated that you personally were involved in or responsible for such activity, you'd want it fixed too.


Strong disagreement.

OpenAI is well aware of the potential for hallucinations. They have made a good faith attempt to minimize them. They let users know that it is a real problem. Despite being unable to solve this hard problem, they did not show reckless disregard for the fact that it exists.

A user who doesn't take those warnings seriously may well have shown reckless disregard. But OpenAI should be fine.


Simply knowing that your models constantly mix hallucinations with fact could be trivially construed as reckless disregard.



