Hacker News

As a non-lawyer: the creators of ChatGPT know that it'll say false things frequently.


Given how many critics say "GPT is just a better autocomplete", would autocomplete for "The most incompetent programmer in the world is…" result in legal action if any of the options was a name?


Defamation in the US, unlike in countries with much weaker free speech protections, has a very high bar to clear. To be liable for defaming a public figure from within the US, you must publish[1] material falsehoods[2] that you either knew were false or were reckless about verifying[3], leading to provable damages[4] that stem from people believing the falsehood[5].

Ignoring the GPT part of the problem altogether, claiming someone is the most incompetent programmer in the world would probably fail [2] for being understood as a statement of opinion rather than fact, possibly [3] if there was any basis for the claim whatsoever, likely [4] because the named individual would have to prove they were damaged somehow (maybe not if it led to them being denied employment or something), and [5] because the average reader would almost certainly understand it as hyperbole.

Reintroducing the GPT part, and assuming the defendant is OpenAI being sued over GPT's output, I would also argue a failure on point [1]: OpenAI doesn't publish GPT's output to the general public.


Google is pretty notorious for putting its finger on the scale of its search autocomplete output.


If the output of ChatGPT is not copyrightable because it is not created by a human, then it should follow that a human cannot be held accountable for what it generates.


Someone will have to be found accountable. What about when we start hooking these systems up to machinery? If AirlinerGPT kills 200 people and an NFL team, people will absolutely not accept that there's no one responsible. There will be congressional subpoenas, sweeping new laws will be passed. It's best we start thinking ahead of time what we want those laws to be.


The people responsible will be the ones who hooked a plane up to ChatGPT knowing that ChatGPT can't be intrinsically trusted, not the makers of ChatGPT itself.

If a pilot handed the airliner's controls to a dog, we wouldn't blame the dog or its parents for crashing the plane.


Neither copyright nor copyrightability has anything whatsoever to do with any element of any cause of action; it is wholly orthogonal to whether anyone has been wronged in some unrelated fashion.

Judges are liable to be old, not very hip to technology, and incredibly suspicious of any argument that would let someone opt out of traditional culpability by appending "with an AI" to a description of traditional wrongdoing.


Yeah, but no: the implication still allows culpability without copyrightability (edit: I had that mixed up before). The assumption seems to be that the lack of intent in creation, where the output is unpredictable, would likewise imply a lack of intent to harm. But that doesn't say much.


> If the output of ChatGPT is not copyrightable because it is not created by a human,

Isn't compiler output copyrightable?


> Isn't compiler output copyrightable?

Not by the compiler vendor.


Sure, but humans should absolutely be liable for what they do with the output they compel it to generate.



