Isn't presenting answers to questions the same as publishing when it comes to ChatGPT? How many people must ChatGPT provide defamatory answers to before it becomes defamation?
Wait wait wait you’re saying the operator is accountable for their actions?
Just like ChatGPT was programmed to drill into a user any time it picks up on being misused? Reminding the user that they are ultimately responsible and liable for their actions, including how they use the output?
From how some make it sound, you would think ChatGPT was giving press conferences.
> oh no but what if it did?
Did it set itself up to do so? No? You found the liable humans.
ChatGPT is a text generator whose output is published on a publicly accessible website. That's a bit different than MS Word autocompleting something in a private document.
If someone created a website that randomly strung English words together, paired with a grammar checker so it always at least produces actual sentences, would its operator be liable for publishing incorrect facts?
If defamatory statements were made then yes. From [1]:
> There are four criteria used today in the United States:
> The statement was false, but was claimed as true.
> The statement must have been made to a third, previously uninvolved party.
> The statement must have been made by the accused party.
> The statement caused harm.
> Those who are not classified as public figures are considered private figures. To support a claim for defamation, in most states a private figure need only show negligence by the publisher, a much lower standard than "actual malice."
Laws differ by jurisdiction, both within the US and outside it. As you can see here[2], the UK has similar rules to the US (which you might expect, though I would say not to presume so easily) in that intent or malice, i.e. mens rea, is rarely part of the equation.
So yes, a machine spitting out statements that would be defamatory from anyone else's mouth is still committing defamation, and it would land the publisher in trouble if harm could be ascertained. I'm willing to bet that most people can see a difference between the kind of thing Google Docs produces when used as a word processor and the kind of thing ChatGPT produces.
> The statement was false, but was claimed as true
Who's claiming the statement is true? If my website of random sentences happens to write something about someone, that's merely coincidental. With ChatGPT it's less random but no less coincidental. It's clearly and obviously fallible and will give you whatever reasonable-sounding answer it can.
Infinite monkeys on infinite typewriters will eventually defame everyone.
You would be free to make that argument in court but courts tend to be a bit more practical in how they approach a problem, probably because:
a) as others have pointed out, ChatGPT is making what a reasonable person would consider a truth claim
b) it’s not entirely random
c) even if it were random, as in your example, you’d have a hard time explaining why the defamatory statements popped out after only a few months rather than the near-infinite wait the average person would expect. On the balance of probabilities, you failed to implement true randomness or anything like it, and you’d be liable.
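A back-of-envelope sketch makes point (c) concrete. Assuming (hypothetically) a 100,000-word vocabulary and a specific 10-word defamatory sentence, a genuinely uniform word-stringer would almost never produce it in any human timescale:

```python
# Rough estimate: expected wait for a *truly* random word-stringer to emit
# one specific 10-word sentence. Vocabulary size and sentence length are
# illustrative assumptions, not measurements.
vocab_size = 100_000
sentence_len = 10

# Probability of hitting the exact sentence in a single attempt
p = vocab_size ** -sentence_len  # 10^-50

# Expected number of attempts before the sentence appears once
expected_tries = 1 / p

# Even generating a billion sentences per second, the expected wait in years
tries_per_second = 1e9
seconds_per_year = 60 * 60 * 24 * 365
expected_years = expected_tries / (tries_per_second * seconds_per_year)

print(f"probability per attempt: {p:.0e}")
print(f"expected wait: {expected_years:.0e} years")
```

So if the "random" generator defames someone within months, the numbers alone suggest it isn't random at all, which is the court's practical inference.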
ChatGPT output is only published in the sense it’s available over the internet to a specific user, just like a Google Doc or Word in Office 365. Any publication to an actual audience is the responsibility of the human directing that.
Google Docs only publishes one thing to one user, unlike ChatGPT, which may well publish the same thing to several users, and whose output is of an entirely different nature from a word processor's. Spelling corrections are only answers when used in a spelling test.
Even if a text completion engine like GPT had any responsibility for truthfulness, which it doesn't, there's a disclaimer right there on the page you have to agree to in order to use it. Trying to pin blame on ChatGPT for defamation is like trying to sue Snapchat because its filter put cat ears on you when you in fact do not wear cat ears.
So you are okay with my new website that randomly makes false claims about you, as long as I have a disclaimer and don't actually understand how my software works?
Sure thing, go crazy. Nor do I care if you cast me as a villain in your D&D campaign or a racist pedophile in your novel. I don't care about made-up nonsense when there's a sign there that says it's made-up nonsense. All responsibility and liability is with the human being who repeats the nonsense as fact.