
Not sure about this. Generally, a defamatory statement must have been made with knowledge that it was untrue or with reckless disregard for the truth. It's going to be hard to argue that is the case here. Is Google also on the hook for defamatory statements that can potentially show up in search result snippets?


> Is Google responsible for defamatory statements that can potentially show up in search result snippets?

Why do people like you do this?

Regulations can make Google responsible for plenty of things that show up in their results. Search results are regularly removed because of DMCA claims (or because of German laws), which is explicitly noted at the bottom of the results page. Google is a corporation that is subject to laws like any other. They're not special. If a government decides that they need to deal with a certain kind of content, then they will. This doesn't necessarily mean punishing them the moment something "illegal" shows up, but it does mean that when something is flagged, they have a responsibility to handle it according to the law.


> Why do people like you do this?

People like myself pose questions when we are unsure about a topic, hoping that someone with more expertise can provide a well-informed answer.

The DMCA was specifically implemented to address situations where it would be unreasonable to hold entities like Google responsible for copyright infringements when copyrighted materials appear in search results. My inquiry was aimed at determining if a similar framework exists to protect Google (and potentially ChatGPT) from liability in cases involving defamatory statements.

While my personal inclination is that it would be unreasonable to hold Google liable for defamatory statements appearing in search results, I genuinely don't know what the law actually has to say about it.


I think it's a pretty different case from Google results. Google has been sued over its search results many times, but generally has not been found responsible for indexing them, because it is not acting as the "publisher or speaker" behind that content. Google can be held responsible for damages only if it is the original creator of the damaging content rather than a third-party source.

GPT, on the other hand, may be acting more directly as the "publisher or speaker" when writing or responding to chats. It isn't able to provide a link to an external content provider used in its response (or it provides a completely fictional source), and it sometimes may be synthesizing or hallucinating entirely new information that doesn't exist anywhere else on the web.

OpenAI has some disclaimer text hoping to avoid being held responsible for this type of issue, such as this small print at the bottom of all ChatGPT conversations: "ChatGPT may produce inaccurate information about people, places, or facts" (and likely further language in their TOS etc.). But it's a bit of a sticky issue. If many people are found to be using ChatGPT and trusting its results as accurate, it's plausible OpenAI might be found to have caused some sort of measurable harm and need to take further measures to prevent people from misunderstanding the accuracy of its tools, correct the response, or otherwise remedy the situation.

There's also some stickiness around who "owns" or is responsible for the GPT output content. In the general OpenAI terms of service, they say "OpenAI hereby assigns to you all its right, title and interest in and to Output. This means you can use Content for any purpose, including commercial purposes such as sale or publication, if you comply with these Terms... You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms." So they are giving the user the ownership of and responsibility for the output content.

However, in the "similarity of content" section, they say that they might give similar responses to other users and that those responses are not your content: "For example, you may provide input to a model such as “What color is the sky?” and receive output such as 'The sky is blue.' Other users may also ask similar questions and receive the same response. Responses that are requested by and generated for other users are not considered your Content." If GPT is giving similar disparaging or damaging responses about you to many different users, a court could potentially find that OpenAI is responsible for generating that repeated content, rather than each of the individual users.

Obviously it's largely a novel legal issue without perfect precedent, and legal issues can always be ruled in many different ways depending on the arguments presented, the judge or jury presiding over the issue, etc. I think there will be many legal challenges related to AI, copyright, training data, misinformation, and more. Some may ultimately make their way up to higher courts for decision, or prompt new regulations passed by Congress (in America at least).


You raise compelling points, and it's true that AI models like ChatGPT sometimes "hallucinate," but not always. The line becomes blurrier when the training data contains defamatory statements and the model simply repeats or paraphrases what it has learned. In such cases, determining liability could become more complex, as it would involve assessing the extent to which the AI model is fabricating information versus merely reflecting information present in its training data.

To expand on the analogy, should an individual be held responsible for defamation when they are merely echoing information they've learned from a source and genuinely believe to be accurate? I don't think that should be the case, as their state of mind should play a role.

This issue is undoubtedly complex and will likely be clarified over time. In my opinion, the law should, at a minimum, differentiate between "unintentional" defamation and "intentional" defamation.


When you open your mouth and speak about someone, there is intent.

Whether you believe the words you speak or not is irrelevant. If you don't want to be accused of defamation, don't open your mouth and speak publicly about other people if you don't have confirmation of the truth of your words.


Is that how it works? So it's not safe to repeat anything you heard on the news? Or read online? So you basically can't repeat anything you didn't personally witness? Then "the statement must have been made with knowledge that it was untrue or with reckless disregard for the truth" is not right? (Or do you have no idea what you're talking about?)


> repeat anything

Isn't it obvious? Which of the following statements do you think John Doe might take action against you for?

  "John Doe is a great guy". 
  "John Doe isn't a great guy". 
  "John Doe defrauded the bank and stole money from business partners".
If the last one is false but you keep repeating it on your blog out of ignorance, have a guess what will happen?
