
It does not pass the "friend test": if a human friend had made such comments instead of ChatGPT, the friend would have been within their free speech rights to make them. As such, I don't see any valid legal issue here involving ChatGPT that should stand up in court. I see possible ethical and objectivity issues, but not a valid legal one.

> It does not pass the "friend test" [...] not a valid legal issue.

What legal doctrine is that, and can you point towards precedent? Or is it one of those "I feel like the law should" situations?


Yes, it is called free speech, as already noted in my parent comment, which you may read again. In fact, the burden of citing a legal doctrine of wrongdoing is entirely yours.

That’s not what free speech means.

Free speech absolutely does allow assigning blame, whether correctly or incorrectly. It also allows advocating criminal action at some point in the future; only incitement to imminent lawless action loses protection.

It depends on the specifics of what was said. As the complaint states, OpenAI has yet to release the full transcripts.
