the big difference is whether people take the experience as fact or fiction. we all know that DnD is fiction, and that we play a character in it. if LLMs were treated the same way, they would probably be just as harmless.
but are users treating LLMs as interactive fiction devices or not? as it looks now, they are not.
An LLM chat assistant is playing a role no matter what, unless you think there is a real human behind it. They are role playing all the way down (and you can set up sillytavern characters if you want to customize their role).
The same questions about reality versus unreality come up here. Did the person think they were talking to a real person? Did they understand the difference between fantasy and reality? The worries people had about DnD in 1980 aren’t very different at all from the worries about AI in 2025. There have also been lots of other things blamed for teenage suicide between then and today, like violent video games and social media.
ChatGPT is marketed as a tool to assist with real-world tasks like looking up information, planning vacations, and other non-fiction scenarios.
Why do you find it surprising that someone may expect to use the tool in a non-fictional way, or that someone could interpret its output as non-fiction?
It’s unreasonable to apply this bizarre standard of “it should be treated as fiction only when I want it to be”
but with DnD the worries came only from people who were not familiar with the game. the players all knew (and know) that it is all fictional. the worries around DnD were easy to dispel just by familiarizing yourself with the game and the players. the evidence against LLMs is looking much more serious.