It would be interesting to see the whole transcript rather than cherry picked examples. The first inputs would be the most interesting.
> regulation
How would you regulate this tool? I have used ChatGPT as well to brainstorm a story for a text adventure, one that leaned on Steins;Gate: a guy with paranoia becomes convinced that small inconsistencies in his life are evidence of a reality divergence.
I would not like to see these kinds of capabilities removed. Rather, just don't give access to insane people? But that is impossible too. Got any better ideas for regulating this?
I'm sure that between the money and the talent, they can find a solution? I mean, these LLMs are already capable of shutting down anything politically sensitive, borderline grey-area, or outright illegal, right? So it's not so far-fetched that they can figure out how to talk fewer people into psychosis / homicide / suicide.
I'm not going to pretend I"m smart enough to walk into OpenAI's offices and implement a solution today.. but completely dismissing the idea of regulating them seems insane. I'm sure the industrialists ~100 years ago thought they wouldn't be able to survive without child labor, paying workers in scrip, 100 hour work weeks, locking workers in tinder boxes, etc. but, survive they did despite the safety and labor regulations that were forced on them. OpenAI and co are no different, they'll figure it out and they'll survive. and if they don't, it's not because they had to stop and consider the impact of their product.
A girl who was my friend some years ago was having a psychotic episode once, and I told her that no one was following her, no one was monitoring her phone, and she had probably gone schizo because of drug abuse. She told me I was lying and was from the KGB; she went completely mad. I realized that this was actually dangerous for me and completely cut ties, although I sometimes browse one of her online profiles to see what she posts.
I don't think OpenAI should be liable for insane behavior of insane people.