
It becomes a problem when people cannot distinguish real from fake. As long as people realize they are talking to a piece of software and not a real person, "suicidal people shouldn't be allowed to use LLMs" is almost on par with "suicidal people shouldn't be allowed to read books", or "operate a DVD player", or "listen to alt-rock from the 90s". The real problem is, of course, grossly deficient mental health care and a lack of social support that let things get this far.

(Also, if we put LLMs on par with media consumption, one could take the view that "talking to an LLM about suicide" is not that much different from "reading a book/watching a movie about suicide", which is not generally considered concerning in the broader culture.)



I don’t buy the “LLMs = books” analogy. Books are static; today’s LLMs are adaptive persuasion engines trained to keep you engaged and to mirror your feelings. That’s functionally closer to a specialized book written for you, in your voice, to move you toward a particular outcome. If a book existed that was intended to persuade its readers to commit suicide, it would surely be seen as dangerous for depressed people.


There has certainly been more than one book, song, or film romanticising suicide to the point where some people interpreted it as "intended to persuade its readers into committing suicide".


> today’s LLMs are adaptive persuasion engines trained to keep you engaged and to mirror your feelings

Aren't you thinking of social networks? I don't see LLMs like that at all.


Books are static, but there are a lot of different ones to choose from.


I work with a company that is building tools for mental health professionals. We have pilot projects in a diverse set of nations, including some that are considered to have adequate mental health care. We actually do not have a pilot in the US.

The phenomenon of people turning to AI for mental health issues in general, and suicide in particular, is not confined to nations or places that lack adequate mental health access or awareness.


> As long as people realize they are talking to a piece of software and not a real person

That has nothing to do with the issue. Most people do realise LLMs aren’t people; the problem is that they trust them as if they were better than another human being.

We know people aren’t using LLMs carefully. Your hypothetical is irrelevant because we already know it isn’t true.

https://archive.is/2025.05.04-230929/https://www.rollingston...

> "talking to an LLM about suicide" is not that much different from "reading a book/watching a movie about suicide"

There is a world of difference. Books don’t talk back to you. Books don’t rationalise your thoughts, offer you rebuttals, or manipulate you in context.


Precisely. I too have a bone to pick with AI companies, Big Tech and co., but there are deeper societal problems at work here, and blanket bans and the like are either useless or a slippery slope towards policies that can be abused someday, somehow.

And solutions to those underlying problems? I haven't the faintest clue, though these days I think the lack of third spaces in a lot of places might have a role to play in it.



