
Rereading the thread and trying to generalise: LLMs are good at noisily suggesting solutions. That is, if you ask an LLM for solutions to a problem, there's a high probability that one of them will be good.

But individual options may be bad (maybe even catastrophic - glue on pizza, anyone?), and the right option may not be in the list at all. The user has to be able to make these calls.

It is like this with software - we have probably all been there. It can be like that with legal advice. And I guess it is like that with (mental) health.

What these cases have in common is that if you cannot judge whether the suggestions are good, you shouldn't follow them. As it stands, a software engineer can ask an LLM for code, look at it, and 80+% of the time it is good and you've saved yourself some time; otherwise you reconsider, reprompt, or write it yourself (roughly the review loop sketched below). If you cannot make that judgment yourself, don't use it.
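
To make that loop concrete, here is a minimal sketch. ask_llm is a hypothetical stand-in for whatever LLM client you use (not a real API), and the judgment step is modelled as a simple yes/no prompt to the human reviewer:

    # Minimal sketch of a human-in-the-loop review cycle.
    # ask_llm is a placeholder, not a real library call.

    def ask_llm(prompt: str) -> str:
        """Hypothetical LLM call (noisy: often good, sometimes not)."""
        raise NotImplementedError("plug in your LLM client here")

    def human_accepts(suggestion: str) -> bool:
        """The crucial step: a human who can judge the code reviews it."""
        print(suggestion)
        return input("Does this look correct? [y/N] ").strip().lower() == "y"

    def get_code(prompt: str, max_attempts: int = 3) -> str | None:
        """Ask the LLM, keep only suggestions that pass human review."""
        for _ in range(max_attempts):
            suggestion = ask_llm(prompt)
            if human_accepts(suggestion):
                return suggestion  # accepted: you saved yourself some time
            prompt += "\nThat was not right; please reconsider."  # reprompt
        return None  # give up and write it yourself

The whole thing only works because the reviewer can tell good output from bad; without that, the loop just launders bad suggestions.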

I suppose health is another such example. Maybe the LLM suggests some ideas about what your symptoms could mean, you Google them, and you find an authoritative source that confirms the guess (and probably tells you to go see a doctor anyway). But the advice may well be wrong, and if you cannot tell, don't rely on it.

Mental health is even worse, because if you need advice in this area, your cognitive ability is probably impaired as well, and you are even less able to make these judgments.


