Hacker News

It is awful, but at the same time this isn't new. People have long used Google searches to self-diagnose their issues. ChatGPT just makes that process even easier now.

From my viewpoint it speaks more to a problem with the healthcare system.



I agree with everything you said, but ChatGPT does have an insidious way of confirming your biases. It kind of senses what you want and runs with it. For a truly unbiased response you literally have to hide your intention from ChatGPT. But even so, it can often still infer your intent and give you a biased answer.


Something changed recently (in the last few months?) that has made these models so sycophantic it actually drives me crazy.

"You are completely right..." followed by some emoji.

Shaking my head.

It would be an interesting experiment to see non-sycophantic models in use.

As such, what is the least sycophantic LLM, if I may ask?


Even before Google, people used books to self-diagnose their problems.



