
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.

Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.

If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the ChatGPT account now knows the son has a specific condition).

The memory features are awesome but also suck at the same time. I find myself getting stuck in a personalized bubble even more than with Google.



You can just use the wipe-memory feature, or if you don't trust that, start a new account (new login creds); if you don't trust that either, get a new device, cell provider/wifi, credit card, IP, login creds, etc.


Or start a “temporary” chat.
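
Another way to rule out account memory entirely is the API, since each request is stateless and the model sees only the messages you include. A minimal sketch (the model name is an assumption; substitute whatever current model you'd use, and the payload would be POSTed to the chat completions endpoint with your API key):

```python
import json

def build_request(symptoms: str) -> dict:
    # Each API request is self-contained: no account-level memory is
    # attached, so the model sees only what's in this messages list.
    return {
        "model": "gpt-4o",  # assumed model name
        "messages": [
            {
                "role": "user",
                "content": f"Given these symptoms, what conditions "
                           f"should be tested for? {symptoms}",
            },
        ],
    }

payload = build_request("fever, joint pain, rash")
print(json.dumps(payload, indent=2))
```

The network call is omitted to keep the sketch offline; the point is just that a fresh `messages` list per request means nothing from earlier chats can leak into the answer.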


> He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time

He literally wrote that. I asked how he knows it's the right direction.

It must be that the treatment worked; otherwise it is more or less just a hunch.

People go "oh yep, that's definitely it" too easily. That's the problem with self-diagnosing. And you didn't even notice it happening...

Without more info this is not evidence.


We had the diagnosis before I started with the LLM. The radiology showed the problem and the surgeries worked. Down to an ultrasound a year now!

We took him to our local ER, and they ran tests. I gave the LLM just the symptom list that I gave to the ER initially. It replied with all the things they tested for. Then I gave it the test results, and it suggested a short list of diagnoses that included his actual Dx and the correct way to test for it.


That's great that it worked so quickly (and that you could arrange surgery on such short notice). I'm sure more people with the same issue might benefit from more details?


As it's my son's health issues, not my own, I'm going to preserve his privacy until he can consent to sharing in...8 years.


Apparently the news is just that it's "not allowed" by the ToS, but ChatGPT will still give advice, so it doesn't really matter. I thought the new model had already denied you advice, because you said you used an older model, but I guess that was unrelated to this news.

By the way, unless you used an anonymous mode, I wonder how much the model knew from side channels that could have contributed to suggesting the correct diagnosis...


This was before I started using it as anything but a toy. I had definitely not spoken about medical issues; it was on a whim.



