
HIPAA anybody?

(1) they probably shouldn't even have that data

(2) they shouldn't have it lying around in a way that it can be attributed to particular individuals

(3) imagine that it leaks to the wrong party, it would make the hack of that Finnish institution look like child's play

(4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations

(5) I'm surprised it is that little; they claim such high numbers for their users that this seems low.

In the late '90s when ICQ was pretty big we experimented with a bot that you could connect to that was fed in the background by a human. It didn't take a day before someone started talking about suicide to it and we shut down the project realizing that we were in no way qualified to handle human interaction at that level. It definitely wasn't as slick or useful as ChatGPT but it did well enough and responded naturally (more naturally than ChatGPT) because there was a person behind it who could drive hundreds of parallel conversations.

If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.



HIPAA only applies to covered healthcare entities. If you walk into a McDonald's and talk about your suicidal ideation with the cashier, that's not HIPAA covered.

To become a covered entity, the business has to either work with a healthcare provider or health data transmitter, or do business as one.

Notably, even in the above case, HIPAA only applies to the healthcare part of the entity. So if McDonald's co-located pharmacies in their restaurants, HIPAA would only apply to the pharmacists, not the cashiers.

That's why, in convenience stores with pharmacies, you'll see the registers separated so healthcare data doesn't go to someone who isn't covered by HIPAA.

**

As for how ChatGPT gets these stats... when you talk about a sensitive or banned topic like suicide, their backend logs it.

Originally, they used that to cut off your access so you couldn't find a way to cause a PR incident.
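
For illustration only (I don't know how OpenAI handles this internally), a backend could flag such conversations with the public moderation endpoint, which exposes a "self-harm" category; names and logging here are made up:

  # Illustrative sketch; the real internal pipeline is not public.
  import logging
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def flag_if_sensitive(user_message: str) -> bool:
      result = client.moderations.create(input=user_message).results[0]
      if result.categories.self_harm:  # text discusses self-harm
          logging.warning("sensitive-topic conversation flagged")
          return True
      return False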


So many misconceptions about HIPAA would disappear if people just took the effort to unpack the acronym.


Health Insurance Portability and Accountability Act, for us non-Americans


Arguably, if you start giving answers to these kinds of questions, your chatbot just became a medical device.


Under the EU's Medical Device Regulation, the main purpose of the software needs to be medical for it to become a medical device. In ChatGPT's case, this is not the primary use case.

Same with fitness trackers. They aren't medical devices, because that's not their purpose, but some users might use them to track medical conditions.


There is nothing arguable about it. No, it did not.


Then the McDonald's cashier also becomes a medical practitioner the moment they tell you that killing yourself isn't the answer. And if I tell my friend via SMS that I am thinking about suicide, do both our phones now also become HIPAA-covered medical devices?


What about a medicine book? Is that also a medical device?


I don't know about HIPAA, but there is that little body of criminal legislation about the unauthorised practice of medicine?


Privacy is vital, but this isn't covered under HIPAA. As they are not a covered entity nor handling medical records, they're beholden to the same privacy laws as any other company.

HIPAA's scope is actually basically nonexistent once you get away from healthcare providers, insurance companies, and the people that handle their data/they do business with. Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.


> Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.

Not even when the entity behaves as though it is a mental health professional? At what point do you put the burden on the apparently mentally ill person to know better?


Google, OpenAI, and Anthropic don't advertise any of their services as medical, so why would it apply?

You Google your symptoms constantly. You read from WebMD or Wiki drug pages. None of these should be under HIPAA.


You're not putting the burden on them. They don't need to comply with HIPAA. But you can't just declare someone a healthcare provider when they aren't one and don't claim to be one.


That line of reasoning would just lead to every LLM message and every second comment on the internet starting with the sentence "this is not medical advice". It would do nothing but add another layer of noise to all communication.


> if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations

For a lot of people, especially in poorer regions, LLMs are a mental health lifeline. When someone is severely depressed they can lie in bed the whole day without doing anything. There is no impulse, as if you tried starting a car and nothing happened at all, so you can forget about taking it to the mechanic yourself in the first place. Even in developed countries you can wait months for a therapist appointment, and that assumes you navigated a dozen therapists who are often not organized in any centralized manner. You will get people killed like this, undoubtedly.

On the other hand, LLMs are far beyond the point of leading people into suicidal actions. At the very least they are useful for bridging the gap between suicidal thoughts appearing and actually getting to see a therapist.


  There is no impulse, as if you tried starting a car and nothing happened at all, so you can forget about taking it to the mechanic

That is a really great analogy.


Sure, but you could also apply this reasoning to a blank sheet of paper. But while it's absurd to hold the manufacturer of the paper accountable for what people write on it, it makes sense to hold OpenAI accountable for their chatbots encouraging suicide.


> HIPAA anybody?

Maybe. Going on a tangent: in theory GMail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?

> If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.

Cf. Eliza, or the Rogerian therapy it (crudely) mimics.


> Maybe. Going on a tangent: in theory GMail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?

That's a good question.

Intuitively: because it doesn't attempt to impersonate a medical professional, nor does it profess to interact with you on the subject matter at all. It's a communications medium, not an interactive service.


Doesn't GMail show ads that are context aware? (I'm not sure.)


I haven't seen an ad in 15 years or so, so I couldn't tell you.


> (4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations

There is nothing in the world that OpenAI is qualified to talk about, so we might as well just shut it down.


I'm in favor; any objections?


I object.


Tangent but now I’m curious about the bot, is there a write-up anywhere? How did it work? If someone says “hi”, what did the bot respond and what did the human do? I’m picturing ELIZA with templates with blanks a human could fill in with relevant details when necessary.


Basically Levenshtein on previous responses minus noise words. So if the response was 'close enough', the bot would use a previously given answer; if it was too distant, the human-in-the-loop would get pinged with the previous 5 interactions as context to provide a new answer.

Because the answers were structured as a tree, every ply would only go down in the tree, which elegantly avoided the bot getting 'stuck in a loop'.
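
From memory, the matching step amounted to something like this (a rough sketch in Python; the original wasn't Python, and the noise list, threshold, and names here are made up):

  NOISE = {"the", "a", "an", "is", "are", "please", "hey"}

  def normalize(text: str) -> str:
      # drop noise words so only the meaningful tokens are compared
      return " ".join(w for w in text.lower().split() if w not in NOISE)

  def levenshtein(a: str, b: str) -> int:
      # classic dynamic-programming edit distance
      prev = list(range(len(b) + 1))
      for i, ca in enumerate(a, 1):
          cur = [i]
          for j, cb in enumerate(b, 1):
              cur.append(min(prev[j] + 1,               # deletion
                             cur[j - 1] + 1,            # insertion
                             prev[j - 1] + (ca != cb))) # substitution
          prev = cur
      return prev[-1]

  def respond(message: str, known: dict, max_distance: int = 5):
      # reuse a previous answer if the message is 'close enough',
      # otherwise return None and ping the human-in-the-loop
      key = normalize(message)
      best = min(known, key=lambda k: levenshtein(key, k), default=None)
      if best is not None and levenshtein(key, best) <= max_distance:
          return known[best]
      return None  # the human's new answer then gets added to `known`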

The - for me at the time amazing, though linguists would have thought it trivial - insight was how incredibly repetitive human interaction is.


If there is somebody in the current year who still thinks they would not store, process, train on, and use or sell all that data, they probably need to see a doctor.


As others have stated, HIPAA applies to healthcare organizations.

Obligating everyone to keep voluntarily disclosed health statements confidential would be silly.

If I told you that I have a medical condition, right here on HN -- would it make sense to obligate you and everyone else here to keep it a secret?


No, obviously it would not. But if we pretended to be psychiatrists or therapists then we should be expected to behave as such with your data if given to us in confidence rather than in public.


> we shut down the project realizing that we were in no way qualified to handle human interaction at that level

Ah, when people had a spine and some sense of ethics, before everything dissolved into a late-stage-capitalism, everything-for-profit ethos. Even you yourself are a "brand" to be monetised; even your body is to be sold.

We deserve our upcoming demise.



