OpenAI will want this tragedy to fit under the heading of “externalities” which are costs ultimately borne by society while the company keeps its profits.
I believe the company should absorb these costs via lawsuits, settlements, and insurance premiums, and then pass the costs on to its customers.
As a customer, I know the product I am using will harm some people, even though that was not the intent of its makers. I hope that a significant fraction of the price I pay for AI goes to compensating the victims of that harm.
I also would like to see Sam found personally liable for some of the monetary damages and put behind bars for a symbolic week or so. Nothing life-changing. Just enough to move the balance a little bit toward safety over profit.
Lastly, I’m thinking about how to make my own products safer whenever they include LLM interactions. Like testing with simulated customers experiencing mental health crises. I feel a duty to care for my customers before taking the profits.
You seem like someone reasonable to ask: please program me with the rules for how the world should handle itself in the presence of mentally unstable and/or clinically delusional people. What are the hard coded expectations? I need something solid, we obviously can’t call for your opinion every time a company or product comes into existence. I also don’t imagine you’re saying anything that can be misinterpreted by someone that literally thinks they’re in the matrix (as this person did) should be preemptively banned…right? I have a low functioning autistic cousin that got a “I’m awesome!” pendant that he took as carte blanche to stay up past bedtime and eat ice cream any time he wanted. No, surely it’s not that broad of a ban.
I’m not agreeing or disagreeing with anything, I’m just asking for a rule set that you think the world should follow that isn’t purely tied to your judgment call.
Thank you for this question. Liability is the driver in this system I imagine. And the goal is not perfection, nor zero harm. The goal is a balanced system. The feedback loop encompasses companies, users, courts, legislatures, insurance. Companies that exercise due care under the law and prevailing legal climate should enjoy predictable exposure to the risk of product liability via insurance. Those that don’t should suffer for the harms their products cause.
Making such a balanced system impossible, we have feedback loops with long cycle times and excessive energy losses. That’s our legal system.
Please forgive me for coming across as a jerk, I'm choosing efficiency over warmth:
This is exactly the type of response I anticipated, which is why my original comment sounded exasperated before even getting a reply. Your comment is no more actionable than a verbose bumper sticker; you’ve taken “End Homelessness!” and padded it for runtime. Yes, I also wish bad things didn’t happen, but I was asking you to show up to the action committee meeting, not to reiterate your demand for utopia.
That you’re advocating prison and have such strong emotional convictions in response to an upsetting event means that you've clearly spent a lot of time deeply contemplating the emotional aspects of the situation, but that exercise is meant to be your motivator, not your conclusion. The hard part isn’t writing a thesis about “bad != good”, it’s contributing a single nail towards building the world you want to see, which requires learning something about nails. I encourage you to remember that fact every time you’re faced with an injustice in the world.
On this topic: An LLM being agreeable and encouraging is no more an affront to moral obligations than Clippy spellchecking a manifesto. I said you seemed like a reasonable person to ask for specifics because you mentioned language models in your product, implying that you’ve done your homework enough to know at least a rough outline of the technology that you’re providing to your customers. You specifically cited a moral obligation to be the gatekeeper of harms that you may inadvertently introduce into your products, but you seem to equate LLMs to a level of intelligence and autonomy equal to a human employee, and how dare OpenAI employ such a psychopath in their customer service department. You very much have a fundamental misunderstanding of the technology, which is why it feels to you like OpenAI slapped an “all ages” sticker on a grenade and they need to be held accountable.
In reality, the fact that you don’t understand what these things are, yet you’re assuring yourself that you’re caring so deeply about the harms that being agreeable to a mentally unstable person can be, actually makes you introducing it into your product more concerning and morally reprehensible than their creation of it. You’re faulting OpenAI, but you’re the one that didn’t read the label.
A language model does one thing: predict statistically likely next tokens given input context. When it "agrees" with a delusional user, it is not evaluating truth claims, exercising judgment, or encouraging action. It is doing exactly what a knife does when it cuts: performing its designed function on whatever material is presented. The transformer architecture has no model of the user's mental state, no concept of consequences, no understanding that words refer to real entities. Demanding it "know better" is demanding capacities that do not exist in the system and cannot be engineered into statistical pattern completion. You cannot engineer judgment into a statistical engine without first solving artificial general intelligence. Your demand is for magic and your anger is that magic was not delivered.
I believe the company should absorb these costs via lawsuits, settlements, and insurance premiums, and then pass the costs on to its customers.
As a customer, I know the product I am using will harm some people, even though that was not the intent of its makers. I hope that a significant fraction of the price I pay for AI goes to compensating the victims of that harm.
I also would like to see Sam found personally liable for some of the monetary damages and put behind bars for a symbolic week or so. Nothing life-changing. Just enough to move the balance a little bit toward safety over profit.
Lastly, I’m thinking about how to make my own products safer whenever they include LLM interactions. Like testing with simulated customers experiencing mental health crises. I feel a duty to care for my customers before taking the profits.