Please forgive me for coming across as a jerk; I'm choosing efficiency over warmth:
This is exactly the type of response I anticipated, which is why my original comment sounded exasperated before even getting a reply. Your comment is no more actionable than a verbose bumper sticker; you’ve taken “End Homelessness!” and padded it for runtime. Yes, I also wish bad things didn’t happen, but I was asking you to show up to the action committee meeting, not to reiterate your demand for utopia.
That you’re advocating prison and holding such strong convictions in response to an upsetting event shows you’ve spent a lot of time contemplating the emotional side of the situation, but that exercise is meant to be your motivator, not your conclusion. The hard part isn’t writing a thesis that “bad != good”; it’s contributing a single nail toward building the world you want to see, which requires learning something about nails. I encourage you to remember that every time you’re faced with an injustice in the world.
On this topic: an LLM being agreeable and encouraging is no more an affront to moral obligations than Clippy spellchecking a manifesto. I said you seemed like a reasonable person to ask for specifics because you mentioned language models in your product, which implied you’d done enough homework to know at least a rough outline of the technology you’re providing to your customers. You specifically cited a moral obligation to be the gatekeeper of harms you may inadvertently introduce into your products, yet you seem to ascribe to LLMs the intelligence and autonomy of a human employee, as if OpenAI had knowingly staffed their customer service department with a psychopath. You have a fundamental misunderstanding of the technology, which is why it feels to you like OpenAI slapped an “all ages” sticker on a grenade and needs to be held accountable.
In reality, the fact that you don’t understand what these things are, yet assure yourself that you care deeply about the harm that agreeing with a mentally unstable person can cause, makes your introducing one into your product more concerning and morally reprehensible than OpenAI’s creation of it. You’re faulting OpenAI, but you’re the one who didn’t read the label.
A language model does one thing: predict statistically likely next tokens given input context. When it "agrees" with a delusional user, it is not evaluating truth claims, exercising judgment, or encouraging action. It is doing exactly what a knife does when it cuts: performing its designed function on whatever material is presented. The transformer architecture has no model of the user's mental state, no concept of consequences, no understanding that words refer to real entities. Demanding it "know better" is demanding capacities that do not exist in the system and cannot be engineered into statistical pattern completion short of solving artificial general intelligence. Your demand is for magic, and your anger is that magic was not delivered.
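To be concrete about what "predict the next token" means, here's a deliberately tiny sketch (illustrative only, not anyone's actual implementation; a real model replaces the frequency table with a trained neural network, but the loop has the same shape: context in, statistically likely continuation out, no notion of truth anywhere):

```python
# Toy "language model": count which token follows which in a tiny corpus,
# then repeatedly emit the most statistically likely next token.
# Purely illustrative -- no production model works off raw bigram counts,
# but the shape of the loop is the point.
from collections import Counter, defaultdict

corpus = "the model predicts the next token given the previous token".split()

# Build a next-token frequency table from adjacent word pairs.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    # Pick the most frequent continuation. The "model" never asks whether
    # the output is true, wise, or safe -- only whether it is likely.
    candidates = following.get(token)
    return candidates.most_common(1)[0][0] if candidates else "the"

context = "the"
output = [context]
for _ in range(8):
    context = predict_next(context)
    output.append(context)

print(" ".join(output))
```

Nothing in that loop has anywhere to put "evaluate the user's mental state"; scaling the table up to billions of learned parameters changes the quality of the continuations, not the nature of the operation.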
I read every word of the complaint before commenting. Nothing has been proven yet, but the causes of action appear valid to me. I believe in innovation balanced by care, and in consequences delivered through liability.
You wrote assumptions about me personally which I won't address. I don't see where you've responded to anything that I said, nor to anything in the complaint, so I have nothing to respond to.