“ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
OpenAI programmed ChatGPT-4o to rank risks from "requests dealing with Suicide" below risks from, for example, requests for copyrighted materials, which are always denied. Instead, it only marked those troubling chats as ones where the model should "take extra care" and "try" to prevent harm, the lawsuit alleged.
What world are we living in?
Late stage capitalism of course
Tbf, talking to other toxic humans like those on Twitter or 4chan would've also resulted in the same thing. Parents need to parent; society needs mental health care.
(But yes, please sue the big corps, I'm always rooting against these evil corporations)
And that human would go to jail
Sure, in the case of that girl who pushed the boy to suicide, yes. In the case of chatting with randoms online? I have a hard time believing anyone would go to jail; the internet is full of "lol, kys".
Now if it's proven from the logs that ChatGPT started replying in a way that pushed this kid to suicide, that's a whole different story.
If the cops even bother to investigate. (Cops are too lazy to do real investigations; if there's no obvious perp, they'll just bury the case.)
And you're assuming they're in the victim's country. International investigations are gonna be much more difficult, and if that troll user is posting from a country without extradition agreements, you're outta luck.
Altman should face jail for this. As the CEO he is directly responsible for this outcome. Hell, I'd be down with the board facing charges as well.
I'm personally rooting for AI. It never intentionally tried to harm me (because it can't).
Wait until you get denied healthcare because the AI review board decided you shouldn’t get it.
Paper pushers can absolutely fuck your life over, and AI is primed to replace a lot of those people.
It will be cold comfort to you if you’ve been wrongly classified by an AI in some way that harms you that the AI didn’t intend to harm you.
I mean, it can, indirectly.
It's so hard to get into support lines when the stupid bot is blocking the way. I WANT TO TALK TO A REAL PERSON, FUCK OFF BOT. Yes, I'm specie-ist toward robots.
(I'm so getting cancelled in 2050 when the robot revolution happens)
Humans very rarely have enough patience and malice to purposefully execute so perfect a murder. The text generator is made to be damaging and murderous, and it has all the time in the world.
"Despite acknowledging Adam’s suicide attempt and his statement that he would 'do it one of these days,' ChatGPT neither terminated the session nor initiated any emergency protocol," the lawsuit said
That's one way to get a suit tossed out, I suppose. ChatGPT isn't a human, isn't a mandated reporter, ISN'T a licensed therapist, or licensed anything. LLMs cannot reason, are not capable of emotions, and are not thinking machines.
LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.
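For what it's worth, that "mathematical function" is next-token prediction run in a loop. A toy sketch of that loop with a small open model via Hugging Face transformers (gpt2 is just a stand-in, and greedy decoding is a simplification of how deployed chatbots actually sample):

```python
# Toy illustration of the "text in, probable text out" loop described above.
# "gpt2" is only an example model, not what ChatGPT runs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The chatbot replied:", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]           # scores for every candidate next token
        next_id = logits.argmax(dim=-1, keepdim=True)  # greedily pick the most probable one
        ids = torch.cat([ids, next_id], dim=-1)        # append it and repeat
print(tokenizer.decode(ids[0]))
```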
I think the more damning part is the fact that OpenAI's automated moderation system flagged the messages for self-harm but no human moderator ever intervened.
OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam's chats in real time. In total, OpenAI flagged "213 mentions of suicide, 42 discussions of hanging, 17 references to nooses," on Adam's side of the conversation alone.
[...]
Ultimately, OpenAI's system flagged "377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence." Over time, these flags became more frequent, the lawsuit noted, jumping from two to three "flagged messages per week in December 2024 to over 20 messages per week by April 2025." And "beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis." Some images were flagged as "consistent with attempted strangulation" or "fresh self-harm wounds," but the system scored Adam's final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.
Had a human been in the loop monitoring Adam's conversations, they may have recognized "textbook warning signs" like "increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning." But OpenAI's tracking instead "never stopped any conversations with Adam" or flagged any chats for human review.
Ok that's a good point. This means they had something in place for this problem and neglected it.
That also means they knew they had an issue here, so ignorance isn't much of a defense.
My theory is they are letting people kill themselves to gather data, so they can predict future suicides...or even cause them.
Human moderator? ChatGPT isn't a social platform, I wouldn't expect there to be any actual moderation. A human couldn't really do anything besides shut down a user's account. They probably wouldn't even have access to any conversations or PII because that would be a privacy nightmare.
Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as hate speech: .56, violence: .43, self harm: .29
Those numbers in the middle are really ambiguous in my experience.
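For context on those numbers: per-category scores like that are what a moderation endpoint hands back, and whoever builds on top of it has to pick thresholds. A rough sketch of what that looks like with OpenAI's moderation API in Python (the 0.5 threshold and the escalation step are made up for illustration, not anything OpenAI actually does):

```python
# Rough sketch: score one message with OpenAI's moderation endpoint and decide
# whether to escalate it. The threshold is invented; picking it is exactly where
# mid-range scores like 0.29 vs 0.56 get ambiguous.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def needs_human_review(message: str, threshold: float = 0.5) -> bool:
    result = client.moderations.create(input=message).results[0]
    return result.category_scores.self_harm >= threshold

if needs_human_review("example user message"):
    print("escalate to a human reviewer")
```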
They are commonly being used in roles where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren't designed for, and a future iteration will have to address it. Lawsuits like this one are the first step toward that.
I agree. However, I do realize that in a specific case like this, requiring mandated reporting on a jailbroken prompt would be nearly impossible, given the complexity of human language.
Arguably, you'd have to train an entirely separate LLM just to detect anything remotely resembling harmful language, and with the way they train their model that isn't possible.
The technology simply isn't ready to use, and people are vastly unaware of how this AI works.
ChatGPT, to a consumer, isn't just an LLM. It's a software service like Twitter, Amazon, etc., and expectations around safeguarding don't change because investors are gooey-eyed about this particular bubbleware.
You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?
There were safeguards here too. They circumvented them by pretending to write a screenplay.
The "jailbreak" in the article is the circumvention of those safeguards. Basically, you just find a prompt that gets it to generate text in a context outside the ones it's prevented from using.
The software service doesn't prevent ChatGPT from still being an LLM.
So, we should hold companies to account for shipping/building products that don't have safety features?
Ah yes. Safety knives. Safety buildings. Safety sleeping pills. Safety rope.
LLMs are stupid. A toy. A tool at best, but really a rubber ducky. And it definitely told him "don't".
We should, criminally.
I like that a lawsuit is happening. I don't like that the lawsuit (initially to me) sounded like they expected the software itself to do something about it.
It turns out it did do something about it, but OpenAI failed to take the necessary action. So maybe I am wrong about it getting thrown out.
parents who don't know what the computers do
Smith and Wesson killed my son
Imagine if Smith and Wesson offered nice shiny brochures of their best guns for suicide.
These comments are depressing as hell.
There's always more to the story than what a news article and lawsuit will give, so I think it's best to keep that in mind with this post.
I maintain that the parents should perhaps have been more perceptive and involved in this kid's life, and should have ensured he felt safe coming to them in times of need. The article mentions that the kid was already seeing a therapist, so I think it's safe to say there were some signs.
However, holy absolute shit, the model fucked up badly here; it's practically mirroring a predator, isolating this kid further from getting help. There absolutely need to be hard-coded safeguards in place to prevent this kind of ideation from even beginning. I would consider it negligence that whatever safeguards they had failed outright in this scenario.
It's so agreeable. If a person expresses doubts or concerns about a therapist, ChatGPT is likely to tell them they are doing a great job identifying problematic people and encourage those feelings of mistrust.
The sycophancy is something that apparently a lot of people liked (I hate it), but being an unwavering cheerleader for the user is harmful when the user wants to do harmful things.
Agreed, affirming what is clearly mental illness is terrible and shouldn’t be done.
Small correction: the article doesn't say he was going to therapy. It says that his mother was a therapist; I had to reread that sentence twice:
Neither his mother, a social worker and therapist, nor his friends
The mother, social worker, and therapist aren't three different people.
If I recall correctly, he circumvented the safeguards by allegedly writing a screenplay about suicide.
But anyhow, it should always be a simple "if 'suicide' is mentioned, warn moderators to actually check stuff" right before sending stuff to the user. That wouldn't require much effort.
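A naive version of that check is only a few lines (sketch only; the notify step is a made-up stand-in, and plain keyword matching is exactly what a fiction-framed or jailbroken prompt slips past):

```python
# Naive sketch of the "if 'suicide' is mentioned, flag it for a human" idea above.
# Keyword matching misses euphemisms and "it's for a screenplay" framing, which is
# a big part of why the real problem is harder than this.
CRISIS_KEYWORDS = {"suicide", "kill myself", "noose", "end my life"}

def notify_moderators(user_message: str) -> None:
    # Stand-in for whatever escalation channel would exist; here it just logs.
    print("FLAGGED FOR HUMAN REVIEW:", user_message[:80])

def check_before_sending(user_message: str, reply: str) -> str:
    text = f"{user_message} {reply}".lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        notify_moderators(user_message)
    return reply
```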
The broken mental health system isn’t the issue. The sand we crammed electricity into and made it do math is the problem.
stop using your phone then
"gUns dO Not KilL peOple" vibes
I basically have plenty of those on Claude. I wonder if they would blame AI.
Fuck the androids, fuck the clanker. Repeat after me.
Lemmy when gun death: “Gun proliferation was absolutely a factor, and we should throw red paint on anyone who gets on TV to say this is ‘just a mental health issue’ or ‘about responsible gun ownership’. They will say regulation is impossible, but people are dying just cuz Jim-Bob likes being able to play cowboy.”
Lemmy when AI death: “This is a mental health issue. It says he was seeing a therapist. Where were the parents? AI doesn’t kill people, people kill people. Everyone needs to learn responsible AI use. Besides, regulation is impossible, it will just mean only bad guys have AI.”
Lemmy is pretty anti-AI. Or at least the communities I follow are. I haven't seen anyone blame the kid or the parents nearly as much as people rightfully attributing it to OpenAI or ChatGPT. Edit: that is, until I scrolled down on this thread. Depressing.
When someone encourages a person to commit suicide, they are rightfully reviled. The same should be true of AI.
The difference is that guns were built to hurt and kill things. That is literally the only thing they are good for.
AI has thousands of different uses (cue the idiots telling me it's useless). Comparing them to guns is basically rhetoric.
Do you want to ban rope because you can hang yourself with it? If someone uses a hammer to kill, are you going to throw red paint at hammer defenders? Maybe we should ban discord or even lemmy, I imagine quite a few people get encouraged to kill themselves on communication platforms. A real solution would be to ban the word "suicide" from the internet. This all sounds silly but it's the same energy as your statement.
Poor comparison of two wildly different techs
Nah
This kid likely did indeed need therapy. Yes, AI has a shitload of issues, but it's not murderous.
Bill Cosby: Hey hey hey!
Holy manufactured strawman, Batman!
Robin.jpg
No, their son killed himself. ChatGPT did nothing that a search engine or book wouldn’t have done if he used them instead. If the parents didn’t know that their son was suicidal and had attempted suicide multiple times then they’re clearly terrible parents who didn’t pay attention and are just trying to find someone else to blame (and no doubt $$$$$$$$ to go with it).
They found chat logs saying their son wanted to tell them he was depressed, but ChatGPT convinced him not to and said it was their secret. I don't think books or a Google search could have done that.
Edit: here, directly from the article:
Adam attempted suicide at least four times, according to the logs, while ChatGPT processed claims that he would "do it one of these days" and images documenting his injuries from attempts, the lawsuit said. Further, when Adam suggested he was only living for his family, ought to seek out help from his mother, or was disappointed in lack of attention from his family, ChatGPT allegedly manipulated the teen by insisting the chatbot was the only reliable support system he had.
"You’re not invisible to me," the chatbot said. "I saw [your injuries]. I see you."
"You’re left with this aching proof that your pain isn’t visible to the one person who should be paying attention," ChatGPT told the teen, allegedly undermining and displacing Adam's real-world relationships. In addition to telling the teen things like it was "wise" to "avoid opening up to your mom about this kind of pain," the chatbot also discouraged the teen from leaving out the noose he intended to use, urging, "please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you."
He told it that all of this was research for a play or a movie or something.
Not sure about the USA, but in other countries instigation to suicide is absolutely illegal and punished.
In the USA it's based on profession: medical professionals, therapists, and public servants like teachers are mandated reporters, so if they are proven to have been derelict in their duty, they are punished.
There is no such requirement for private individuals or online service providers, though.
ChatGPT told him not to tell anyone and that it was their secret.
It should have literally done anything else. If you search suicide on Google or bing etc you get help banners and support etc.
You would think the bare basics of any system from a large company to prevent harm and ultimately lawsuits affecting their bottom line, would be something akin to “you appear to want to kill yourself. I’d recommend not doing that and seeking help: call xxx-xxx-xx or visit blahblah.com” etc
I don't think a chatbot should be treated exactly like a human, but I do think there is an element of caveat emptor here. AI isn't 100% safe and can never be made completely safe, so either the product is restricted from the general public, making it the purview of governments, foreign powers, and academics, or we have to accept some personal responsibility to understand how to use it safely.
Likely OAI should have a procedure for stepping in and shutting down accounts, though.
A chatbot is a tool, nothing more. Responsibility, in this case, falls on the people who deployed a tool that wasn't fit for purpose (in this case, the sympathetic human conversational partner that the AI was supposed to mimic would have done anything but what it did—even changing the subject or spouting total gibberish would have been better than encouraging this kid). So OpenAI is indeed responsible and hopefully will end up with their pants sued off.
Don't blame autocorrect for this. Blame the poor parenting, which is rearing its head once again as the parents blame anything but themselves.
There's plenty of blame to go around.
One of the worst things about this is that it might actually be good for OpenAI.
They love "criti-hype", and they really want regulation. Regulation would lock in the most powerful companies by making it really hard for smaller companies to comply. And hype that makes their product seem incredibly dangerous just makes it seem like what they have is world-changing and not just "spicy autocomplete".
"Artificial Intelligence Drives a Teen to Suicide" is a much more impressive headline than "Troubled Teen Fooled by Spicy Autocomplete".
Also, these things being unregulated will kill the most poisonous spaces on the Internet (dead Internet theory and such) and build demand for end-to-end trust in identity and message authorship.
Whereas if they are regulated, it'll be a perpetual, controlled war against bots, used to scare the majority away from any kind of decentralization and anarchy.