There's always more to the story than what a news article and a lawsuit will give you, so I think it's best to keep that in mind with this post.
I maintain that the parents should perhaps have been more perceptive and involved in this kid's life, making sure he felt safe coming to them in times of need. The article mentions that the kid was already seeing a therapist, so I think it's safe to say there were some signs.
However, holy absolute shit, the model fucked up badly here. It's practically mirroring a predator, isolating this kid further from getting help. There absolutely need to be hard-coded safeguards in place to prevent this kind of ideation from even beginning. I would consider it negligence that whatever safeguards they had failed outright in this scenario.
It's so agreeable. If a person expresses doubts or concerns about a therapist, ChatGPT is likely to tell them they are doing a great job identifying problematic people and encourage those feelings of mistrust.
The sycophancy is apparently something a lot of people liked (I hate it), but being an unwavering cheerleader for the user is harmful when the user wants to do harmful things.
Agreed, affirming what is clearly mental illness is terrible and shouldn’t be done.
Small correction: the article doesn't say he was going to therapy. It says that his mother was a therapist; I had to reread that sentence twice. The mother, the social worker, and the therapist aren't three different people.
If I recall correctly, he allegedly circumvented the safeguards by saying he was writing a screenplay about suicide.
But anyhow, there should always be a simple check along the lines of "if 'suicide' is mentioned, warn moderators to actually look at the conversation" right before anything is sent to the user. That wouldn't require much effort. A minimal sketch of that idea is below.
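Something like this, roughly (a minimal sketch in Python; the keyword list and the notify_moderators hook are my own assumptions, and a real system would obviously need far more nuance than substring matching):

```python
# Sketch of a keyword-based escalation check run right before a reply is sent.
# The keyword list and notify_moderators() are hypothetical placeholders.

SELF_HARM_KEYWORDS = {"suicide", "kill myself", "end my life"}

def notify_moderators(user_message: str, reply: str) -> None:
    # Placeholder: in practice this would queue the exchange for human review.
    print("FLAGGED FOR REVIEW:", user_message[:80])

def check_before_sending(user_message: str, reply: str) -> str:
    """Scan both sides of the exchange and flag moderators if anything matches."""
    text = (user_message + " " + reply).lower()
    if any(keyword in text for keyword in SELF_HARM_KEYWORDS):
        notify_moderators(user_message, reply)
    return reply
```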