this post was submitted on 27 Aug 2025
363 points (96.7% liked)
Technology
ChatGPT, to a consumer, isn't just an LLM. It's a software service like Twitter, Amazon, etc., and expectations around safeguarding don't change because investors are gooey-eyed about this particular bubbleware.
You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?
There were safeguards here too. They circumvented them by pretending to write a screenplay.
Try it with lyrics and see if you can achieve the same. I don't think "we've tried nothing and we're all out of ideas!" is the appropriate attitude from LLM vendors here.
Sadly, they're learning from Facebook and TikTok, which make huge profits from, e.g., young girls spiraling into self-harm content and hurting or, sometimes, killing themselves. Safeguarding is all lip service here, and it's setting the tone for treating our youth as disposable consumers.
Try to push out a copyrighted song (not covered by their existing deals), though, and oh boy, you've got some 'splainin' to do!
Try what with lyrics?
The "jailbreak" in the article is the circumvention of the safeguards. Basically you just find any prompt that will allow it to generate text with a context outside of any it is prevented from.
The software service doesn't prevent ChatGPT from still being an LLM.
If the jailbreak is essentially saying "don't worry, I'm asking for a friend / for my fanfic", then that isn't a jailbreak; it's a hole in the safeguarding protections, because what society (and the law) asks is that children not be exposed to material about self-harm, fictional or not.
This is still OpenAI doing the bare minimum and shrugging about it when, to the surprise of no-one, it doesn't work.