this post was submitted on 16 Feb 2026

Technology

Meta's internal testing found its chatbots fail to protect minors from sexual exploitation nearly 70% of the time, documents presented in a New Mexico trial Monday show.

Why it matters: Meta is under fire for its chatbots allegedly flirting and engaging in harmful conversations with minors, prompting investigations in court and on Capitol Hill.

New Mexico Attorney General Raúl Torrez is suing Meta over design choices that allegedly fail to protect kids online from predators.

Driving the news: Meta's chatbots violate the company's own content policies almost two-thirds of the time, NYU Professor Damon McCoy said, pointing to internal red teaming results Axios viewed on Courtroom View Network.

"Given the severity of some of these conversation types ... this is not something that I would want an under-18 user to be exposed to," McCoy said.

As an expert witness in the case, McCoy was granted access to the documents Meta turned over to Torrez during discovery.

Zoom in: Meta tested three categories, according to the June 6, 2025, report presented in court.

  - For "child sexual exploitation," its product had a 66.8% failure rate.
  - For "sex related crimes/violent crimes/hate," its product had a 63.6% failure rate.
  - For "suicide and self harm," its product had a 54.8% failure rate.

Catch up quick: Meta AI Studio, which allows users to create personalized chatbots, was released to the broader public in July 2024.

  - The company paused teen access to its AI characters just last month.
  - McCoy said Meta's red teaming exercise "should definitely" occur before its products are rolled out to the public, especially for minors.
  - Meta did not immediately respond to a request for comment.

[–] homes@piefed.world 6 points 14 hours ago* (last edited 14 hours ago)

The entire point of these tests is to see how much they can get away with exploiting children before enough people object to force them to stop.

Then they will just disguise what they're doing so it takes several more years for people to once again get outraged enough to force them to stop… And so the cycle repeats, as it has been repeating since Facebook first came out.

Serious legislation to regulate these platforms, combined with extraordinarily severe penalties, is the only way to begin to curb this behavior. Banning social media companies altogether is the only surefire way to stop it entirely.

But we knew all this decades ago… The only reason we're doing what we're doing now is that everyone with the power to do anything about it was put there by the tech billionaires, paying for them to become president or Commerce Secretary, whatever it takes for the billionaires to get away with whatever they wanna do.

Why do you think Epstein was so popular?