cross-posted from: https://sh.itjust.works/post/49077840

Artificial intelligence (AI) chatbots are worse at retrieving accurate information and reasoning when trained on large amounts of low-quality content, particularly if the content is popular on social media¹, finds a preprint posted on arXiv on 15 October.

In data science, good-quality data need to meet certain criteria, such as being grammatically correct and understandable, says co-author Zhangyang Wang, who studies generative AI at the University of Texas at Austin. But these criteria fail to capture differences in content quality, he says.
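As a rough illustration of the gap Wang describes, here is a minimal, hypothetical Python sketch: a filter that checks only surface-level criteria of the kind he mentions (grammatical, understandable) will happily pass a viral, low-substance post while rejecting an informative but sloppily punctuated one. Every heuristic, threshold, and example post below is invented for illustration and is not taken from the preprint.

```python
# Hypothetical sketch: surface-level "quality" checks of the kind used to
# clean training corpora, versus the engagement signal they ignore.
# All heuristics, thresholds, and example posts here are invented.

def passes_surface_checks(text: str) -> bool:
    """Naive stand-ins for 'grammatically correct and understandable'."""
    words = text.split()
    if len(words) < 5:                        # too short to be "understandable"
        return False
    if not text[0].isupper():                 # crude grammar proxy
        return False
    if not text.rstrip().endswith((".", "!", "?")):
        return False                          # missing end punctuation
    return True

posts = [
    # (text, likes) -- likes stand in for social-media popularity
    ("You won't BELIEVE what this one weird trick does to your brain!", 50_000),
    ("the preprint reports degraded reasoning after continued pretraining "
     "on high-engagement posts", 12),
]

for text, likes in posts:
    kept = passes_surface_checks(text)
    print(f"kept={kept!s:<5} likes={likes:>6}  {text[:60]}")
```

Run it and the viral clickbait sails through while the informative line is rejected for starting lowercase and lacking end punctuation: surface criteria say nothing about whether popular content is worth training on, which is roughly the gap Wang is pointing at.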

foggy@lemmy.world 23 points 6 days ago (last edited 6 days ago)

😯

You mean to tell me the AI conversations with other AIs that another AI determined were maximized to engage readers with actions that'd keep them on the platform (made them angry) led to some recursive spiral antithetical to critical thinking???

🙀