this post was submitted on 18 Sep 2024
Technology
Let's go, already!
How you can help: If you run a website and can filter traffic by user agent, get a list of known AI scrapers' user-agent strings and selectively redirect their requests to pre-generated AI slop. Regular visitors will see the real content, while the LLM scraper bots will scrape their own slop and, hopefully, train on it.
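A minimal sketch of that check in Python, as it might look in application code. The agent substrings are illustrative (several AI crawlers, such as GPTBot and CCBot, do identify themselves, but you should maintain your own list), and the file paths are hypothetical:

```python
# Sketch of user-agent filtering: serve real pages to browsers and
# pre-generated decoy content to known AI scrapers. The marker strings
# and paths below are illustrative, not an authoritative blocklist.

AI_SCRAPER_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def is_ai_scraper(user_agent: str) -> bool:
    """True if the User-Agent header contains a known AI scraper marker."""
    return any(marker in user_agent for marker in AI_SCRAPER_MARKERS)

def pick_document(user_agent: str) -> str:
    """Choose which file to serve for this request."""
    if is_ai_scraper(user_agent):
        return "/var/www/slop/index.html"   # hypothetical decoy path
    return "/var/www/real/index.html"       # hypothetical real content
```

In practice the same idea is usually expressed as a rewrite rule in the web server itself (nginx, Apache) rather than in application code, which keeps bot traffic from ever reaching your app.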
AI long ago stopped being trained on whatever random stuff came along off the web. Training data is carefully curated and processed these days. Much of it is synthetic, in fact.
These breathless articles about model collapse dooming AI are like discovering that the sun sets at night and declaring solar power to be doomed. The people working on this stuff know about it already and long ago worked around it.
Both can be true:
- Preserved, curated datasets to train AI on, gathered before AI was mainstream. These have the disadvantage of being stuck in time, so to speak.
- New datasets that will inevitably contain AI-generated content, even with careful curation. So, to take the other commenter's analogy, it's a shit sandwich that has some real ingredients, with doodoo smeared throughout.
They're not both true, though. It's actually perfectly fine for a new dataset to contain AI-generated content, especially when it's mixed in with non-AI-generated content. It can even be better in some circumstances; that's what "synthetic data" is all about.
The various experiments demonstrating model collapse have to go out of their way to make it happen, by deliberately recycling model outputs over and over without using any of the methods that real-world AI trainers use to ensure that it doesn't happen. As I said, real-world AI trainers are actually quite knowledgeable about this stuff; model collapse isn't some surprising new development that they're helpless in the face of. It's just another factor to include in the criteria for curating training data sets. It's already a "solved" problem.
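A toy illustration of both halves of that claim (nothing like a production pipeline — the "model" here is just a one-dimensional Gaussian fit): recursively retraining on nothing but the previous generation's own samples collapses the distribution, while mixing fresh real data back in each generation keeps it stable.

```python
import math
import random

def ml_fit(samples):
    """Maximum-likelihood Gaussian fit (biased variance estimator)."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, math.sqrt(var)

def run_generations(generations=200, n=20, fresh_fraction=0.0, seed=1):
    """Fit a Gaussian, train the next generation on its output, repeat.

    fresh_fraction is the share of real N(0, 1) data mixed back in each
    generation; 0.0 is the deliberate worst case from the collapse papers.
    Returns the spread (std) of the final generation's training data.
    """
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: real data
    for _ in range(generations):
        mu, sigma = ml_fit(data)
        n_fresh = int(n * fresh_fraction)
        data = ([rng.gauss(0.0, 1.0) for _ in range(n_fresh)] +
                [rng.gauss(mu, sigma) for _ in range(n - n_fresh)])
    return ml_fit(data)[1]

# Pure recycling: the spread collapses toward zero.
collapsed = run_generations(fresh_fraction=0.0)
# Half the data fresh each round: the spread stays healthy.
stable = run_generations(fresh_fraction=0.5)
```

The point of the sketch is only that "recycle everything, curate nothing" is a choice the experimenters make, not something that happens to a pipeline that keeps pulling in real data.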
The reason these articles keep coming around is that there are a lot of people who don't want it to be a solved problem, and love clicking on headlines that say it isn't. I guess if it makes them feel better they can keep doing that, but supposedly this is a technology community, and I would expect there to be some interest in the underlying truth of the matter.