[–] BlackLaZoR@fedia.io 29 points 2 months ago (2 children)

So they made garbage AI content without any filtering for errors, fed that garbage to the new model, and it turned out to produce more garbage. Incredible discovery!
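
For anyone curious, here's a toy sketch of the feedback loop being described: a Gaussian "model" repeatedly refitted to its own unfiltered samples. The numbers are illustrative, not from the study; the tiny sample size just makes the collapse show up quickly.

```python
import numpy as np

# Each generation "trains" on nothing but the previous generation's
# outputs, with no error filtering -- the scenario described above.
rng = np.random.default_rng(42)
samples = rng.normal(0.0, 1.0, size=10)  # generation 0: real data

for gen in range(1, 31):
    mu, sigma = samples.mean(), samples.std()  # fit a model to current data
    samples = rng.normal(mu, sigma, size=10)   # next training set is pure model output
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# sigma shrinks toward zero over the generations: each refit loses a
# little of the tails, and with no real data feeding back in, the model
# ends up confidently producing an ever-narrower slice of outputs.
```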

[–] RunningInRVA@lemmy.world 19 points 2 months ago (2 children)

Indeed. They discovered that:

shit in = shit out.

[–] homesweethomeMrL@lemmy.world 3 points 2 months ago (1 children)

A fifty-year-old maxim, to be clear. They “just now” “found that out”.

Biggest. Scam. Evar.

[–] stephen01king@lemmy.zip 1 points 2 months ago

Who just found that out?

[–] pennomi@lemmy.world 7 points 2 months ago (2 children)

Yeah, in practice feeding an AI its own outputs is totally fine, as long as it's only the outputs that users have approved.
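
Something like this, as a rough sketch (`generate`, `user_approves`, and `retrain` are hypothetical stand-ins for a real pipeline, not any actual API):

```python
def self_training_round(model, n_outputs, generate, user_approves, retrain):
    """One round of approval-gated self-training (all callables are stand-ins)."""
    outputs = [generate(model) for _ in range(n_outputs)]
    # The filter is the whole point: only outputs a human accepted
    # re-enter the training set, so obvious errors get culled instead
    # of being reinforced in the next generation.
    approved = [o for o in outputs if user_approves(o)]
    return retrain(model, approved)
```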

[–] Bezier@suppo.fi 4 points 2 months ago (1 children)

I would expect some small artifacts to get reinforced in the process if the approved output images aren't perfect.

[–] pennomi@lemmy.world 5 points 2 months ago (1 children)

Only up to the point where humans notice it. It'll make AI images easier to detect while still looking pretty to humans. Probably a win-win.
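
A toy version of that threshold effect (the numbers and the `THRESHOLD` filter are made up; it just stands in for human approval):

```python
import random

random.seed(1)
THRESHOLD = 1.0   # artifact strength at which humans notice and reject
artifact = 0.0    # strength of some subtle artifact in the training set

for gen in range(1, 51):
    # Each generation the artifact drifts, with slight upward pressure
    # from being reinforced in retraining.
    candidate = artifact + random.gauss(0.05, 0.1)
    if candidate < THRESHOLD:             # passes human approval unnoticed
        artifact = max(0.0, candidate)
    # above the threshold the outputs get rejected, so `artifact` stays put
    if gen % 10 == 0:
        print(f"gen {gen}: artifact strength {artifact:.2f}")

# The strength ratchets up to just under THRESHOLD and stalls there:
# sub-threshold artifacts accumulate freely, visible ones get filtered out.
```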

[–] Bezier@suppo.fi 3 points 2 months ago

Didn't think of that, good point.

The inbreeding could also affect larger decisions in sneaky ways, like how it wants to compose the image. It would be bad if the generator started to exaggerate and repeat some weird AI tropes.

[–] WalnutLum@lemmy.ml 1 points 2 months ago

I don't know if assuming that training data isn't going to be more and more poisoned by unsupervised AI-generated content from this point on counts as "in practice".