this post was submitted on 13 Apr 2024
409 points (98.6% liked)

When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe was also relying in part on AI-generated content to train Firefly, including content from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition due to its training data, Adobe never made clear that its model actually used images from some of these same competitors.

[–] bionicjoey@lemmy.ca 4 points 7 months ago (1 children)

When you process an image through the same pipeline multiple times, artifacts will appear and become amplified.

[–] General_Effort@lemmy.world -2 points 7 months ago (1 children)

What's happening here is just nothing like that. There is no amplifier. Images aren't run through a pipeline.

[–] bionicjoey@lemmy.ca 5 points 7 months ago (1 children)

The process of training is itself a pipeline

[–] General_Effort@lemmy.world -1 points 7 months ago (1 children)

Yes, but the model is the end of that pipeline. The image is not supposed to come out again. A model can "memorize" an image, but then you wouldn't necessarily expect an amplification of artifacts. Image generators are not supposed to do lossy compression, though the tech could be used for that.

[–] Grimy@lemmy.world 6 points 7 months ago* (last edited 7 months ago) (1 children)

If an image has errors that are hard to spot with the human eye and the model gets trained on these images, those errors, which came about naturally when training on real data, get amplified.

It's not a model killer, but it is something to watch out for.
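A toy sketch of that feedback loop (an assumed setup for illustration, nothing like a real image model): each "generation" fits a one-parameter Gaussian model to the previous generation's output, and a mild, hard-to-notice cleanup step (clipping outliers) stands in for the subtle errors the pipeline introduces. The learned spread drifts away from the real data generation by generation:

```python
import random
import statistics

random.seed(42)

def fit_and_sample(data, n=400):
    """'Train' a toy one-parameter model (a Gaussian) and emit synthetic data."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return mu, sigma, [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0 trains on real data; every later generation trains on the
# previous generation's synthetic output after a mild "cleanup" that clips
# outliers -- a stand-in for subtle, hard-to-see pipeline errors.
data = [random.gauss(0.0, 1.0) for _ in range(400)]
history = []
for gen in range(10):
    mu, sigma, synthetic = fit_and_sample(data)
    history.append(sigma)
    data = [min(max(x, mu - 2 * sigma), mu + 2 * sigma) for x in synthetic]

# the small per-generation bias compounds: the learned sigma shrinks
print(f"gen 0 sigma={history[0]:.2f}, gen 9 sigma={history[-1]:.2f}")
```

The individual clip barely changes any one generation's data, but because every generation trains on the previous one's output, the bias compounds instead of averaging out.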

[–] General_Effort@lemmy.world -2 points 7 months ago (1 children)

Yes, if you want realism. But that's just one of the things that people look for. Personal preference.

[–] SomeGuy69@lemmy.world 5 points 7 months ago (1 children)

Invisible artifacts still cause result retardation, realistic or not. Like issues with fingers, shadows, eyes, colors, etc.

[–] General_Effort@lemmy.world -2 points 7 months ago

"Retardation"? Seriously?