this post was submitted on 26 Jan 2024
430 points (83.1% liked)

Technology

We Asked A.I. to Create the Joker. It Generated a Copyrighted Image.
Artists and researchers are exposing copyrighted material hidden within A.I. tools, raising fresh legal questions.

[–] KinNectar@kbin.run 65 points 10 months ago (22 children)

Copyright issues aside, can we talk about how this implies accurate recall of an image at a never-before-achievable data compression ratio? If these models can actually recall the images they have been fed, this could be a quantum leap in compression technology.
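
As a hedged back-of-envelope sketch (the checkpoint size and training-set count below are assumed round figures for a Stable-Diffusion-v1-class model, not numbers from the article), the implied ratio looks roughly like this:

```python
# Back-of-envelope: if the model "stored" its training set, how many bytes
# of weights would each training image get? Both figures are assumptions.
model_size_bytes = 2 * 1024**3      # ~2 GB of fp16 weights (assumed)
training_images = 2_000_000_000     # ~2 billion image-text pairs (assumed)

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")
# ~1 byte per image, versus tens of kilobytes for even a small JPEG, so
# general-purpose recall of arbitrary images is implausible; only heavily
# duplicated images tend to be memorized closely enough to reproduce.
```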

[–] linearchaos@lemmy.world 9 points 10 months ago (2 children)

I was thinking about this back when they first started talking about news articles coming back word for word.

There's no way for us to tell how much of the original data, even in a lossy fashion, can be directly recovered. If this were as common as these articles would lead you to believe, you'd just be able to pull anything you wanted out on demand.

But here we have every news agency vying to make headlines about copyright infringement, and we're only seeing an article here and there with a close or relatively close result.

There are millions and millions of people using this technology, and most of us aren't running across blatant full-screen reproductions of stuff.

You can tell from some of the artifacts that they've trained on some watermarked images, because the watermarks kind of show up; but for the most part you wouldn't know who made the watermark if the watermarking companies didn't all use rather distinctive patterns.

The Joker image we're seeing on this news site is quite exceptional, even from a lossy standpoint, but honestly it's just feeding the confirmation bias.
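
For what it's worth, claims like the Joker example can be checked with a perceptual hash, which flags lossy near-duplicates rather than bit-exact copies. A minimal sketch, assuming the Pillow and imagehash libraries and hypothetical file names:

```python
# Compare a generated image to a known original with a perceptual hash.
# The file paths and the distance threshold are hypothetical placeholders.
from PIL import Image   # pip install pillow
import imagehash        # pip install imagehash

generated = Image.open("model_output.png")    # hypothetical generated sample
reference = Image.open("original_still.png")  # hypothetical copyrighted still

distance = imagehash.phash(generated) - imagehash.phash(reference)  # Hamming distance
print(f"perceptual-hash distance: {distance}")
if distance <= 8:  # small distance suggests a near-duplicate, even if lossy
    print("output looks like a (lossy) reproduction of the reference")
```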

[–] mindlesscrollyparrot@discuss.tchncs.de 1 points 10 months ago (1 children)

"how much of the data is the original data"?

Even if you could reverse the process perfectly, what you would prove is that something fed into the AI was identical to a copyrighted image. But the image's license isn't part of that data. The question is: did the license cover use as training data?

In the case of watermarked images, the answer is clearly no, so the AI companies then have to argue that only tiny parts of any given output come from any given source image, and that it therefore still doesn't violate the license. That's pretty questionable when watermarks are visible.

In these examples, it's clear that all parts of the image come directly or indirectly (perhaps some source images were memes based on the original) from the original, so there goes the second line of defence.

The fact that the quality is poor is neither here nor there. You can't run an image through a filter that adds noise and then say it's no longer copyrighted.

[–] wewbull@iusearchlinux.fyi -1 points 10 months ago

The trained model is a work derived from masses of copyrighted material. Distribution of that model is infringement, the same as distributing copies of movies. Public access to that model is infringement, just as a public screening of a movie is.

People keep thinking it's "the picture the AI drew" that's the issue. They're wrong. It's the "AI" itself.
