this post was submitted on 08 Jan 2024

Technology


OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

[–] General_Effort@lemmy.world 12 points 10 months ago

It doesn't work that way. Copyright law does not concern itself with learning. Two things make learning permissible.

For one, no one can own facts or ideas. You can write your own history book, taking facts (but not copying text) from other history books. Ultimately, that's the only way history books get written: by taking facts from previous writings. Or you can take the idea of a superhero and make your own, which is obviously where virtually all of them come from.

Second, you are generally allowed to make copies for your personal use. For example, you may copy audio files so that you have a copy on each of your devices. Or to tie in with the previous examples: You can (usually) make copies for use as reference, for historical facts or as a help in drawing your own superhero.

In the main, these lawsuits won't go anywhere. I won't guarantee that none of the related side issues will be found to have merit, but basically this is all nonsense.

[–] SheeEttin@programming.dev -2 points 10 months ago

Generally you're correct, but copyright law does concern itself with learning. Fair use analysis requires consideration of the purpose and character of the use, with explicit mention of nonprofit educational purposes. It also weighs the effect on the potential market for the original work. (There are other factors, but they're less relevant here.)

So yeah, tracing a comic book to learn drawing is totally fine, as long as that's what you're doing it for. Tracing a comic to reproduce and sell is totally not fine, and that's basically what OpenAI is doing here: slurping up whole works to improve their saleable product, which can generate new works to compete with the originals.

[–] ricecake@sh.itjust.works 3 points 10 months ago

What about the case where you're tracing a comic to learn how to draw with the intent of using the new skills to compete with who you learned from?

The point of the question being: they're not processing the images to make exact duplicates the way tracing would.
It's significantly closer to copying a style, which you can't own.

[–] Eccitaze@yiffit.net 1 points 10 months ago

Still a copyright violation, especially if you make it publicly available and claim the work as your own for commercial purposes. At the very minimum, tracing without fully attributing the original work is considered to be in poor enough taste that most art sites will permaban you for doing it, no questions asked.

[–] ricecake@sh.itjust.works 1 points 10 months ago

In the analogy being developed though, they're not making it available.
The initial argument was that tracing something to practice and learn was fine.

Which is why I said, what if you trace to practice, and then draw something independent to try to compete?

To remove the analogy: most generative AI systems don't actually directly reproduce works unless you jump through some very specific and questionable hoops. (If and when they do, that's a problem and needs to not happen).

A lot of the copyright arguments boil down to "it's wrong for you to look at this picture for the wrong reasons", or to wanting to build a protectionist system for creators.

It's totally legitimate to want to build a protectionist system, but it feels disingenuous to argue that our current system restricts how freely distributed content can be used, beyond its restrictions on making copies and redistributing them.

[–] General_Effort@lemmy.world 1 points 10 months ago

I meant "learning" in the strict sense, not institutional education.

I think you are simply mistaken about what AI is typically doing. You can test your "tracing" analogy by making an image with Stable Diffusion. It's trained only on images from the public internet, so if the generated image is similar to one in the training data, then a reverse image search should turn it up.