this post was submitted on 09 Jan 2024
528 points (98.2% liked)

Technology


‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says::Pressure grows on artificial intelligence firms over the content used to train their products

[–] dhork@lemmy.world 85 points 10 months ago* (last edited 10 months ago) (4 children)

¿Por qué no los dos? (Why not both?)

I don't understand why people are defending AI companies sucking up all human knowledge by saying "well, yeah, copyrights are too long anyway".

Even if we went back to the pre-1976 term of 28 years, renewable once for a total of 56 years, there's still a ton of recent works that AI are using without any compensation to their creators.

I think it's because people are taking this "intelligence" metaphor a bit too far and think that if we restrict how the AI uses copyrighted works, that would restrict how humans use them too. But AI isn't human, it's just a glorified search engine. And at least all a standard search engine does is return a link to the actual content. These AI models chew up the content and spit out something based on it. It simply makes sense that this new process should be licensed separately, and I don't care if it makes some AI companies go bankrupt. Maybe they can work adequate payment for content into their business model going forward.

[–] deweydecibel@lemmy.world 23 points 10 months ago* (last edited 10 months ago) (1 children)

It shouldn't be cheap to absorb and regurgitate the works of humans the world over in an effort to replace those humans and subsequently enrich a handful of Silicon Valley people.

Like, I don't care what you think about copyright law and how corporations abuse it, AI itself is corporate abuse.

And unlike copyright, which does serve its intended purpose of helping small-time creators as much as it helps Disney, the true benefits of AI are overwhelmingly for corporations and investors. If our draconian copyright system is the best tool we have to combat that, good. It's absolutely the lesser of the two evils.

[–] lolcatnip@reddthat.com 6 points 10 months ago (1 children)

Do you believe it's reasonable, in general, to develop technology that has the potential to replace some human labor?

Do you believe compensating copyright holders would benefit the individuals whose livelihood is at risk?

the true benefits of AI are overwhelmingly for corporations and investors

"True" is doing a lot of work here, I think. From my perspective the main beneficiaries of technology like LLMs and Stable Diffusion are people trying to do their work more efficiently, people playing around, and small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc. Maybe you're talking about something different, like deepfakes? The downside of using a vague term like "AI" is that it's too easy to accidentally conflate things that have little in common.

[–] EldritchFeminity@lemmy.blahaj.zone 11 points 10 months ago

There are two general groups when it comes to AI, in my mind: those whose work would benefit from the increased efficiency AI in its various forms can bring, and those who want the rewards of work without putting in the effort of working.

The former include people like artists who could do stuff like creating iterations of concept sketches before choosing one to use for a piece to make that part of their job easier/faster.

Much of the opposition to AI comes from people who are worried about, or who have been harmed by, the latter group. And it all comes down to the way that the data sets are sourced.

These are people who want to use the hard work of others for their own benefit, without giving them compensation; and the corporations fall pretty squarely into this group. As does your comment about "small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc." Before AI, they were free to hire an artist to do that for them. MidJourney, for example, falls into this same category - the developers were caught discussing various artists that they "launder through a fine tuned Codex" (their words, not mine, here for source) for prompts.

If these sorts of generators were using opt-in data sets, paying licensing fees to the creators, or getting permission to use their work some other way, this tech could have tons of wonderful uses, like for those small-time creators. This is how music works. There are entire businesses that run on licensing copyright-free music out to small-time creators for their videos and stuff, but they don't go out recording bands and then splicing their songs up to create synthesizers to sell. They pay musicians to create those songs.

Instead of doing what the guy behind IKEA did when he thought "people besides the rich deserve to be able to have furniture", they're cutting up Bob Ross paintings to sell as part of their collages to people who want to make art without having to actually learn how to make it or pay somebody to turn their idea into reality.

Artists already struggle in a world that devalues creativity (I could make an entire rant on that, but the short of it is that the starving artist stereotype exists for a reason), and the way companies want to use AI like this is to turn the act of creating art into even more of a commodity; to further divest the inherently human part of art from it. They don't want to give people more time to create and think and enjoy life; they merely want to wring even more value out of them more efficiently. They want to take the writings of their journalists and use them to train the AI that they're going to replace them with, like a video game journalism company did last fall with all of the writers they had on staff in their subsidiary companies. They think, "why keep 20 writers on staff when we can have a computer churn out articles for our 10 subsidiaries?"

Last year, some guy took a screenshot of a piece of art that one of the artists for Genshin Impact was working on while livestreaming, ran it through some form of image generator, and then came back threatening to sue the artist for stealing his work.

Copyright laws don't favor the small guy, but they do help him protect his work as a byproduct of serving corporate interests. In the case of the Genshin artist, the fact that they were livestreaming their work and had undeniable, recorded proof that the work was theirs and not some rando's in their stream meant that copyright law would've been on their side if it had actually gone anywhere, rather than just some asshole being an asshole.

Trademark isn't quite the same, but I always love telling the story of the time my dad got a cease and desist letter from a company in another state over the name of a product his small business made. So he did some research, found out that they didn't have the trademark for it in that state, got the trademark himself, and then sent them back their own letter with the names cut out and pasted in the opposite spots. He never heard from them again!

[–] AnneBonny@lemmy.dbzer0.com 5 points 10 months ago (4 children)

I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.

Would you characterize projects like wikipedia or the internet archive as "sucking up all human knowledge"?

[–] MBM@lemmings.world 15 points 10 months ago

Does Wikipedia ever have issues with copyright? If you don't cite your sources, or if you use a copyrighted image, it will get removed.

[–] dhork@lemmy.world 14 points 10 months ago (2 children)

In Wikipedia's case, the text is (well, at least so far) written by actual humans. And no matter what you think about the ethics of Wikipedia editors, they are humans too. Human oversight is required for Wikipedia to function properly. If Wikipedia were to move to a model where some AI crawls the web for knowledge and writes articles based on it with limited human involvement, then it would be similar. But that's not what they are doing.

The Internet Archive is on somewhat less steady legal ground (see the recent legal actions), but in its favor, it is only storing information for archival and lending purposes, not using that information to generate derivative works which it then sells. (And it is the lending that is getting it into trouble right now, not the archiving.)

[–] phillaholic@lemm.ee 4 points 10 months ago

The Internet Archive has no ground to stand on at all. It would be one thing if they only allowed downloading of orphaned or unavailable works, but that’s not the case.

[–] randon31415@lemmy.world 2 points 10 months ago

Wikipedia has had bots writing articles since the 2000 census information was first published. The 2000 census article-writing bot was actually the impetus for Wikipedia to create the WP:bot policies.

[–] assassin_aragorn@lemmy.world 8 points 10 months ago (1 children)

Wikipedia is free to the public. OpenAI is more than welcome to use whatever they want if they become free to the public too.

[–] afraid_of_zombies@lemmy.world -3 points 10 months ago (1 children)

It is free. They have a paid model with more stuff, but the baseline model is more than enough for most things.

[–] assassin_aragorn@lemmy.world 3 points 10 months ago (1 children)

There should be no paid model if they aren't going to pay for training material.

[–] afraid_of_zombies@lemmy.world -4 points 10 months ago (1 children)

There also shouldn't be goalpost-moving in lemmy threads, and yet here we are. Can you move the goalposts back into position for me?

[–] assassin_aragorn@lemmy.world 2 points 10 months ago (1 children)

My position has always been that OpenAI can either pay for training materials or make money solely on advertisements. Having a paid version is completely unacceptable if they aren't paying for training.

[–] afraid_of_zombies@lemmy.world -2 points 10 months ago

OpenAI is more than welcome to use whatever they want if they become free to the public too.

My position has always been

Left the goalposts behind and went on to gaslighting.

[–] afraid_of_zombies@lemmy.world -1 points 10 months ago

The copyright shills in this thread would shut down Wikipedia.

[–] lolcatnip@reddthat.com 2 points 10 months ago (3 children)

I don't understand why people are defending AI companies

Because it's not just big companies that are affected; it's the technology itself. People saying you can't train a model on copyrighted works are essentially saying nobody can develop those kinds of models at all. A lot of people here are naturally opposed to the idea that the development of any useful technology should be effectively illegal.

[–] assassin_aragorn@lemmy.world 12 points 10 months ago

This is frankly very simple.

  • If the AI is trained on copyrighted material and doesn't pay for it, then the model should be freely available for everyone to use.

  • If the AI is trained on copyrighted material and pays a license for it, then the company can charge people for using the model.

If information should be free and copyright is stifling, then OpenAI shouldn't be able to charge for access. If information is valuable and should be paid for, then OpenAI should have paid for the training material.

OpenAI is trying to have it both ways. They don't want to pay for information, but they want to charge for information. They can't have one without the other.

[–] BURN@lemmy.world 9 points 10 months ago (1 children)

You can make these models just fine using licensed data. So can any hobbyist.

You just can’t steal other people’s creations to make your models.

[–] lolcatnip@reddthat.com -1 points 10 months ago (1 children)

Of course it sounds bad when you use the word "steal", but I'm far from convinced that training is theft, and using inflammatory language just makes me less inclined to listen to what you have to say.

[–] BURN@lemmy.world 8 points 10 months ago (2 children)

Training is theft imo. You have to scrape and store the training data, which amounts to copyright violation based on replication. It’s an incredibly simple concept. The model isn’t the problem here, the training data is.

[–] lolcatnip@reddthat.com -2 points 10 months ago

Training is theft imo.

Then it appears we have nothing to discuss.

[–] dhork@lemmy.world 7 points 10 months ago (1 children)

I am not saying you can't train on copyrighted works at all, I am saying you can't train on copyrighted works without permission. There are fair use exemptions to copyright, but training AI shouldn't fall under them. AI companies will have to acknowledge this and get permission (probably by paying money) before incorporating content into their models. They'll be able to afford it.

[–] lolcatnip@reddthat.com 1 points 10 months ago (1 children)

What if I do it myself? Do I still need to get permission? And if so, why should I?

I don't believe the legality of doing something should depend on who's doing it.

[–] BURN@lemmy.world 3 points 10 months ago

Yes you would need permission. Just because you’re a hobbyist doesn’t mean you’re exempt from needing to follow the rules.

As soon as it goes beyond a completely offline, personal, non-replicable project, it should be subject to the same copyright laws.

If you purely create a data agnostic AI model and share the code, there’s no problem, as you’re not profiting off of the training data. If you create an AI model that’s available for others to use, then you’d need to have the licensing rights to all of the training data.