Is this really true? Breaking the law implies contravening some legislation, which, in the case of simply drawing a copyrighted character, you wouldn't be doing in most jurisdictions. It's a civil issue: if a company holds the rights to a character and an artist starts selling images of that character, the rights holder might sue that artist for loss of income or unauthorised use of their intellectual property.
Regardless, all human artists have learned from images of characters which are the intellectual property of some company.
If I hired a human as an employee and asked them to draw me a picture of the Joker from some movie, there's no contravention of any law I'm aware of, and the rights holder wouldn't have much of a claim against me.
As a layperson who hasn't put much thought into this, I find the outcome of a claim against these image generators unclear. IMO, it will come down to whether or not a model's abilities are significantly derived from a specific category of works.
For example, if a model learned to draw superheroes exclusively from watching Marvel movies, then that's probably copyright infringement. OTOH, if it learned to draw superheroes from a wide variety of published works, then IMO it's much more difficult to make a case that the model is undermining the rights holder's revenue.
Copyright law is incredibly far-reaching and only enforced up to a point. This is a bad thing overall.
When you actually learn what companies could do with copyright law, you realise what a mess it is.
In the UK for example you need permission from a composer to rearrange a piece of music for another ensemble. Without that permission it's illegal to write the music down. Even just the melody as a single line.
In the US, it's standard practice to first write the arrangement and then ask the composer to license it. Then you sell it and both collect and pay royalties.
If you want to arrange a piece of music in the UK by a composer with an American publisher, you essentially start by breaking the law.
This all gives massive power to corporations over individual artists. It becomes a legal fight the corporation can always win due to costs.
Corporations get the power of selective enforcement, exercised whenever they think they will turn a profit.
AI is creating an image based on someone else's property. The difference is that the AI is owned by a corporation.
It's not legitimate to claim the creation is solely that of the one giving the instructions. Those instructions are not in themselves creating the work.
The act of creating this work includes building the model, training the model, maintaining the model, and giving it that instruction.
So everyone involved in that process is liable for the results to differing amounts.
Ultimately the most infringing part of the process is the input of the original image in the first place.
So we now get to see if a massive corporation or two can claim an AI can be trained on and output anything publicly available (not just public domain) without infringing copyright. An individual human can't.
I suspect the work of training a model solely on public domain material will be complete about the time all these cases get settled in a few years.
Then controls will be put on training data.
Then barriers to entry to AI will get higher.
Then corporations will be able to own intellectual property and AI models.
The other way this can go is AI being allowed to break copyright, which then sets a precedent that weakens a lot of copyright, and the corporations lose a lot of power and control.
The only reason we see this as a fight is because corporations are fighting each other.
If AI needs data and can't simply take it publicly from published works, the value of licensing that data becomes a value boost for the copyright holder.
The New York Times has a lot to gain.
There are explicit, limited exceptions to copyright law. Education is one; academia and research are others.
All tip into infringement the moment the use becomes commercial.
AI being educated and trained isn't infringement until someone gains from published works or prevents the copyright holder from gaining from it.
This is why writers are at the forefront. Writing is the first area where AI can successfully undermine the need to read the New York Times directly, reducing the income from the intellectual property it's been trained on.
This isn't the issue. The copyright infringement is the creation of the model using the copyrighted work as training data.
All NYT is doing is demonstrating that the model must have been created using copyrighted works, and hence that infringement has taken place. They are not stating that the model is committing an infringement itself.
I agree, but it is useful to ask if a human isn't allowed to do something, why is a machine?
By putting them on the same level, a human creating an output versus an AI creating an output, it shows that an infringement has definitely taken place.
I find it helpful to explain it to people as the AI breaching copyright simply because from that angle the law can logically be applied in both scenarios.
Showing a human a piece of copyright material available to view in public isn't infringement.
Showing a generic AI a piece of copyright material available to view in public isn't infringement.
The infringing act is the production of the copy.
By law a human can decide to do that or not, they are liable.
An AI is a program which, in this case, is designed to have a tendency to copy, and the programmer is responsible for that part. That's not necessarily infringement, because the programmer doesn't feed in copyright material.
But the trainer showing an AI known to have a tendency to copy some copyright material isn't much different to someone putting that material on a photocopier.
I get many replies from people who think this isn't infringement because they believe a human is actually allowed to do it. That's the misunderstanding some have. The framing of the machine making copies and breaching copyright helps. Even if ultimately I'm saying the photocopier is breaching copyright to begin with.
Ultimately someone is responsible for this machine, and that machine is breaking copyright. The actions used to make, train, and prompt the machine lead to the outcome.
As the AI is a black box, an AI becomes a copyright infringing photocopier the moment it's fed copyright material. It is in itself an infringing work.
The answer is to train a model solely on public domain work and I'd love to play around with that and see what it produces.
That's called fair use. It's a non-issue.