this post was submitted on 02 Mar 2026
274 points (97.9% liked)
Technology
Fair enough, I see what you're saying.
I'll go ahead and share the quote from the court's decision for context:
I'm a little bit uncertain based on this summary of the judgement by the Stanford library on copyright and fair use:
Why are they saying that "the work was never eligible for copyright in the first place"? Because Thaler claimed that the AI itself made the work? This all feels a bit like Schroedinger's Copyrighted Work to me... the work exists, so who made it?
Generative AI fans would have you believe that they are the author and copyright holder, because they wrote a prompt.
AI companies might want to argue, like Thaler, that they made the AI, so they are the author and copyright holder.
My personal opinion is that the prompt and code are both relatively insignificant in comparison to the training data from which the probabilistic machine learning model is derived. The prompt would do nothing without the model, and OpenAI themselves said the quiet part out loud when they argued in court that the creation of a model such as theirs would be "impossible" to achieve without training on vast amounts of copyrighted works.
Clearly the training data itself is the most important piece of the system, which makes a lot of sense to those of us who understand how machine learning and "AI" training actually work on a technical level. They've admitted in plain English that their entire product and for-profit business model relies on the use of other people's work as training data. Sounds to me like they have derived considerable value from other people's work without any sort of license or compensation...
By that logic alone, I would argue that the real copyright holders of generative AI works ought to be, at least in part, the people who provided (wittingly or unwittingly) the training data. They are the ones who made this whole social experiment possible, after all. Data is the new code, so I'm not sure why people expect to be able to use it for free in an unrestricted way.
It's simply not the court's job to determine this, in this particular case. Which is why it's so frustrating that this particular case keeps ending up under headlines claiming it established that "AI generated art can't be copyrighted."
All the rest of this argument is out of scope for this case; you'd need to look to other cases. You can argue and opine however you like about what you think the outcomes should be, but that doesn't change what the outcomes of those cases actually end up being.