this post was submitted on 08 Jan 2024
334 points (96.1% liked)
Technology
I disagree, and if I'm misrepresenting the issue, I feel like you're doing so equally. LLMs can do far more than simply write stories; storytelling is just one capability among many. Can one write stories in the style of GRRM? I suppose, but honestly, doesn't GRRM also borrow a lot of inspiration from other authors? Any writer claiming to be so unique that they aren't borrowing from other writers is full of shit.
I'm not a lawyer or legal expert; I'm just giving a layman's opinion on the topic. I hope Sam Altman and his merry band get nailed to the wall, I really do. It's going to be a clusterfuck of endless legal battles for the foreseeable future, especially now that OpenAI isn't even pretending to be a nonprofit anymore.
This story is about a non-fiction work.
What is the purpose of a non-fiction work? It's to give the reader further knowledge on a subject.
Why does an LLM maker train their model on a non-fiction work? To be able to act as a substitute source of that knowledge.
End result is that readers consult the LLM instead of the original work.
So, not only have they stolen the author's work, they've stolen their income and reputation.
If you're using an LLM as any form of authoritative source (and literally every LLM specifically warns you not to do that), then you're going to have a bad time. No one is using them to learn in any serious capacity. Ideally, the AI should absolutely be citing its sources, and if someone figures out how to do that reliably, they'll be made quite rich, I'd imagine. In my opinion, the fiction writers have a stronger case than the non-fiction authors (I believe the fiction writers' class action against OpenAI from September is still ongoing).
For someone who claimed to not be a fan of OpenAI, you sure do know all the fan arguments against regulation for AI.
I'm not here to argue the finer points, and in general I simply try to aim for the practical actions that lead to better circumstances. I agree with many of your points.
This lawsuit won't fix anything, but it will slow down OpenAI and its ability to loot culture and content for all its value. I see it as a foot in the door for less economically capable artists.
Lawsuits are not isolated incidents. The outcome of this will have far reaching impacts on the future of how people's work is treated in regards to AI and training data.