this post was submitted on 08 Jan 2024
334 points (96.1% liked)


Microsoft, OpenAI sued for copyright infringement by nonfiction book authors in class action claim

The new copyright infringement lawsuit against Microsoft and OpenAI comes a week after The New York Times filed a similar complaint in New York.

[–] CosmoNova@lemmy.world -2 points 10 months ago (2 children)

I hear those kinds of arguments a lot, though usually from the exact same people who claimed nobody would be convicted of fraud for NFT and crypto scams when those were at their peak. The days of the wild west internet are long over.

Theft in the digital space is a very real thing in the eyes of the law, especially when it comes to copyright infringement. It's wild to me how many people seem to think Microsoft will just get a freebie here because they helped pioneer a new technology for personal gain. Copyright holders have a very real case here, and I'd argue even a strong one.

Even using user data (that they own legally) for machine learning could get them into trouble in some parts of the developed world, because users 10 years ago couldn't anticipate it would be used that way and therefore never gave their full consent to it.

[–] theneverfox@pawb.social 2 points 10 months ago

Personally, I think public info is fair game - consent or not, it's public. They're not sharing the source material, and the goal was never plagiarism. There was a period where it became coherent enough to get very close to plagiarism, but it's been moving past that phase very quickly

Microsoft is way over the line for me, especially with how they scraped private GitHub repos (and whatever Google and Facebook just haven't been caught doing with private data). But I see that more as being a bad steward of private data: they shouldn't be looking at it, their AI shouldn't be looking at it, the public shouldn't be able to see it, and they probably failed on all counts.

Granted, I think copyright is a bullshit system. Normal people don't get any protection, because you need to pay to play. Being unable to defend it means you lose it, and in most situations you're going to spend way more on legal costs than you could possibly get back.

I also think the most important thing is that this tech is spread everywhere, because we can't have one group in charge of the miracle technology... It's too powerful.

Google has all the data they could need, they've bullied the web into submission... They don't have to worry about copyright, they control the largest ad network and dominate search (at least for now).

It sucks that you can take any artist's visual work and fine-tune a network to churn out endless rough facsimiles in a few days. I genuinely get how violating that must feel.

But artists are going to be hurt when the corporate work dries up in favor of a much cheaper option, and they're going to have to deal with the flood of AI work... Copyright won't help them; it's too late for it to even slow things down.

If companies did something wrong, hash it out in court. My concern is that lawmakers will pass laws on this that claim to be for the artists, but effectively gatekeep AI to tech giants.

[–] General_Effort@lemmy.world 3 points 10 months ago (1 children)

> Even using user data (that they own legally) for machine learning could get them into trouble in some parts of the developed world because users 10 years ago couldn‘t anticipate it could be used that way and not give their full consent for that.

Where, for example?

[–] CosmoNova@lemmy.world 1 points 10 months ago (1 children)

The European Union, for example.

[–] General_Effort@lemmy.world 2 points 10 months ago (1 children)

That's not right. It explicitly is legal in the EU.

[–] CosmoNova@lemmy.world 0 points 10 months ago (1 children)

That is not how the EU works. Member states can get together to sanction and penalize behavior, but just because the EU generally allows something doesn't mean all member states have to abide. Different constitutions and all. Besides, I'd like to know where exactly any EU resolution explicitly allows corporations to throw any data they have at any technology, or at LLMs specifically, when nobody ever gave consent to that. Corporations have to be quite specific about how they process your data, and broadly saying "machine learning stuff" 10 years ago isn't really waterproof.

[–] General_Effort@lemmy.world 1 points 10 months ago

No. EU legislation often has so-called opening clauses that allow member states to tune "EU laws" to their needs, but that's not the default behavior.

You seem to have the GDPR in mind. It regulates personal data, meaning data that can be tied to a person. If the data cannot be tied to a person, the GDPR does not apply.
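The distinction above (personal vs. non-personal data) is roughly what data-scrubbing pipelines try to enforce before records are reused for training. A minimal illustrative sketch, with entirely hypothetical field names and rules (real GDPR compliance is a legal question, not a regex):

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub_record(record: dict) -> dict:
    """Strip or pseudonymize fields that could tie a record to a person.

    Hypothetical example only: field names ("name", "email", "user_id",
    "text") are assumptions, not any real schema.
    """
    scrubbed = dict(record)
    # Drop direct identifiers outright.
    for field in ("name", "email", "ip_address"):
        scrubbed.pop(field, None)
    # Replace a stable user id with a one-way hash. Note: the GDPR still
    # treats pseudonymized data as personal data, since the mapping could
    # in principle be reversed; full anonymization would drop the id.
    if "user_id" in scrubbed:
        digest = hashlib.sha256(str(scrubbed["user_id"]).encode()).hexdigest()
        scrubbed["user_id"] = digest[:16]
    # Redact email addresses embedded in free text.
    if "text" in scrubbed:
        scrubbed["text"] = EMAIL_RE.sub("[email]", scrubbed["text"])
    return scrubbed

record = {"user_id": 42, "email": "a@b.com", "text": "contact me at a@b.com"}
print(scrub_record(record))
```

Even a sketch like this shows why "we anonymized it" is contested: hashing an id is pseudonymization, not anonymization, so the resulting data can still fall under the regulation.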