this post was submitted on 28 Dec 2023
328 points (97.4% liked)

The New York Times sues OpenAI and Microsoft for copyright infringement

The New York Times has sued OpenAI and Microsoft for copyright infringement, alleging that the companies' artificial intelligence technology illegally copied millions of Times articles to train ChatGPT and other services to provide people with information, technology that now competes with the Times.

[–] phoneymouse@lemmy.world 56 points 11 months ago (5 children)

There is something wrong when search and AI companies extract all of the value produced by journalism for themselves. Sites like Reddit and Lemmy also have this issue. I’m not sure what the solution is. I don’t like the idea of a web full of paywalls, but I also don’t like the idea of all the profit going to the ones who didn’t create the product.

[–] kromem@lemmy.world 28 points 11 months ago* (last edited 11 months ago) (1 children)

What's the value of old journalism?

It's a product where the value curve is heavily weighted towards recency.

In theory, the greatest value theft is when the AP writes a piece and two dozen other 'journalists' copy it, changing the text just enough not to get sued. That's completely legal, but it's what effectively killed investigative journalism.

An LLM taking years-old articles and predicting them until it effectively learns the relationships between language itself and the events those articles describe isn't some inherent value theft.
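To make the "predicting articles until relationships are learned" idea concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in a small corpus, then predicts the most likely next word. This is a toy stand-in, not how a real LLM is built, but it shows how pure next-token prediction extracts statistical relationships from old text rather than copying any one article:

```python
from collections import Counter, defaultdict

def train(corpus: list[str]) -> dict:
    """Learn, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    for text in corpus:
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent follower of `word` seen during training."""
    return model[word.lower()].most_common(1)[0][0]

# Hypothetical two-line "archive" of old articles.
model = train([
    "the times sued openai",
    "the times published an article",
])
print(predict_next(model, "the"))  # "times" follows "the" most often
```

The model ends up encoding relationships ("times" tends to follow "the") rather than storing the articles themselves, which is the distinction the comment is drawing.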

It's not the training that's the problem, it's the application of the models that needs policing.

Like if someone took an LLM, fed it recently published news stories in the prompts via RAG, and had it rewrite them just differently enough that no one needed to visit the original publisher.
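The scenario above is essentially a retrieval-augmented generation (RAG) loop: fetch the freshest matching articles, stuff them into a prompt, and ask the model to paraphrase. A minimal sketch in Python, where the article list and the `rewrite` step are hypothetical stand-ins (a real system would scrape live sources and call an actual LLM):

```python
def score(query: str, doc: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, articles: list[str], k: int = 1) -> list[str]:
    """Return the k articles most relevant to the query."""
    return sorted(articles, key=lambda a: score(query, a), reverse=True)[:k]

def rewrite(article: str) -> str:
    """Placeholder for an LLM call like 'rewrite this in your own words'."""
    return f"[paraphrase of] {article}"

# Hypothetical freshly published stories.
articles = [
    "OpenAI sued by the New York Times over training data",
    "Local weather: sunny with a chance of rain",
]
context = retrieve("New York Times sues OpenAI", articles)
print(rewrite(context[0]))
```

The point of the sketch is that the policing problem sits in this application layer (retrieve fresh content, paraphrase, republish), not in the one-time training on old archives.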

Even if it stays legal for humans to do that (which we really might want to revisit, or at least restrict with industry-specific rules), maybe we should have different rules for the models.

But trying to claim that an LLM which lets coma patients communicate, helps solve self-driving problems, or diagnoses medical issues is stealing the value of old NYT articles in doing so is not an argument I see much value in.

[–] ChucklesMacLeroy@lemmy.world 2 points 10 months ago

Really gave me a whole new perspective. Thanks for that.

[–] Kecessa@sh.itjust.works 14 points 11 months ago* (last edited 11 months ago)

The solution is to make these companies responsible for tracking their profit per media source, tax them, and redistribute that money based on the tracking info. They're able to track every page you visit; it's complete bullshit when they say they don't know how much they make from each place their ads are displayed.

[–] AllonzeeLV@lemmy.world 11 points 10 months ago (1 children)

but I also don’t like the idea of all the profit going to the ones who didn’t create the product.

Should... should we tell him?

[–] kilgore_trout@feddit.it 10 points 10 months ago

Tell them instead of mocking them.

Yes, "that's how the world works", but that doesn't mean we should stop trying to change it.

[–] DogWater@lemmy.world 2 points 10 months ago

AI isn't creating the product. It consumed it.