this post was submitted on 11 Jan 2024
289 points (96.8% liked)


At a Senate hearing on AI’s impact on journalism, lawmakers backed media industry calls to make OpenAI and other tech companies pay to license news articles and other data used to train algorithms.

[–] Grimy@lemmy.world 55 points 10 months ago* (last edited 10 months ago) (15 children)

“What would that even look like?” asks Sarah Kreps, who directs the Tech Policy Institute at Cornell University. “Requiring licensing data will be impractical, favor the big firms like OpenAI and Microsoft that have the resources to pay for these licenses, and create enormous costs for startup AI firms that could diversify the marketplace and guard against hegemonic domination and potential antitrust behavior of the big firms.”

As our economy becomes more and more driven by AI, legislation like this will guarantee Microsoft and Google get to own it.

[–] Motavader@lemmy.world 28 points 10 months ago* (last edited 10 months ago) (2 children)

Yes, and they'll use legislation to pull up the ladder behind them. It's a form of Regulatory Capture, and it will absolutely lock out small players.

There are open source AI training datasets, though; the question is whether LLMs can be trained as accurately with them.

[–] Mechanize@feddit.it 8 points 10 months ago (2 children)

Any foundation model is trained on a subset of common crawl.

All the data in there is, arguably, copyrighted by one individual or another. There is no equivalent dataset, open- or closed-source.

Every single post, page, blog, and site has a copyright holder. In the last year, big companies have started changing their TOS so that they are able to use, relicense, and generally sell the data hosted on their services for AI training. So potentially some small parts of Common Crawl will become licensable in bulk, or obtainable directly from the source.

This still leaves out the majority of the data used today, directly or indirectly, even if you were willing to pay, because it is infeasible to track down and contract every single rights holder.
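To put a number on that infeasibility, here is a quick back-of-envelope sketch in Python. Every figure in it is an illustrative assumption, not a measured value:

```python
# Back-of-envelope: clearing rights for a Common Crawl-scale corpus.
# All figures below are illustrative assumptions, not measured values.
domains = 30_000_000                  # assumed distinct domains in a crawl
minutes_per_clearance = 10            # assumed time to find and contract one rights holder
work_minutes_per_year = 60 * 40 * 48  # one full-time worker: 40 h/week, 48 weeks/year

person_years = domains * minutes_per_clearance / work_minutes_per_year
print(f"{person_years:,.0f} person-years of clearance work")
```

Even with these generous assumptions, it works out to thousands of person-years before a single model is trained.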

On the other side, there has been work on using less but more heavily curated data, which could potentially produce good small, domain-specific models. Still, they will not be like the ones we currently have, and the open source community will not have access to the same amount and quality of data.
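As a toy illustration of what "heavily curated" can mean in practice, here is a minimal sketch of a curation pass: a length filter, a crude language heuristic, and exact deduplication. Real pipelines are far more involved (fuzzy dedup, quality scoring, toxicity filtering), so treat this purely as a sketch:

```python
import hashlib

def curate(records, min_words=50, max_words=5000):
    """Toy curation pass: length filter, crude prose heuristic, exact dedup."""
    seen = set()
    kept = []
    for text in records:
        words = text.split()
        if not (min_words <= len(words) <= max_words):
            continue  # too short or too long to be useful training prose
        # crude heuristic: a high fraction of alphabetic characters
        # suggests natural-language prose rather than markup or tables
        letters = sum(c.isalpha() for c in text)
        if letters / max(len(text), 1) < 0.6:
            continue
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate already kept
        seen.add(digest)
        kept.append(text)
    return kept
```

The point is that each filter shrinks the corpus, which is exactly the trade-off: cleaner data, but much less of it.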

It's an interesting problem that I'm personally really interested to see where it leads.

[–] Motavader@lemmy.world 3 points 10 months ago (1 children)

Thanks for the link to Common Crawl; I didn't know about that project but it looks interesting.

That's also an interesting point about heavily curated datasets. Would something like that be able to overcome some of the bias in current models? For example, if you were training a facial recognition model, you could use a curated, open source dataset with representative samples of all races and genders to try to reduce racial bias. Anyone training a facial recognition model, for any purpose, could then use a training set that has been peer reviewed for accuracy.
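A representative training set like that could be assembled with a simple stratified sampling pass. A minimal sketch, assuming examples are already tagged with a group label (the function name and shape here are made up for illustration):

```python
import random
from collections import defaultdict

def balanced_sample(samples, per_group, seed=0):
    """Draw an equal number of examples from each group.

    `samples` is a list of (group_label, example) pairs. Groups with fewer
    than `per_group` examples are kept whole and reported, so reviewers can
    see exactly where the dataset falls short of representativeness.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for group, example in samples:
        by_group[group].append(example)

    subset, underrepresented = [], []
    for group, examples in sorted(by_group.items()):
        if len(examples) < per_group:
            underrepresented.append(group)
            subset.extend((group, e) for e in examples)
        else:
            subset.extend((group, e) for e in rng.sample(examples, per_group))
    return subset, underrepresented
```

Reporting the underrepresented groups, rather than silently padding them, is what would make the peer review you mention possible.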

[–] General_Effort@lemmy.world 3 points 10 months ago

Face recognition is probably dead as an open endeavor. The surveillance aspect makes it too controversial. I mean not only that we will not see open source work on this, but that any remaining work will happen behind closed doors.

In general, a major problem is that it is often not clear what reducing bias means. With face recognition, it is clear: we just want it to work equally well for everyone. With genAI it is unclear. E.g., you type "US president" into an image generator. The historical fact is that all US presidents have been male, and all but one white. What's the unbiased output?

One answer is that it should reflect who is eligible for the US presidency. But in the future, one would expect far more people to be of "mixed race". So would that perhaps be biased against "interracial marriage"? In either case, one could accuse the makers of covering up historical injustice. I think in practice, people want image generators that just give them what they want with minimum fuss; wants which are themselves probably shaped by social expectations.

In any case, such curated datasets are used to fine-tune models trained on uncurated data. I don't think it is known exactly what such a dataset should look like to yield an unbiased model (however defined).

[–] wikibot@lemmy.world 3 points 10 months ago

Here's the summary for the wikipedia article you mentioned in your comment:

Common Crawl is a nonprofit 501(c)(3) organization that crawls the web and freely provides its archives and datasets to the public. Common Crawl's web archive consists of petabytes of data collected since 2008. It completes crawls generally every month. Common Crawl was founded by Gil Elbaz. Advisors to the non-profit include Peter Norvig and Joi Ito. The organization's crawlers respect nofollow and robots.txt policies. Open source code for processing Common Crawl's data set is publicly available. The Common Crawl dataset includes copyrighted work and is distributed from the US under fair use claims. Researchers in other countries have made use of techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in other legal jurisdictions. As of March 2023, in the most recent version of the Common Crawl dataset, 46% of documents had English as their primary language (followed by German, Russian, Japanese, French, Spanish and Chinese, all below 6%).

^article^ ^|^ ^about^

[–] General_Effort@lemmy.world 2 points 10 months ago (1 children)

These open datasets are used to fine-tune LLMs for specific tasks. But first, LLMs have to learn the basics by being trained on vast amounts of text. At present, there is no chance of doing that with open source data alone.
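The scale gap is the crux. A rough comparison, using a publicly reported ballpark for pretraining (Llama 2 was trained on roughly 2 trillion tokens) and an assumed size for a large open instruction-tuning set:

```python
# Rough scale comparison between pretraining and fine-tuning data.
pretraining_tokens = 2_000_000_000_000  # ~2T tokens, order of Llama 2's pretraining corpus
finetune_tokens = 50_000_000            # assumed size of a large open fine-tuning set

ratio = pretraining_tokens / finetune_tokens
print(f"pretraining uses roughly {ratio:,.0f}x more text than fine-tuning")
```

Whatever the exact figures, the pretraining corpus is several orders of magnitude larger, and that is the part open datasets cannot currently cover.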

If fair use is cut down, you can forget about it. It would arguably be unconstitutional, though.

That's not even considering the dystopian wishes to expand copyright even further. Some people demand that the model owner should also own the output. Well, some of these open datasets are made with LLMs like ChatGPT.

[–] wewbull@iusearchlinux.fyi 0 points 10 months ago (1 children)

> If fair use is cut down...

It's not a case of cutting down fair use. It's a case of enforcing current fair use limits.

[–] General_Effort@lemmy.world 1 points 10 months ago

Can you give an example of something that is outside fair use?

Just in case there is confusion here: Obviously there is no past precedent covering exactly these new circumstances, but that does not put new technologies outside the law. E.g., freedom of speech and of the press apply to the internet, even though no printing press is involved.
