This post was submitted on 17 Apr 2025
92 points (96.9% liked)

Technology

[–] ShellMonkey@lemmy.socdojo.com 75 points 4 days ago (1 children)

You can download a torrent of the whole thing; they don't need to give it to anyone.

https://en.m.wikipedia.org/wiki/Wikipedia:Database_download
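
If you want to poke at a dump yourself, here's a minimal sketch of stream-parsing one in Python without decompressing it to disk. The file name and namespace URI are assumptions based on the usual enwiki-latest-pages-articles dumps; check the root `<mediawiki>` element of whatever file you actually download.

```python
import bz2
import xml.etree.ElementTree as ET

# Sketch: stream-parse a pages-articles dump (bzip2-compressed XML)
# without decompressing to disk or loading it all into memory.
# File name and namespace are assumptions; the export namespace
# version (0.10, 0.11, ...) depends on the dump you download.
DUMP = "enwiki-latest-pages-articles.xml.bz2"
NS = "{http://www.mediawiki.org/xml/export-0.11/}"

with bz2.open(DUMP, "rb") as f:
    for _, elem in ET.iterparse(f, events=("end",)):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text") or ""
            print(title, len(text))
            elem.clear()  # drop parsed children to keep memory flat
```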

[–] eager_eagle@lemmy.world 22 points 4 days ago (1 children)

This release is powered by our Snapshot API’s Structured Contents beta, which outputs Wikimedia project data in a developer-friendly, machine-readable format. Instead of scraping or parsing raw article text, Kaggle users can work directly with well-structured JSON representations of Wikipedia content—making this ideal for training models, building features, and testing NLP pipelines.
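
To illustrate what that buys you over raw wikitext (the actual schema may differ; check the Kaggle dataset docs): assuming the release ships JSON Lines with fields like `name` and `abstract`, consuming it is just a loop, with no wikitext parsing needed.

```python
import json

# Sketch: walk a JSON Lines snapshot file and print title + abstract.
# The file name and field names ("name", "abstract") are assumptions
# about the Structured Contents schema; verify against the dataset docs.
with open("enwiki_structured_contents.jsonl", encoding="utf-8") as f:
    for line in f:
        article = json.loads(line)
        title = article.get("name", "<untitled>")
        abstract = article.get("abstract", "")
        if abstract:
            print(f"{title}: {abstract[:80]}")
```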

[–] r00ty@kbin.life 19 points 4 days ago

The problem is, this assumes that the AI creators who scrape relentlessly (and a fair few do) would, even if they took this data source directly, then add an exception to their scrapers to skip Wikipedia's site. I doubt they would bother.

[–] MCasq_qsaCJ_234@lemmy.zip 9 points 4 days ago

I just feel like OpenAI might accept this and leave the website alone, although it's very unlikely they'll actually do that.

[–] Geodad@lemm.ee 4 points 4 days ago (3 children)

Is there not some way to just blacklist the AI domain or IP range?

[–] Monument@lemmy.sdf.org 13 points 4 days ago

No, because there isn’t a single IP range or user agent, and many developers go to great lengths to defeat anti-scraping measures, including user-agent spoofing as well as VPNs and the like to mask the source of the traffic.
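
To make that concrete, naive blocking looks something like this sketch, and it only catches bots that identify themselves honestly. GPTBot, CCBot, and Bytespider are real self-declared crawler user agents; the exact UA strings below are illustrative.

```python
# Sketch of naive user-agent blocking. Only stops crawlers that
# identify themselves honestly; a spoofed browser string passes.
BLOCKED_AGENTS = ("GPTBot", "CCBot", "Bytespider")  # self-declared crawler UAs

def is_blocked(user_agent: str) -> bool:
    return any(bot in user_agent for bot in BLOCKED_AGENTS)

print(is_blocked("Mozilla/5.0; compatible; GPTBot/1.1; +https://openai.com/gptbot"))  # True
print(is_blocked("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/125.0"))          # False
```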

[–] devfuuu@lemmy.world 10 points 4 days ago* (last edited 4 days ago) (1 children)

If you read the few articles about sites being attacked by AI scrapers in recent months, they all tell the same story: it's not possible. The AI companies deliberately target other sites and work non-stop to actively evade any kind of blocking that might be in place. They rotate IPs regularly, they change user agents, they ignore robots.txt, they spread requests across a bunch of IPs, and if they detect they're being blocked they drop to one request per IP and swap user agents the moment one gets blocked, etc. etc. etc.
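
That one-request-per-IP trick is exactly why per-IP rate limiting is useless against them. Rough sketch of a standard limiter to show the failure mode:

```python
import time
from collections import defaultdict

# Sketch of a per-IP sliding-window rate limiter: at most LIMIT
# requests per IP within WINDOW seconds. A scraper that spreads
# its load so each IP makes a single request never trips it.
LIMIT, WINDOW = 10, 60.0
hits = defaultdict(list)

def allow(ip: str) -> bool:
    now = time.monotonic()
    hits[ip] = [t for t in hits[ip] if now - t < WINDOW]
    if len(hits[ip]) >= LIMIT:
        return False
    hits[ip].append(now)
    return True

# 10,000 requests from 10,000 distinct IPs: every single one is allowed.
print(all(allow(f"10.0.{i // 256}.{i % 256}") for i in range(10_000)))  # True
```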

[–] baines@lemmy.cafe 2 points 4 days ago (1 children)

whitelists and the end of anonymity

[–] HK65@sopuli.xyz 3 points 3 days ago

Or just decent regulation. You're offering an AI product? You can't attest that it's been trained in a legitimate way?

Into the shadow realm with you.
