this post was submitted on 23 Feb 2024
55 points (88.7% liked)

Fediverse


Given how Reddit now makes money by selling its data to AI companies, I was wondering how the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.
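For reference, a per-server block of that kind is just a text file served at the site root. GPTBot is the user-agent name OpenAI publishes for its crawler; the domain here is illustrative:

```
# https://example.social/robots.txt
# Block OpenAI's crawler entirely; allow everything else.
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```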

So if my kbin/Lemmy or Mastodon server blocks OpenAI's crawler via robots.txt, what does that even mean when people on other servers that don't block this crawler boost me on Mastodon, or when I reply to their posts? I suspect that unless all the servers I interact with block the same AI crawlers, I cannot prevent my posts from being used as AI training data.
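Part of why per-server blocking falls short: federation physically copies posts. When a post is boosted or replied to, it is delivered as an ActivityPub Create activity to remote servers' inboxes, and those copies sit behind robots.txt files you don't control. A simplified sketch of such an activity (the URLs are illustrative, not a real server's):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://myserver.example/users/alice",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "object": {
    "id": "https://myserver.example/notes/1",
    "type": "Note",
    "content": "This note gets stored on every server that receives it."
  }
}
```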

[–] CameronDev@programming.dev 37 points 9 months ago (1 children)

But robots.txt is not a legal document — and 30 years after its creation, it still relies on the good will of all parties involved

You can ask nicely, they can (and will) ignore it.
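To illustrate the "asking nicely" point: robots.txt compliance happens entirely on the crawler's side. Python's standard urllib.robotparser shows what an obedient crawler does before fetching a page; a scraper that wants the data can simply skip this check (the rules and URLs below are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks only OpenAI's GPTBot.
rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
])

# A well-behaved crawler asks permission before each fetch.
print(rp.can_fetch("GPTBot", "https://example.social/@alice/1"))       # False
print(rp.can_fetch("ResearchBot", "https://example.social/@alice/1"))  # True
```

Nothing in the protocol enforces the answer; a crawler that never calls the check (or lies about its user-agent) sees no difference at all.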

[–] sukhmel@programming.dev 13 points 9 months ago

Also, I've already seen complaints about AI companies scraping everything while ignoring robots.txt

And we would block the obedient, useful crawlers while doing no harm to the malicious ones