this post was submitted on 23 Feb 2024
55 points (88.7% liked)

Fediverse


Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.
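For context, blocking AI crawlers this way just means listing their user-agent tokens in the robots.txt your server serves. A minimal sketch using a few publicly documented crawler tokens (GPTBot is OpenAI's, CCBot is Common Crawl's, Google-Extended covers Google's AI training; which ones you list is up to you):

```
# robots.txt — served at https://your-instance.example/robots.txt
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note this is purely advisory: it only affects crawlers that choose to honor it, and only for the domain serving the file.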

So if my kbin/Lemmy or Mastodon server blocks OpenAI's crawler via robots.txt, what does that actually achieve when people on other servers that don't block the crawler boost my posts on Mastodon, or when I reply to theirs? I suspect that unless every server I interact with blocks the same AI crawlers, I can't prevent my posts from being used as AI training data?
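This is exactly how robots.txt works: a crawler consults each host's robots.txt independently, so a federated copy of your post is governed by the copy's host, not by yours. A minimal sketch with Python's stdlib `urllib.robotparser` (the hostnames and paths are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents for two fediverse instances:
# "home.example" blocks OpenAI's GPTBot, "other.example" has no rules.
home_rules = RobotFileParser()
home_rules.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

other_rules = RobotFileParser()
other_rules.parse([])  # empty robots.txt: everything is allowed

post_url = "https://home.example/post/123"
federated_copy = "https://other.example/post/123"

# The same post, checked against each host's own rules:
print(home_rules.can_fetch("GPTBot", post_url))         # False: blocked at home
print(other_rules.can_fetch("GPTBot", federated_copy))  # True: the copy is crawlable
```

In other words, your instance's robots.txt never enters the picture when the crawler fetches the copy on the other server.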

you are viewing a single comment's thread
[–] mozz@mbin.grits.dev 16 points 9 months ago (12 children)

You are correct. Some of the largest instances block bot traffic, but most don't, so your posts have almost certainly been seen by AI crawlers and will continue to be.

Short of not participating in federation and only discussing things within a private non-federated community on a personal instance or something, I don't think there's a way to prevent it.

[–] cecep@fedia.io 5 points 9 months ago (10 children)

Thanks for confirming. It's unfortunate that people who are outraged about Reddit selling their data to AI companies don't really have an alternative in the fediverse.

I guess the best hope is for new mechanisms to control AI crawlers to emerge, so they can be blocked per user rather than per domain. Maybe https://spawning.ai will come up with something. One can hope.

[–] FaceDeer@kbin.social 12 points 9 months ago

I really don't see how it would be physically possible to do that and still allow the content to be publicly seen by other humans.
