this post was submitted on 04 Jul 2025
110 points (95.1% liked)

Every time I check my nginx logs, it's more scrapers than I can count, and I couldn't find any good open source solutions.
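For anyone wanting a starting point, here is a minimal sketch (not a full solution) that tallies the user agents in an nginx access log, so you can see which crawlers dominate before deciding what to block. It assumes the default "combined" log format, and the log path is an assumption; adjust both for your setup.

```python
# Tally user agents in an nginx access log to spot dominant crawlers.
# Assumes the default "combined" format, where the user agent is the
# last double-quoted field on each line.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumption: default location

UA_RE = re.compile(r'"([^"]*)"\s*$')  # last quoted field = user agent

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = UA_RE.search(line)
        if match:
            counts[match.group(1)] += 1

# Print the 20 most frequent user agents with their hit counts.
for agent, hits in counts.most_common(20):
    print(f"{hits:8d}  {agent}")
```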

[–] Fedditor385@lemmy.world 5 points 1 week ago (9 children)

I just realized an interesting thing: if I use Gemini and tell it to do deep research, it actually goes to the websites it knows or finds and looks up the content to provide up-to-date answers. So some of those AI crawlers are actually not crawlers, but real users who just use AI instead of coming to the site directly.

So blocking AI completely could also reduce your exposure, especially as more and more people use AI to do their searching instead of browsing themselves. That would also explain the volume of daily requests: it could simply be different users asking an AI to research some topic.

Point is, you should evaluate whether the AI requests are just proxies for real users; if they are, blocking AI also blocks real users from ever finding out your site exists.
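One rough way to do that evaluation: several vendors publish distinct user agents for bulk training crawlers versus fetches triggered by a live user's question. Below is a minimal Python sketch of that triage; the user agent substrings are illustrative and go stale quickly, so verify them against each vendor's current documentation before relying on them.

```python
# Rough triage of AI-related user agents: bulk training crawlers vs.
# fetches made because a human asked an assistant a question.
# NOTE: these substring lists are illustrative assumptions -- check
# each vendor's docs for the current strings.
TRAINING_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")
USER_TRIGGERED = ("ChatGPT-User", "Claude-User", "Perplexity-User")

def classify(user_agent: str) -> str:
    """Return a rough category for an AI-related user agent string."""
    if any(tag in user_agent for tag in USER_TRIGGERED):
        return "user-triggered fetch"   # likely a real person behind it
    if any(tag in user_agent for tag in TRAINING_CRAWLERS):
        return "training crawler"       # systematic scraping
    return "unknown / regular traffic"

# Example: a fetch made on behalf of a live user.
print(classify("Mozilla/5.0 AppleWebKit/537.36; ChatGPT-User/1.0"))
```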

[–] rumba@lemmy.zip 5 points 1 week ago (1 children)

Why not both?

There is no functional difference between them scraping you systematically and them coming to you on behalf of a user. They're coming to scrape you either way; being asked by someone just means they do it in a smarter fashion.

Also, even if you're not using Gemini, Google.com will run your search through it anyway. They badly want these AIs trained; sooner or later almost all searching will be done through AI, and eventually there will be no other option.

You are correct that blocking all AI calls will eventually stop your site from showing up in search results.

So if you want organic traffic, you have to allow AI scraping eventually; you'll just get diminishing returns past a certain point.
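A sketch of that middle ground, assuming you can hook request handling in whatever fronts your app: deny agents treated as bulk training crawlers while letting search indexers and user-triggered fetches through, so organic discovery keeps working. Both agent lists below are assumptions you'd maintain yourself from vendor documentation.

```python
# Selective blocking: deny bulk training crawlers, allow search
# indexers and user-triggered AI fetches. Lists are assumptions;
# keep them current from vendor docs.
DENY = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")
ALLOW = ("Googlebot", "bingbot", "OAI-SearchBot", "ChatGPT-User")

def should_block(user_agent: str) -> bool:
    """Return True only for agents treated as bulk training crawlers."""
    if any(tag in user_agent for tag in ALLOW):
        return False
    return any(tag in user_agent for tag in DENY)

# Wire this into your reverse proxy or app middleware,
# e.g. respond with HTTP 403 whenever it returns True.
```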

[–] jjlinux@lemmy.ml 3 points 1 week ago

That is absolutely correct. I don't want ANY AI on my servers looking for anything, regardless of whether they are crawlers or acting on behalf of some lazy fuck.
