Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
I had to get my glasses to re-read this comment.
You know why Anubis is in place on so many sites, right? You are literally blaming the victims for the absolute bullshit AI is foisting on us all.
Yes, I manage Cloudflare for a massive site that at times gets hit with millions of unique bot visits per hour.
So you know that this is the lesser of the two evils? Seems like you're viewing it from the client's perspective only.
No one wants to burden clients with Anubis, and Anubis shouldn't exist. We are all (server operators and users) stuck with this solution for now because there is nothing else at the moment that keeps these scrapers at bay.
Even the author of Anubis doesn't like the way it works. We all know it's just more wasted computing for no reason, except that big tech doesn't care about anyone.
My point is, and the author's point is, it's not computation that's keeping the bots away right now. It's the obscurity and challenge itself getting in the way.
I don't think so. I think he's blaming the "solution" as being a stopgap at best and painful for end users at worst. Yes, the AI crawlers have caused the issue, but I'm not sure this is a great final solution.
As the article discussed, this is essentially an "expensive" math problem meant to deter AI crawlers, but in the end it ain't really that expensive. It's more like they put two door handles on a door hoping the bots are too lazy to turn both of them, while also severely slowing down all one-handed people. I'm not sure it will ever be feasible to have one bot determine whether the other end is also a bot without human interaction.
It works because it's a bit of obscurity, not because it's expensive. Once it's a big enough problem for the scrapers, they will adapt, and then the only option is to make it more obscure/different or to crank up the difficulty, which will slow down genuine users much more.
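For anyone wondering what the "expensive math problem" actually looks like, here is a minimal Python sketch of a hashcash-style proof of work, which is roughly the idea Anubis-style interstitials rely on. The function names, challenge format, and difficulty values are illustrative assumptions, not Anubis's real implementation. It also shows the trade-off from the comments above: each extra leading zero bit roughly doubles the expected work, so cranking the difficulty to outlast scrapers also doubles the wait for legitimate visitors.

```python
import hashlib
import os
import time

def solve_challenge(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge || nonce) has
    `difficulty_bits` leading zero bits (hashcash-style proof of work)."""
    # Any hash value below this target has the required number of leading zero bits.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

if __name__ == "__main__":
    challenge = os.urandom(16)  # stand-in for a server-issued random challenge
    for bits in (12, 16, 20):   # each extra bit roughly doubles the expected work
        start = time.time()
        solve_challenge(challenge, bits)
        print(f"{bits} leading zero bits: {time.time() - start:.2f}s")
```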