this post was submitted on 21 Aug 2025
185 points (89.0% liked)


Some thoughts on how useful Anubis really is. Combined with comments I read elsewhere about scrapers starting to solve the challenges, I'm afraid Anubis will soon be outdated and we'll need something else.

top 50 comments
[–] Klear@quokk.au 6 points 8 hours ago* (last edited 8 hours ago)

If that sounds familiar, it’s because it’s similar to how bitcoin mining works. Anubis is not literally mining cryptocurrency, but it is similar in concept to other projects that do exactly that

Did the author only now discover cryptography? It's like a cryptocurrency, just without currency, what a concept!

[–] Dremor@lemmy.world 9 points 11 hours ago (1 children)

Anubis isn't a challenge like a captcha. Anubis is a resource waster, forcing crawlers to solve a crypto challenge (basically like mining bitcoin) before being allowed in. That's how it defends so well against bots: they don't want to waste their resources on needless computing, so they just cancel the page load before it even happens and go crawl elsewhere.
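To picture what that challenge looks like: it's roughly a hashcash-style proof of work. Here is a minimal Python sketch of the general idea (illustrative only, not Anubis's exact protocol; the difficulty value and string format are assumptions):

```python
import hashlib
import secrets

def make_challenge() -> str:
    # The server hands the browser a random challenge string.
    return secrets.token_hex(16)

def solve(challenge: str, difficulty: int = 4) -> int:
    # The client brute-forces a nonce until the hash starts with
    # `difficulty` hex zeroes -- costly to find, cheap to verify.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    # The server needs only a single hash to check the answer.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

challenge = make_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
```

The asymmetry is the whole trick: the client grinds through thousands of hashes on average, while the server verifies with one.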

[–] tofu@lemmy.nocturnal.garden 6 points 11 hours ago (1 children)

No, it works because the scraper bots don't have it implemented yet. Of course the companies would rather not spend additional compute resources, but their pockets are deep, and some have already adapted and solve the challenges.

[–] Dremor@lemmy.world 9 points 10 hours ago (2 children)

Whether they solve it or not doesn't change the fact that they have to use more resources for crawling, which is the objective here. Meanwhile, the website sees a lot less load compared to before it used Anubis. Either way, I see it as a win.

But despite that, it has its detractors, like any solution that becomes popular.

But let's be honest, what are the arguments against it?
It takes a bit longer to access the site the first time? Sure, but it's not like you have to click or type anything.
It executes foreign code on your machine? Literally 90% of the web does that these days. Just disable JavaScript and see how many websites are still functional. I'd be surprised if even a handful are.

The only ones who gain anything from not having Anubis are web crawlers, be they AI bots, indexing bots, or script kiddies looking for a vulnerable target.

[–] daniskarma@lemmy.dbzer0.com 1 points 9 hours ago* (last edited 9 hours ago) (1 children)

I'm against it for several reasons. It runs unauthorized heavy-duty code on your end: it's not JS needed to make the site functional, it's heavy computation, unprompted. If they added a simple "click to run challenge" button it would at least be more polite and less "malware-like".

On some old devices the challenge lasts over 30 seconds; I can type a captcha in less time than that.

It locks several sites that people (like the article author) tend to browse directly from a terminal behind the need for a full browser.

It's a delusion. As the article author shows, solving the PoW challenge is not that much of an added cost. The reduction in traffic would be the same with any other novel method; crawlers are just not prepared for it yet, and any prepared crawler would have no issues whatsoever. People are seeing results because of obscurity, not because it really works as advertised. In fact, I believe some sites are starting to get crawled aggressively despite Anubis, as some crawlers are already catching up with the trend.

Take into account that the challenge needs to be light enough that a legitimate user can enter the website within a few seconds while running it in a browser engine (very inefficient). A crawler interested in your site could easily mine the PoW with CUDA on a GPU, which would be hundreds if not thousands of times more efficient. So the balance of difficulty (still browsable for users, but costly to crawl) is not achievable.
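To make that asymmetry concrete, here is a hypothetical benchmark (the numbers are machine-dependent and the challenge string is a placeholder): even plain CPython grinding the hashcash-style puzzle sketched earlier skips all the browser-engine overhead, and a C, SIMD, or CUDA implementation would be orders of magnitude faster still.

```python
import hashlib
import time

challenge = b"example-challenge"  # placeholder; the real value comes from the server

# Count how many candidate nonces a plain Python loop can hash in one second.
start = time.perf_counter()
n = 0
while time.perf_counter() - start < 1.0:
    hashlib.sha256(challenge + str(n).encode()).hexdigest()
    n += 1
print(f"{n} hashes/second natively, before any GPU tricks")
```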

It's not universally applicable. Imagine if the whole internet were behind PoW challenges: it would be like constant Bitcoin mining, a total waste of resources.

The company behind Anubis seems shadier to me every day. They feed on anti-AI paranoia, they didn't even answer the article author's valid criticisms when he emailed them, and they use obvious PR language aimed at convincing and pleasing certain demographics in order to place their product. They are full of slogans but lack substance. I just don't trust them.

[–] Dremor@lemmy.world 2 points 8 hours ago* (last edited 8 hours ago)

Fair point. I do agree with the "click to execute challenge" approach.

As for terminal browsers, that has more to do with them not respecting web standards than with Anubis not working on them.

As for old hardware, I agree that a simple delay could be a good idea if it weren't so easy to circumvent. Bots would just wait in the background and resume once the timer elapses, which would vastly decrease Anubis's effectiveness, since waiting costs them hardly any power. There isn't really much that can be done here.

As for the CUDA solution, that will depend on the hash algorithm implemented. Some of them (like the one used by Monero) are designed to be vastly less efficient on a GPU than on a CPU. Moreover, GPU servers are far more expensive to run than CPU ones, so the result would be the same: crawling would be more expensive.
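As an illustration of that idea (not something Anubis actually ships), the puzzle hash could be swapped for a memory-hard function such as scrypt or Argon2. A rough Python sketch, with illustrative cost parameters:

```python
import hashlib

def memory_hard_hash(challenge: bytes, nonce: int) -> bytes:
    # scrypt is deliberately memory-hungry (~16 MiB per hash with these
    # parameters), which narrows the gap between a GPU farm and a laptop CPU.
    return hashlib.scrypt(
        str(nonce).encode(),
        salt=challenge,
        n=2**14, r=8, p=1,  # CPU/memory cost, block size, parallelism
        dklen=32,
    )

def solve(challenge: bytes, difficulty_bits: int = 8) -> int:
    # Same brute-force loop as before, just with a hash that is much harder
    # to parallelize cheaply on a GPU.
    nonce = 0
    while True:
        digest = memory_hard_hash(challenge, nonce)
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce
        nonce += 1
```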

In any case, the best solution by far would be to make respecting robots.txt a legal requirement, but for now legislators prefer to look the other way.
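(For reference, this is the kind of directive that currently has no legal teeth: a robots.txt politely asking an AI crawler to stay out, which well-behaved bots honor and aggressive ones simply ignore. GPTBot is a real crawler token; the rest is illustrative.)

```
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```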

[–] tofu@lemmy.nocturnal.garden 1 points 10 hours ago

Sure, I'm not arguing against Anubis! I just don't think the added compute cost is sufficient to keep them out once they adjust.

[–] rtxn@lemmy.world 29 points 17 hours ago (1 children)

New developments: just a few hours before I post this comment, The Register posted an article about AI crawler traffic. https://www.theregister.com/2025/08/21/ai_crawler_traffic/

Anubis' developer was interviewed and they posted the responses on their website: https://xeiaso.net/notes/2025/el-reg-responses/

In particular:

Fastly's claims that 80% of bot traffic is now AI crawlers

In some cases for open source projects, we've seen upwards of 95% of traffic being AI crawlers. For one, deploying Anubis almost instantly caused server load to crater by so much that it made them think they accidentally took their site offline. One of my customers had their power bills drop by a significant fraction after deploying Anubis. It's nuts.

So, yeah. If we believe Xe, OOP's article is complete hogwash.

[–] tofu@lemmy.nocturnal.garden 9 points 14 hours ago

Cool article, thanks for linking! Not sure that counts as a new development though; it's just results, and we already knew it's working. The question is, what's going to work once the scrapers adapt?

[–] possiblylinux127@lemmy.zip 11 points 15 hours ago* (last edited 13 hours ago) (2 children)

Anubis sucks

However, the number of viable options is limited.

[–] seralth@lemmy.world 12 points 13 hours ago

Yeah but at least Anubis is cute.

I'll take sucks but cute over dead internet and endless swarmings of zergling crawlers.

[–] CommanderCloon@lemmy.ml 1 points 8 hours ago (1 children)
[–] possiblylinux127@lemmy.zip 1 points 3 hours ago

The implementation

It runs JavaScript and the actual algorithm could use improvement.

[–] cupcakezealot@piefed.blahaj.zone 2 points 10 hours ago

because anime catgirls are the best

[–] CrackedLinuxISO@lemmy.dbzer0.com 10 points 21 hours ago* (last edited 21 hours ago) (1 children)

There are some sites where Anubis won't let me through. Like, I just get immediately bounced.

So RIP dwarf fortress forums. I liked you.

[–] sem@lemmy.blahaj.zone 10 points 19 hours ago (1 children)

I don't get it, I thought it allows all browsers with JavaScript enabled.

[–] SL3wvmnas@discuss.tchncs.de 4 points 12 hours ago

I, too, get blocked by certain sites. I think it's a configuration thing where it doesn't like my combination of uBlock/NoScript, even when I explicitly allow their scripts...

[–] unexposedhazard@discuss.tchncs.de 63 points 1 day ago (1 children)

This… makes no sense to me. Almost by definition, an AI vendor will have a datacenter full of compute capacity.

Well it doesn't fucking matter what "makes sense to you" because it is working...
It's being deployed by people who had their sites DDoS'd to shit by crawlers, and they are very happy with the results, so what even is the point of trying to argue here?

[–] daniskarma@lemmy.dbzer0.com 9 points 14 hours ago* (last edited 14 hours ago)

It's working because it's not widely used. It's sort of a "pirate seagull" theory: as long as only a few people use it, it works, because scrapers don't really expect Anubis and so don't implement systems to get past it.

If it were to become more common, it would be really easy to implement systems that defeat its purpose.

Right now sites are OK because scrapers just send HTTP requests and expect a full response. Anyone who wants to bypass Anubis's protection would need to account for receiving a cryptographic challenge and solve it.

The thing is that cryptographic challenges can be heavily optimized. They are designed to run in a very inefficient environment, the browser. If someone took the challenge and solved it in a better environment, using CUDA or something like that, it would take a fraction of the energy, defeating the purpose of "being so costly that it's not worth scraping".

At this point it's only a matter of time before we start seeing scrapers like that, especially if more and more sites start using Anubis.

[–] rtxn@lemmy.world 188 points 1 day ago* (last edited 1 day ago) (4 children)

The current version of Anubis was made as a quick "good enough" solution to an emergency. The article is very enthusiastic about explaining why it shouldn't work, but completely glosses over the fact that it has worked, at least to an extent where deploying it and maybe inconveniencing some users is preferable to having the entire web server choked out by a flood of indiscriminate scraper requests.

The purpose is to reduce the flood to a manageable level, not to block every single scraper request.

[–] 0_o7@lemmy.dbzer0.com 16 points 15 hours ago

The article is very enthusiastic about explaining why it shouldn't work, but completely glosses over the fact that it has worked

This post was originally written for ycombinator "Hacker" News, which is vehemently against people hacking things together for the greater good and, more importantly, for free.

It's more of a corporate PR release site and if you aren't known by the "community", calling out solutions they can't profit off of brings all the tech-bros to the yard for engagement.

[–] loudwhisper@infosec.pub 3 points 13 hours ago

Exactly my thoughts too. Lots of theory about why it won't work, but no attention to the fact that if people use it, maybe it does work, and when it stops working, they will stop using it.

[–] poVoq@slrpnk.net 89 points 1 day ago* (last edited 1 day ago) (34 children)

And it was/is for sure the lesser evil compared to what most others did: put the site behind Cloudflare.

I feel like people who complain about Anubis have never had their server overheat and shut down almost daily because of AI scrapers 🤦

[–] AnUnusualRelic@lemmy.world 19 points 1 day ago (1 children)

The problem is that the purpose of Anubis was to make crawling more computationally expensive and that crawlers are apparently increasingly prepared to accept that additional cost. One option would be to pile some required cycles on top of what's currently asked, but it's a balancing act before it starts to really be an annoyance for the meat popsicle users.

[–] rtxn@lemmy.world 23 points 1 day ago

That's why the developer is working on a better detection mechanism. https://xeiaso.net/blog/2025/avoiding-becoming-peg-dependency/

[–] VitabytesDev@feddit.nl 6 points 22 hours ago

I love that domain name.

I'm constantly unable to access Anubis sites on my primary mobile browser and have to switch over to Fennec.
