this post was submitted on 21 Aug 2025
206 points (89.3% liked)


Some thoughts on how useful Anubis really is. Combined with comments I've read elsewhere about scrapers starting to solve the challenges, I'm afraid Anubis will soon be obsolete and we'll need something else.

[–] rtxn@lemmy.world 6 points 1 day ago* (last edited 1 day ago) (1 children)

It's not client-side, because validation happens on the server. The content won't be displayed until and unless the server receives a valid response, and the challenge is formulated so that calculating a valid answer always takes a long time. It can't be spoofed, because the server will know the answer is bullshit: in my example, the server will know that the prime factors returned by the client are wrong because their product won't equal the original semiprime. Delegating to a sub-process won't work either, because what's the parent process supposed to do? Move on to another piece of content that is also protected by Anubis?

The point is to waste the client's time and thus reduce the number of requests the server has to handle, not to prevent scraping altogether.
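
A minimal sketch of that factoring challenge in Python, for the curious. To be clear, this is the hypothetical example from the comment above, not Anubis's actual scheme, and the function names and primes are made up for illustration. The thing to notice is the asymmetry: validation is a single multiplication, while solving takes trial division.

```python
# Toy sketch of the semiprime challenge described above. Hypothetical
# example, not Anubis's real protocol; names and primes are illustrative.

def make_challenge(p: int, q: int) -> int:
    # Server side: compute n = p * q and send only n to the client.
    return p * q

def solve_challenge(n: int) -> tuple[int, int]:
    # Client side: recover the factors by trial division (slow on purpose).
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("not a semiprime")

def validate(n: int, p: int, q: int) -> bool:
    # Server side: one multiplication. A spoofed answer fails here.
    return p > 1 and q > 1 and p * q == n

n = make_challenge(104_723, 104_729)  # two known primes, for the demo
p, q = solve_challenge(n)             # the expensive part (client)
assert validate(n, p, q)              # the cheap part (server)
```

Scaling the primes up makes solve_challenge arbitrarily slow while validate stays a single multiplication.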

[–] GuillaumeRossolini@infosec.exchange -2 points 1 day ago (1 children)

@rtxn validation of what?

This is a typical network exchange: the client asks for a resource, the server says here's a challenge, and the client responds or doesn't, with the correct answer or not, but it has the challenge regardless

[–] rtxn@lemmy.world 4 points 1 day ago (1 children)

THEN (and this is the part you don't seem to understand) the client process has to waste time solving the challenge (which, by the way, costs the server orders of magnitude less to issue and check than serving the actual meaningful content) or cancel the request. If a new request is sent during that time, it will still have to waste time solving the challenge. The scraper will get through eventually, but the challenge delays the response and reduces the load on the server, because while the scrapers are busy computing, the server doesn't have to serve meaningful content to them.
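
For the record, Anubis's real challenge is a SHA-256 proof-of-work rather than factoring. A Hashcash-style sketch of the same asymmetry follows; the encoding and difficulty here are simplified assumptions, not its exact protocol. The client grinds through hashes while the server checks the answer with a single one.

```python
# Hashcash-style sketch of the asymmetry described above. Anubis uses a
# SHA-256 proof-of-work of roughly this shape; the exact encoding and
# difficulty here are simplified assumptions.
import hashlib
import itertools

DIFFICULTY = 5  # required leading hex zeros; each extra zero is ~16x more work

def solve(challenge: str) -> int:
    # Client side: brute-force a nonce (about 16**5 ~= 1M hashes on average).
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    # Server side: a single hash; a made-up nonce is rejected immediately.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = solve("per-request-server-token")         # seconds of client CPU
assert verify("per-request-server-token", nonce)  # microseconds on the server
```

Every fresh request gets a fresh token, so a scraper pays the full cost again each time.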

[–] GuillaumeRossolini@infosec.exchange -5 points 1 day ago (1 children)

@rtxn all right, that's all you had to say initially, rather than trying to convince me that the network client was out of the loop: it isn't, and that's the whole point of Anubis

[–] rtxn@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

Given how much authority you wrote with before, I thought you'd be able to grasp the concept. I'm sorry I assumed better.