Yeah, well-written stuff. I think Anubis will come and go. This beautifully demonstrates and, best of all, quantifies the ~~negligence~~ negligible cost Anubis imposes on scrapers.
It's very interesting to try to think of what would actually work, even conceptually. Some sort of purely client-side captcha, perhaps. I keep turning it over in half-assed ways for minutes at a time.
Maybe something that scrambles the characters of the site according to some random "offset": e.g. randomly pick a modulus and an offset and cycle the characters, or even just use a good ol' cipher. The "captcha" is then a slider that adjusts the offset. You, the viewer, know it's solved when the text becomes something sensible, so the client code never has to hold a readable key that could be used to auto-undo the scrambling. You could even have a few slider positions deliberately produce plausible English text, in case scrapers get smart enough to check for legibility (not sure how you'd hide which positions are the red herrings, though), which might be enough to trick a scraper into picking up junk text sometimes.
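Very roughly, something like this toy sketch in TypeScript (the alphabet, the slider wiring, and every name here are made up for illustration; a real version would have to cope with markup, punctuation, and so on):

```typescript
// Hypothetical sketch: cyclic (Caesar-style) shift over a fixed alphabet.
const ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ";

// Cyclically shift every character of `text` that appears in ALPHABET;
// characters outside the alphabet pass through unchanged.
function shiftText(text: string, offset: number): string {
  const n = ALPHABET.length;
  return [...text]
    .map((ch) => {
      const i = ALPHABET.indexOf(ch);
      return i === -1 ? ch : ALPHABET[(((i + offset) % n) + n) % n];
    })
    .join("");
}

// "Server" side (conceptually): pick a secret offset and ship only the
// scrambled text. The offset itself never reaches the client.
const secretOffset = Math.floor(Math.random() * ALPHABET.length);
const scrambled = shiftText(
  "The quick brown fox jumps over the lazy dog",
  secretOffset,
);

// Client side: the slider just re-shifts by its current value. A human
// drags it until the output reads as real text; nothing in the page
// records which position is the "correct" one.
function onSliderChange(sliderValue: number): string {
  return shiftText(scrambled, sliderValue);
}

// The reader stops when the output looks like English, i.e. at
// sliderValue === (ALPHABET.length - secretOffset) % ALPHABET.length.
console.log(onSliderChange((ALPHABET.length - secretOffset) % ALPHABET.length));
```

The obvious weakness is that with a small alphabet a scraper can just brute-force every slider position and keep whichever output scores best on a dictionary or language-model check, which is exactly why those red-herring positions that also decode to plausible English would matter.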