Very interesting read indeed. I have a question about it: the article is about defeating malicious crawlers/bots hitting a Tor hidden service, so how might the author differentiate bot requests from standard client requests on a request-by-request basis? Can I assume that many kinds of requests arrive at the hidden service through shared/common relays, so there is no meaningful client IP to go on? Would that make other fingerprinting methods (user agent, etc.) important, and if so, what options remain for the author if the attackers dynamically change/randomise their fingerprint on a per-request basis?
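For concreteness, the sort of per-request check I'm imagining is something like the sketch below (a rough hypothetical Python heuristic with made-up names and a made-up threshold, not anything the author describes): score each request by whether its headers are internally consistent with the browser it claims to be.

    # Hypothetical heuristic sketch (not the article's method): score one
    # request's headers for internal consistency. A bot that randomises its
    # User-Agent per request often fails to randomise the rest of the header
    # set to match, so mismatches are a usable per-request signal even when
    # there is no client IP to work with.

    SUSPICIOUS_THRESHOLD = 2  # assumed cut-off, would need tuning on real traffic

    def fingerprint_score(headers: dict) -> int:
        """Return a suspicion score for a single request; higher = more bot-like."""
        ua = headers.get("User-Agent", "")
        score = 0

        # A missing User-Agent is unusual for a real browser.
        if not ua:
            score += 2

        # Real browsers almost always send these; many naive crawlers do not.
        for expected in ("Accept", "Accept-Language", "Accept-Encoding"):
            if expected not in headers:
                score += 1

        # A UA claiming Chrome/Firefox combined with the bare Accept header
        # typical of script HTTP libraries is a mismatch worth flagging.
        if ("Chrome" in ua or "Firefox" in ua) and headers.get("Accept") == "*/*":
            score += 2

        return score

    def looks_like_bot(headers: dict) -> bool:
        return fingerprint_score(headers) >= SUSPICIOUS_THRESHOLD

Of course, if the attacker randomises the whole header set coherently on every request, even this kind of consistency check falls apart, which is really the heart of my question.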
And the follow-up: http://www.hackerfactor.com/blog/index.php?/archives/763-The...