- Search Engines
-- Search Engine Spider and User Agent Identification
- 11:18 pm on Dec 4, 2012
No probs, I agree. That's why there are simple ways to block crawlers from crawling - tell them where you don't allow them to go. That's how you block google, right?
FWIW, you have your terminology confused.
A rule in robots.txt is exactly that: a request, honored only by bots that choose to comply.
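For illustration, a minimal robots.txt entry looks like this ("BadBot" is a hypothetical crawler name, not any specific bot):

```
# robots.txt -- this is a request, not an enforcement mechanism;
# only crawlers that honor the Robots Exclusion Protocol will obey it
User-agent: BadBot
Disallow: /
```

A non-compliant bot can simply ignore this file and crawl anyway, which is the distinction being made here.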
"Blocking access" (aka denial of access), whether a bot, or any other type of visitor, is a server action, of which the visitor has no choice.
dstiles offered the following, which you apparently overlooked.
many of us block all but the basic googlebot
What's all this pretension comparing yourself to Google?
Why not just deny the 176 Class A, and be done with it ;)