| 5:40 pm on Nov 28, 2004 (gmt 0)|
don't assume that the robots.txt you see is the same one served to spiders... that is not always the case.
As for blocking msn, it has been known to spider excessively for no obvious benefit to the webmaster (while consuming the webmaster's bandwidth allocation), so some sites block it.
| 6:06 pm on Nov 28, 2004 (gmt 0)|
<<don't assume that the robots.txt you see is the same one served to spiders... that is not always the case.>>
By "the one that you see" did you mean "the one that your browser might get"? Browsers don't request robots.txt on their own at all.
What type of server gives up different versions of the file for different requests/user-agents/spiders?
| 7:28 pm on Nov 28, 2004 (gmt 0)|
What I want to know is how much good, if any, there is in blocking 'bad' spiders, like some of the ones listed in the robots.txt mentioned above, when the scummy people using such bots can just change the user agent.
Should all of them be listed in the robots.txt file, or is it a moot point?
| 9:25 pm on Nov 28, 2004 (gmt 0)|
> What type of server gives up different versions of the file for different requests/user-agents/spiders?
Mine do. It's one way to cut bandwidth consumed by robots that don't understand multiple-user-agent records. Detect those UAs and serve them a simplified robots.txt with their UA string inserted. A combination of mod_rewrite and some simple cgi scripting on Apache can be used to do this easily.
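The mod_rewrite-plus-CGI approach described above can be sketched roughly as follows. The rewrite rule, bot names, disallowed path, and fallback filename here are all illustrative assumptions, not details from the original post:

```python
#!/usr/bin/env python3
# Hypothetical CGI sketch: serve a simplified, single-record robots.txt
# to robots known not to handle multiple-user-agent records, with the
# bot's own name inserted. Bot names and paths are assumed examples.
#
# Matching .htaccess (assumed):
#   RewriteEngine On
#   RewriteCond %{HTTP_USER_AGENT} (msnbot|slurp) [NC]
#   RewriteRule ^robots\.txt$ /cgi-bin/robots.py [L]
import os

SIMPLE_BOTS = ("msnbot", "slurp")  # assumed examples of such robots

def build_robots(user_agent: str) -> str:
    ua = user_agent.lower()
    for bot in SIMPLE_BOTS:
        if bot in ua:
            # One-record file with the requesting bot's name inserted
            return f"User-agent: {bot}\nDisallow: /cgi-bin/\n"
    # Everyone else gets the full multi-record file (assumed filename)
    with open("robots-full.txt") as f:
        return f.read()

if os.environ.get("GATEWAY_INTERFACE"):  # only when run as a CGI
    print("Content-Type: text/plain\n")
    print(build_robots(os.environ.get("HTTP_USER_AGENT", "")), end="")
```

The simplified file costs a fraction of the bandwidth of a long multi-record robots.txt, which is the point of doing this for heavy crawlers.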
Some "bad" robots are in fact spoofs of legitimate user-agents. In cases where the legitimate robot visits but is considered to be of no practical use to the site owner, it may be Disallowed in robots.txt. It is in fact necessary to take stronger measures for the spoofers, but having the robots.txt disallow helps identify them (because they don't fetch robots.txt, or they ignore the contents of robots.txt even though they do fetch it). So no, it's not entirely a waste of time.
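That identification trick amounts to a simple log check: a client whose user-agent claims to be a disallowed robot, but which never fetched /robots.txt, is a likely spoofer. A minimal sketch, assuming Apache combined log format and "msnbot" as the claimed name (both illustrative):

```python
# Hypothetical log-scan sketch: flag IPs that claim a robot's UA string
# but never requested /robots.txt. Combined-log parsing and the bot
# name "msnbot" are illustrative assumptions.
import re

LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] '        # client IP, ident, user, timestamp
    r'"(?:GET|HEAD) (\S+)[^"]*" '        # request method and path
    r'\d+ \S+ "[^"]*" "([^"]*)"'         # status, bytes, referer, user-agent
)

def find_spoofers(log_lines, bot_name="msnbot"):
    fetched_robots = set()   # IPs that requested /robots.txt
    claimed_bot = {}         # IP -> paths requested under the bot's UA
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, path, ua = m.groups()
        if path == "/robots.txt":
            fetched_robots.add(ip)
        if bot_name.lower() in ua.lower():
            claimed_bot.setdefault(ip, []).append(path)
    # Spoof suspects: claimed the bot's UA but never touched robots.txt
    return {ip: paths for ip, paths in claimed_bot.items()
            if ip not in fetched_robots}
```

The real robot will request robots.txt before crawling; a spoofer using the same UA string from another IP typically goes straight for content pages, which is exactly what this surfaces.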
| 6:11 pm on Nov 30, 2004 (gmt 0)|
check out this
No cache; I wonder how Google is getting to all the pages of WebmasterWorld.
No page of WebmasterWorld has a cache, but it's still getting indexed, maybe through links from outside.