Just yesterday I was reading an interesting discussion from 2001, highly relevant to the current Google changes, which I found through a favourite search engine...
Those who cannot learn from history are doomed to repeat it.
Is that served to everyone? Or just to unrecognised robots / IPs?
If it's to everyone, then I think that is drastic but possibly inspired action.
If it's just to anyone unauthorised, then that would be in line with what has always been done with meta tags, etc, would it not?
We are seeing what effect it will have on unauthorized bots. We spend 5-8 hours a week here fighting them. It is the biggest problem we have ever faced.
We have pushed the limits of page delivery and banning - IP based, agent based - to keep out the rogue bots, but it is becoming an increasingly difficult problem to control.
Also - everyone will have to log in to access the site, starting now.
A solution is being tested and worked on. It will probably take at least 60 days for the old pages to be purged from the engines.
Does robots.txt not get parsed in sequence - i.e. you do the inverse of what you did previously - allow the crawlers you specifically want at the top of the file, and the last line in the file is ban everything?
That is part of it and part of the testing. I have found that the majority DO obey robots.txt. However, most of them use weird agent names or browser agent names. The majority certainly do not support cookies.
Agreed, tiger, but 12 million page views by rogue bots last week while we were away at the conference caused a change in that timeline.
I can certainly agree that you have a major problem on your hands. However, I don't quite understand the logic of banning all bots.
I would suggest that you redirect all requests for robots.txt to a script called robots.php or whatever, then look up the IP in a list of known Googlebot IPs and feed Googlebot your usual robots.txt and everybody else the ban-all-bots robots.txt (a rough sketch follows at the end of this post).
You could also log all requests for robots.txt to a DB or log file and go back and analyze which bots are following the disallow directives.
I'm doing something similar on my site and it's working pretty well.
Maybe you are already doing something along those lines, since you mentioned "cloaking".
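A minimal sketch of what that robots.php idea could look like, assuming a hypothetical Googlebot IP prefix list and log file path (the names, prefixes, and paths are illustrative, not anybody's actual setup):

    <?php
    // robots.php - serve the usual robots.txt to known good bots,
    // a ban-all version to everyone else, and log every request.
    $ip    = $_SERVER['REMOTE_ADDR'];
    $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

    // Hypothetical list of known Googlebot IP prefixes - verify against real ranges.
    $good_prefixes = array('66.249.');

    $is_good = false;
    foreach ($good_prefixes as $prefix) {
        if (strpos($ip, $prefix) === 0) {
            $is_good = true;
            break;
        }
    }

    // Log the request so obedient vs. rogue bots can be compared later.
    error_log(date('c') . " $ip $agent\n", 3, '/var/log/robots_requests.log');

    header('Content-Type: text/plain');
    if ($is_good) {
        echo "User-agent: *\nDisallow: /admin/\n";  // the usual robots.txt
    } else {
        echo "User-agent: *\nDisallow: /\n";        // ban-all version
    }
    ?>

The redirect itself could be a single mod_rewrite rule, e.g. RewriteRule ^robots\.txt$ /robots.php [L], assuming mod_rewrite is available.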
So, we start by banning bots, and then follow immediately with required cookies/logins for everyone. That will stop most of the bots. For the ones it doesn't, we will follow up with session IDs and auto-banning in htaccess for page view abuse. Lastly, we will move to captcha logins, and then random login challenges with other captcha graphics requirements.
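For the htaccess part, a minimal sketch of what the auto-ban rules might look like - the IPs here are placeholders, and the idea is that a script flagging page view abuse would append the Deny lines:

    # .htaccess - deny rules appended for IPs flagged for page view abuse
    Order Allow,Deny
    Allow from all
    Deny from 192.0.2.15
    Deny from 198.51.100.0/24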
> Why in "Foo"?
I hate talking about it at all. It is like talking about security problems in public (given that I believe the majority of bots we see here are owned by members). However, it is better brought up by us than by someone else.
but based on known IP ranges of good bots?
The idea is that a legitimate user will not request more than X number of pages in a specified amount of time. So you limit access to the ones that go over the limit.
You should determine the appropriate X from your stats and make an exception for the known major bot networks.
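A minimal sketch of that per-visitor limit, counted per IP since most rogue bots don't keep cookies - the threshold, window, whitelist prefix, and file paths are all hypothetical:

    <?php
    // ratelimit.php - include at the top of each page; throttles clients
    // that exceed X page views within a time window, counted per IP.
    $limit  = 100;   // hypothetical: max pages per window
    $window = 600;   // window length in seconds (10 minutes)
    $ip     = $_SERVER['REMOTE_ADDR'];

    // Hypothetical whitelist of known major bot networks - exempt from the limit.
    foreach (array('66.249.') as $prefix) {
        if (strpos($ip, $prefix) === 0) {
            return; // known good bot, skip the check
        }
    }

    // One small counter file per IP.
    $file = sys_get_temp_dir() . '/rl_' . md5($ip);
    $data = @file_get_contents($file);
    list($start, $count) = $data ? explode(':', $data) : array(time(), 0);

    if (time() - $start > $window) {   // window expired, start a new one
        $start = time();
        $count = 0;
    }
    $count++;
    file_put_contents($file, $start . ':' . $count);

    if ($count > $limit) {
        header('HTTP/1.1 429 Too Many Requests');
        exit('Page view limit exceeded - please slow down.');
    }
    ?>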
If you changed the robots.txt file to the following syntax (in the order shown below), wouldn't that allow only Googlebot in and keep all the other good bots out? Rogue bots will completely ignore the robots.txt anyway, but at least the site search would still work:
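Something along these lines - the first group gives Googlebot free rein, the catch-all group disallows everything else:

    User-agent: Googlebot
    Disallow:

    User-agent: *
    Disallow: /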