I saw today that Googlebot got caught in a spider trap it should never have reached, as that directory is blocked via robots.txt.
I know of at least one other person this has happened to recently.
Why is Googlebot ignoring robots.txt?
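For what it's worth, the directory is blocked with a plain Disallow rule, something along these lines (/bot-trap/ is just a stand-in here, not the real path):

    # robots.txt -- /bot-trap/ is a placeholder for the actual trap directory
    User-agent: *
    Disallow: /bot-trap/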
1:41 am on Sep 6, 2011 (gmt 0)
Serving the right error codes for both planned and unplanned outages is something that few sites get completely right.
OK, now I'm trying to wrap my brain around the idea of having control over what gets served up during an unplanned, uhm, anything. Is there a definitive thread that explains it? "Error code" doesn't seem to be a fruitful search string ;) (16,600 hits-- constrained to this site-- goes beyond "fruitful" into "rotting on the ground". Squelch.)
7:18 am on Sep 6, 2011 (gmt 0)
Serving a "site temporarily offline for updating" message with "200 OK" with or without 301 redirecting all site URLs to an error page, is a big bad idea.
DNS failure, server meltdown, etc will just timeout and return no website. Serving "can't connect to database" with "200 OK" is asking for trouble; serving 503 is much better. No idea if there is a definitive list.
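As a rough sketch of the planned-outage case (PHP, since that's what most of these CMSs run on), the maintenance page should send the 503 status and a Retry-After header before any other output; the one-hour value and the wording are just examples:

    <?php
    // Sketch only: send a proper 503 instead of "200 OK" for a temporary outage.
    header('HTTP/1.1 503 Service Unavailable');
    header('Retry-After: 3600');  // example value: ask crawlers to come back in an hour
    header('Content-Type: text/html; charset=utf-8');
    echo '<html><body><h1>Site temporarily offline for updating</h1></body></html>';
    exit;

That way Googlebot keeps the old URLs indexed and simply retries later, instead of indexing the maintenance text or dropping pages.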
5:37 pm on Sep 6, 2011 (gmt 0)
@draftzero, that seems to imply that the page is not crawled for search purposes, which is not what the conversation above assumes. If that is really what they are doing, there is no problem.
@g1smd, part of the problem is that some CMSs get it wrong. I think WordPress used to, but it has since been fixed.
@lucy, on another thread you said your site was entirely static HTML, so you have nothing to worry about: I have never come across a web server itself getting it wrong; it's badly written CMSs and scripts that do.
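To illustrate the kind of script-level mistake I mean for the unplanned case: when the database connection fails, the script should send 503 rather than fall through to a normal 200 page with an error message in it. A minimal sketch (the connection details and retry value are made up for illustration):

    <?php
    // Hypothetical connection settings, for illustration only.
    $db = @mysqli_connect('localhost', 'user', 'pass', 'site_db');

    if ($db === false) {
        // The common mistake is printing "can't connect to database" with the default 200.
        http_response_code(503);
        header('Retry-After: 600');
        exit('Database temporarily unavailable.');
    }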
11:12 pm on Sep 7, 2011 (gmt 0)
My code just caught 5 Google IPs. Request headers are: