
Googlebot getting caught in robots.txt spider trap

     
11:14 am on Aug 1, 2011 (gmt 0)

New User

10+ Year Member

joined:May 26, 2005
posts: 35
votes: 0


Hi,

I saw today that Googlebot got caught in a spider trap it shouldn't have reached, as that directory is blocked via robots.txt.

I know of at least one other person this has happened to recently.

Why is Googlebot ignoring robots.txt?
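
One quick sanity check is to run the live robots.txt through a parser and ask whether the trap URL is allowed for Googlebot; a stray character or a rule filed under the wrong User-agent group would explain it. A rough sketch in Python - the /trap/ path and example.com domain are placeholders, not the actual site:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# False means a well-behaved crawler should never request anything under /trap/
print(rp.can_fetch("Googlebot", "https://www.example.com/trap/page.html"))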
1:41 am on Sept 6, 2011 (gmt 0)

Senior Member from US 

lucy24 - WebmasterWorld Senior Member, Top Contributor of All Time, 5+ Year Member, Top Contributor of the Month

joined:Apr 9, 2011
posts:13841
votes: 485


Serving the right error codes for both planned and unplanned outages is something that few sites get completely right.

OK, now I'm trying to wrap my brain around the idea of having control over what gets served up during an unplanned, uhm, anything. Is there a definitive thread that explains it? "Error code" doesn't seem to be a fruitful search string ;) (16,600 hits-- constrained to this site-- goes beyond "fruitful" into "rotting on the ground". Squelch.)
7:18 am on Sept 6, 2011 (gmt 0)

Senior Member

g1smd - WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member, Top Contributor of the Month

joined:July 3, 2002
posts:18903
votes: 0


Serving a "site temporarily offline for updating" message with "200 OK" with or without 301 redirecting all site URLs to an error page, is a big bad idea.

DNS failure, server meltdown, etc will just timeout and return no website. Serving "can't connect to database" with "200 OK" is asking for trouble; serving 503 is much better. No idea if there is a definitive list.
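
If you want to see what a crawler actually gets during maintenance, fetch the page and look at the status line rather than the body. A rough sketch in Python; the URL is a placeholder:

import urllib.request, urllib.error

try:
    resp = urllib.request.urlopen("https://www.example.com/")
    print(resp.status)        # 200 during an outage is the "big bad idea" above
except urllib.error.HTTPError as err:
    print(err.code)           # 503 is what crawlers should see mid-maintenance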
5:37 pm on Sept 6, 2011 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member, 10+ Year Member, Top Contributor of the Month

joined:Nov 16, 2005
posts:2650
votes: 85


@draftzero, that seems to imply that the page is not crawled for search purposes, which is not what the conversation above assumes. If that is really what they are doing, there is no problem.

@g1smd, part of the problem is that some CMSs get it wrong. I think WordPress used to, but it has since been fixed.

@lucy, on another thread you said your site is entirely static HTML, so you have nothing to worry about: I have never come across a web server getting it wrong; it's badly written CMSs and scripts that do.
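
To illustrate the kind of script-level mistake I mean, here is a hypothetical sketch (not any particular CMS's code): the application catches the database error and renders a friendly page, and the only thing separating "fine" from "big bad idea" is the status line it sends with it.

def render_from_database():
    # Stand-in for the CMS's real work; raising simulates the database being down.
    raise ConnectionError("can't connect to database")

def app(environ, start_response):
    try:
        body = render_from_database()
        start_response("200 OK", [("Content-Type", "text/html")])
    except ConnectionError:
        body = b"<h1>Site temporarily unavailable</h1>"
        # The bug is sending "200 OK" here; a 503 with Retry-After is the honest answer.
        start_response("503 Service Unavailable",
                       [("Content-Type", "text/html"), ("Retry-After", "3600")])
    return [body]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8000, app).serve_forever()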
11:12 pm on Sept 7, 2011 (gmt 0)

Preferred Member

10+ Year Member

joined:Dec 12, 2004
posts:624
votes: 4


My code just caught 5 Google IPs. The request headers are:

 User-agent: urlresolver
Host: www.domain.com
Accept-Encoding: gzip


and the IPs are 74.125.42.80/2/3/4/5.

Any idea what "urlresolver" is for? Something like the Facebook URL linter?
3:53 am on Sept 8, 2011 (gmt 0)

Senior Member from US 

lucy24 - WebmasterWorld Senior Member, Top Contributor of All Time, 5+ Year Member, Top Contributor of the Month

joined:Apr 9, 2011
posts:13841
votes: 485


There's a thread about it.

[webmasterworld.com...]