
Old Robots.txt file used by Google - not newer version

     

doughayman

10:28 pm on May 7, 2010 (gmt 0)

5+ Year Member



Hi,

I have a robots.txt file for a domain. The file contains 230 Disallow statements, all of which are syntactically valid. Googlebot routinely reads this file, and WMT indicates that it was processed "successfully".

My problem is that a URL which was disallowed in an old version of robots.txt, several months back, is still being blocked, when in fact I no longer want it blocked.

For whatever reason, it seems that Google is still using this old version of robots.txt, even though I've made many changes to the file over the last month and Google has spidered it since.

Is there a standard period of time that typically needs to elapse before a new version of robots.txt becomes the de facto standard for the site? Is there something I can do to force Google to use the new version?
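In case it helps with the diagnosis, here is a quick sketch I can run to confirm that the robots.txt actually being served right now no longer blocks the URL (the domain, URL, and user-agent string below are just placeholders, not my real ones). It uses Python's standard urllib.robotparser:

import urllib.robotparser

# Placeholder domain and URL - substitute the real ones.
ROBOTS_URL = "http://www.example.com/robots.txt"
TEST_URL = "http://www.example.com/some/previously-blocked-page.html"

parser = urllib.robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetches and parses the robots.txt currently being served

# can_fetch() returns True if the current rules allow Googlebot to crawl TEST_URL
if parser.can_fetch("Googlebot", TEST_URL):
    print("Live robots.txt allows this URL - the block must be on Google's side.")
else:
    print("Live robots.txt still disallows this URL - re-check the Disallow rules.")

If that reports the URL as allowed, then the live file is clean and whatever is blocking it is Google's cached copy or its reporting.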

Thanks in advance !

tedster

12:06 am on May 8, 2010 (gmt 0)

WebmasterWorld Senior Member tedster is a WebmasterWorld Top Contributor of All Time 10+ Year Member



Do you mean that WMT says the URL is blocked by robots.txt? Or do you mean that Google still isn't requesting that URL from your server?

If it's just the first, even though the new version has been spidered, then it may be only a reporting problem. But if googlebot isn't requesting the URL, that's a different situation.

doughayman

12:26 am on May 8, 2010 (gmt 0)

5+ Year Member



Ted, WMT says that the URL is blocked by robots.txt, even though I removed that explicit restriction from robots.txt several months ago.

tedster

12:31 am on May 8, 2010 (gmt 0)

WebmasterWorld Senior Member tedster is a WebmasterWorld Top Contributor of All Time 10+ Year Member



OK. So the next step would be "is this a buggy report?" In other words, is googlebot requesting the URL anyway - and is it indexed?

doughayman

1:23 am on May 8, 2010 (gmt 0)

5+ Year Member



Thanks, Ted. It looks like it is a buggy report for some of the affected URLs (they ARE being spidered and indexed), while others are not being requested by Googlebot at all, even though it has been several months since their Disallow clauses were removed from robots.txt. Once again, Google is extremely hard to figure out, and reliability here is mighty questionable. Thanks for your input. I was wondering if others have had similar issues, and whether they were eventually resolved.