| 8:56 am on Jan 19, 2004 (gmt 0)|
Is your robots.txt syntactically correct? The fault may be yours, not Google's.
| 9:31 am on Jan 19, 2004 (gmt 0)|
I thought about that and had another look. This is what it looks like, and there's really not much you can do wrong in those two lines :?
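For reference, a minimal two-line robots.txt that blocks one folder for all crawlers looks like this (the folder name below is only an illustration; the thread does not show the actual file):

```
User-agent: *
Disallow: /private/
```

Common pitfalls even in a file this short: the file must be at the site root, the field names are `User-agent` and `Disallow`, and a trailing slash on the path matters for matching.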
| 9:36 am on Jan 19, 2004 (gmt 0)|
| 9:56 am on Jan 19, 2004 (gmt 0)|
Thanks, but it validates just fine.
| 10:12 am on Jan 19, 2004 (gmt 0)|
|No errors detected! This Robots.txt validates to the robots exclusion standard! |
So is there any truth to google not obeying the robots.txt file?
This could also explain why our site has lost all relevance on Google, with no top-100 position for any of the relevant keywords we optimized for.
The site is optimized for two different resolutions (800x600 & 1024x768), and hence there is duplicate content, but the duplicated pages/folders are disallowed in the robots.txt file.
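One way to double-check that the Disallow rules actually cover the duplicate folders is Python's standard-library robots.txt parser. The file contents and the /800x600/ and /1024x768/ folder names below are hypothetical, since the thread does not show the real paths:

```python
# Sketch: verify which URLs a well-behaved crawler may fetch,
# using the standard library's robots exclusion parser.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt disallowing the duplicate-resolution folders.
robots_txt = """\
User-agent: *
Disallow: /800x600/
Disallow: /1024x768/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Pages inside the disallowed folders should not be fetchable...
print(parser.can_fetch("Googlebot", "/800x600/index.html"))  # False
# ...while the rest of the site remains crawlable.
print(parser.can_fetch("Googlebot", "/index.html"))          # True
```

Note this only confirms what a compliant crawler *should* do; it cannot tell you whether Google is actually honoring the rules.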
| 10:31 am on Jan 19, 2004 (gmt 0)|
It would appear so, because our disallowed pages are happily indexed.
| 3:37 pm on Jan 19, 2004 (gmt 0)|
There's a similar discussion at [webmasterworld.com...]
Welcome to WebmasterWorld voice220! :)
| 4:12 pm on Jan 19, 2004 (gmt 0)|
Thanks DaveAtIFG, the above link explained a lot, and may just have solved my dilemma. :)