I used the Analyze robots.txt tool: the robots.txt file is error-free, although it was last downloaded on June 22. I don't understand why Google would have a problem downloading it now when it hasn't changed in years. Google's definition of "unreachable" is somewhat vague, and I'm not sure where to go from here.
I've looked at the logs and everything looks OK, with a 200 response for each Googlebot request. But from June 23rd to the present, Googlebot requests the robots.txt file and then leaves without downloading any other files.
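For reference, this is roughly how I'm pulling the Googlebot robots.txt hits out of the access log (a sketch in Python; the sample lines, IPs, and timestamps below are made up, and the regex assumes Apache combined log format):

```python
import re

# Hypothetical sample lines in Apache combined log format; in practice
# this would read the real access log instead.
SAMPLE_LOG = """\
66.249.66.1 - - [23/Jun/2011:04:12:01 -0400] "GET /robots.txt HTTP/1.1" 200 120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.66.1 - - [23/Jun/2011:04:12:05 -0400] "GET /123.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
203.0.113.9 - - [23/Jun/2011:05:00:00 -0400] "GET /robots.txt HTTP/1.1" 200 120 "-" "SomeOtherBot/1.0"
"""

# ip, ident, user, [timestamp], "method path proto", status, size, "referer", "ua"
LINE_RE = re.compile(
    r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+) "[^"]*" "([^"]*)"'
)

def googlebot_robots_hits(log_text):
    """Return (timestamp, status) for every Googlebot request to /robots.txt."""
    hits = []
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        ip, ts, method, path, status, size, ua = m.groups()
        if "Googlebot" in ua and path == "/robots.txt":
            hits.append((ts, int(status)))
    return hits

print(googlebot_robots_hits(SAMPLE_LOG))
```

One caveat I'm aware of: a 200 in my own logs only proves the server answered; it doesn't prove the response Googlebot actually received was intact if something upstream (a firewall or proxy) is interfering.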
I've contacted my host to ask whether they're doing any IP blocking; they say they're not, but will look into it further.
Is this a glitch? A penalty?
What I find in the unreachable URLs section are two paths for the same page: /cgi-local/softcart.exe/123.html?E+scstore and /123.html
I thought Googlebot doesn't execute JS. In years past, adding Disallow: /cgi-local/ to my robots.txt file solved that issue.
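To rule out a typo on my end, a quick local sanity check with Python's stdlib robots.txt parser confirms the Disallow rule should cover one path but not the other (a sketch; the two rule lines below are my assumption about the relevant part of the file, with a wildcard user-agent group):

```python
from urllib import robotparser

# Mirror the relevant rules from robots.txt (assumed, not the full file).
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /cgi-local/",
])

# The softcart path should be blocked, the plain page should be crawlable.
print(rp.can_fetch("Googlebot", "/cgi-local/softcart.exe/123.html?E+scstore"))  # False
print(rp.can_fetch("Googlebot", "/123.html"))  # True
```

If this matches what Google reports, the rules themselves are fine and the problem is with fetching the file, not parsing it.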
I have other ecommerce sites running with the same ecommerce software without a hitch.
Is Googlebot ignoring robots.txt and then considering this duplicate content?
Should I remove the /cgi-local/softcart.exe urls that are listed in WMT?
How can I further test whether Googlebot is really having a problem reading my robots.txt file, or whether it's some other problem?
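One idea: besides fetching the file myself with Googlebot's user-agent string (e.g. curl -A "Googlebot/2.1" against the live URL), I can at least rule out file-level problems. A rough checklist in Python; the 500 KiB figure is Google's documented robots.txt size cap as I understand it, and the other checks are common gotchas rather than things Google necessarily rejects:

```python
MAX_BYTES = 500 * 1024  # Google's documented robots.txt size limit (~500 KiB)

def robots_sanity(raw: bytes):
    """Return a list of potential problems with the raw robots.txt bytes."""
    problems = []
    if len(raw) > MAX_BYTES:
        problems.append("file larger than 500 KiB")
    if raw.startswith(b"\xef\xbb\xbf"):
        problems.append("UTF-8 BOM at start (usually tolerated, but worth knowing)")
    try:
        raw.decode("utf-8")
    except UnicodeDecodeError:
        problems.append("not valid UTF-8")
    if b"\x00" in raw:
        problems.append("NUL bytes present")
    return problems

print(robots_sanity(b"User-agent: *\nDisallow: /cgi-local/\n"))  # []
```

If the file passes all of this and a Googlebot-UA fetch from an outside network returns the same bytes, that would point back at something between Google and the server (firewall, rate limiting, DNS) rather than the file itself.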
My ecommerce software uses JS to start the cart and injects an additional pathway through the cgi-bin.
I guess the next step is to figure out whether there are possible problems, unrelated to the robots.txt file, that would trigger this error.