Google not spidering us properly

Google is hitting our robots.txt and Default.html page every day...


WebMaven

6:57 pm on Nov 25, 2004 (gmt 0)

10+ Year Member



Here are our metatags:

<meta name="keywords" content="keys here">
<meta name="description" content="desc">
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="ROBOTS" content="INDEX,FOLLOW">

Here is our robots.txt file:

# Allow all
User-agent: *
Disallow:

Every day, the Google spider comes by, grabs our robots.txt, then grabs our index.html, and goes no further.

Can anyone think of why this might happen?

The Yahoo Slurp spider, meanwhile, is hammering our site and spidering everything just fine.

jatar_k

10:07 pm on Nov 25, 2004 (gmt 0)

WebmasterWorld Administrator 10+ Year Member



as opposed to having Disallow:

if you are allowing everything for everyone, just upload an empty file; I don't think that syntax is quite right. As with all robots.txt directives, you only tell robots what they can't do. It runs on the assumption that the base rule allows them access to everything, and then excludes from there.
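For what it's worth, Python's standard-library robots.txt parser treats the two forms the same way: an empty `Disallow:` value and a completely empty file both allow everything. A quick sketch (this shows how `urllib.robotparser` reads the file, not necessarily how Googlebot itself parses it; the example.com URL is just a placeholder):

```python
import urllib.robotparser

# Rule set with an empty Disallow: value -- by convention,
# an empty value excludes nothing.
rp_empty_disallow = urllib.robotparser.RobotFileParser()
rp_empty_disallow.parse(["User-agent: *", "Disallow:"])

# A completely empty robots.txt -- no rules at all.
rp_empty_file = urllib.robotparser.RobotFileParser()
rp_empty_file.parse([])

for name, rp in [("empty Disallow", rp_empty_disallow),
                 ("empty file", rp_empty_file)]:
    allowed = rp.can_fetch("Googlebot", "http://example.com/any/page.html")
    print(f"{name}: Googlebot allowed = {allowed}")
```

Both print `True`, so if Googlebot is stopping after index.html, the cause is more likely something other than the robots.txt syntax.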

WebMaven

12:20 am on Nov 26, 2004 (gmt 0)

10+ Year Member



Jatar,

Just updated robots.txt on our site with a 0-byte file. We'll see if Googlebot likes that better.

Anyone have any other ideas that might work?