
Could 744 permissions on robots.txt cause problems?

All other files are set to 644.


dbar

3:57 pm on Jul 29, 2005 (gmt 0)

10+ Year Member



I'm at wits' end after having every page completely dropped from Google's index (PR0 on all pages) several months ago. Googlebot still visits the home page and robots.txt file every couple of days, but that's it. I asked a Google engineer to look at it in N.O., but nothing has changed.

Anyway, I just noticed that the permissions on all my files are 644 except for robots.txt, which is 744. I wouldn't think this would make a difference, but I'm no expert.

This is my robots.txt file:

User-agent: *
Disallow: /cgi-bin/

Am I missing something here?
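
For what it's worth, the syntax can be sanity-checked with Python's stock parser; a rough Python 3 sketch, with example.com standing in for my domain:

# Confirm that a spider parsing this robots.txt is still allowed in.
# Minimal sketch; "example.com" is a placeholder for the real host.
from urllib import robotparser

rp = robotparser.RobotFileParser("http://example.com/robots.txt")
rp.read()                                    # fetch and parse the live file
print(rp.can_fetch("*", "http://example.com/"))           # expect: True
print(rp.can_fetch("*", "http://example.com/cgi-bin/x"))  # expect: False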

ThomasB

4:03 pm on Jul 29, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I'd suggest removing it completely and seeing what happens. If Googlebot starts spidering your entire site again, upload the file again and watch whether the problem comes back.

dbar

4:08 pm on Jul 29, 2005 (gmt 0)

10+ Year Member



Thanks, ThomasB.
Just removed it, so I'll wait and see.

jdMorgan

4:08 pm on Jul 29, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Validate it [searchengineworld.com], and then look elsewhere. As long as 'user' has read access (digit is 4, 5, 6, or 7), that is not the cause of your problem; 744 and 644 differ only in the owner's execute bit, which is irrelevant when the server reads a plain text file.
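
If you want to confirm the bits yourself, here's a minimal Python sketch; the "robots.txt" path is an assumption -- adjust it for your docroot:

# Print the octal permission bits of robots.txt and confirm it is
# world-readable. The path "robots.txt" is an assumption -- adjust
# for your docroot.
import os, stat

mode = stat.S_IMODE(os.stat("robots.txt").st_mode)
print(oct(mode))                  # e.g. 0o744 or 0o644
print(bool(mode & stat.S_IROTH))  # True if "others" can read the file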

Check your server response headers [webmasterworld.com] for all important pages and make sure they return either 200-OK or 301-Moved Permanently; anything else (302, 401, 403, 404, 410) may affect your listings.
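
A quick way to see the raw status line without a redirect being silently followed; a minimal Python 3 sketch, where example.com and the path are placeholders:

# Print the raw status for one page without following redirects, so a
# 301/302 shows up as-is. "example.com" and "/" are placeholders.
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)   # want 200 (or 301 for pages that moved)
conn.close()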

Check your logs for gaps that would indicate long server outages -- unreliable hosting can cause the SEs to drop you (I think you would have noticed, though, if it was severe enough to cause SE problems).
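
If you'd rather automate that check, here's a rough Python sketch; it assumes an Apache combined-format log and the placeholder filename access.log:

# Flag gaps longer than an hour between consecutive requests in an
# Apache combined-format access log. "access.log" is a placeholder.
from datetime import datetime, timedelta

last = None
for line in open("access.log"):
    if "[" not in line:
        continue                  # skip lines without a timestamp
    stamp = line.split("[", 1)[1].split("]", 1)[0]
    ts = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S %z")
    if last and ts - last > timedelta(hours=1):
        print("gap:", last, "->", ts)
    last = ts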

Having eliminated the major technical-problem possibilities, look into the SEO aspects -- I hear there are a lot of SEOs around here... ;)

Jim

dbar

7:41 pm on Jul 29, 2005 (gmt 0)

10+ Year Member



Thanks jdMorgan,
I'm doing as you suggested, and everything seems fine so far with the validation and header checks. For the header checks, I clicked each link, copied the URL from the browser address bar into the header checker, and everything came back fine.

However, looking at the raw HTML in the actual file, I noticed a few links with white space, like this:

href = "http://

When looking through View Source, the exact same links showed up like this:

href =" http://

If this is the problem, I'll be happy but kicking myself: I thought I had found the problem a while back, when a 301 redirect pointed to a URL with white space (like href =" http://) and caused a 500 error. The links noted above are not redirects, so I didn't catch them.
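
For anyone who wants to scan whole pages for that pattern, here's a crude Python sketch; "page.html" is a placeholder for a locally saved copy of the page:

# Flag href attributes with stray white space around the "=" or just
# inside the opening quote. "page.html" is a placeholder filename.
import re

pattern = re.compile(r'href\s*=\s*"\s+|href\s+=', re.IGNORECASE)
for n, line in enumerate(open("page.html"), 1):
    if pattern.search(line):
        print(n, line.strip())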