Forum Moderators: open
I've written to Google about it and the response was:
"Thanks for your email. Your site has not been penalized by us. Your site is in our index"
But since GoogleBot refuses to visit my site, I'm sure my site is penalized for having repeated keywords (the keywords were deleted a long time ago, but they still appear in the Google cache).
Any ideas what's going on? Could Google be wrong?
Leif
Leif, did you check your logs manually? Does your site get spidered by other bots? Is your robots.txt okay?
No, all visits from GoogleBot are logged to a specific logfile.
My site is spidered from other bots, yes.
I have no robots.txt file.
I have plenty of inbound links - even more now than before (January).
Leif
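Leif, for what it's worth, here's a quick way to double-check the raw access log rather than your own logfile. The log path and the sample lines below are made up for illustration - substitute your real Apache access log:

```shell
# Hypothetical sample lines standing in for an Apache combined access log
cat > /tmp/sample_access_log <<'EOF'
66.249.64.1 - - [13/Mar/2003:19:49:54 +0000] "GET / HTTP/1.0" 200 1024 "-" "Googlebot/2.1"
10.0.0.5 - - [13/Mar/2003:20:01:02 +0000] "GET / HTTP/1.0" 200 2048 "-" "Mozilla/4.0"
EOF

# Count Googlebot hits (case-insensitive match on the user-agent string)
grep -ci "googlebot" /tmp/sample_access_log
```

Run the same grep against your live log; if it comes back zero for the last two months, Googlebot really never connected, which points at a network/firewall issue rather than a penalty.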
Having said that, however, with the available data I see no reason to believe that your PR5 site is carrying a penalty. Penalties usually seem to take the form of PR reduction rather than constant PR with reduced spidering. As others have suggested, keep the content and links coming and things should sort themselves out.
Rogerd, you are clearly experienced with SEO and I am only a newbie, but suggesting he wait longer when he has already waited two months without any sign of Googlebot seems a little off to me.
Considering the cached version is extremely old as well, it is clearly having a big negative influence on what is probably a commercial site. Imagine putting in months of hard work for no reward from your primary source of new visitors, and no signs that things will change.
If it were me, I would hope that the Google staff handling email have fallen into the cut-and-paste routine, as rogerd mentioned. Keep persisting with clear, concise, and detailed emails to Google, and hope you strike a tech willing to investigate further.
Good luck.
My _guess_ at the problem is that it is not a penalty. I would suspect a problem on the host (surely couldn't be a firewall rule, could it?), or a glitch at google.
Simply waiting and hoping Googlebot comes back runs into the old tech saying: problems that go away by themselves tend to come back by themselves. Find the issue and resolve it.
Now originally this site did get connected to some bad neighborhoods by some links that I neither wanted nor solicited.
With Google the site is a PR3 and has not moved from that in two months, but neither has it been blown out of the index. I just get the feeling that Google is busy, and that if it has time it'll get to my site.
I am trying to encourage it with the usual moves (links, changes, new content), but at the end of the day my business is not dependent on Google for its success and therefore I don't lose too much sleep over it. People still come to it.
Three small questions:
1. Do you have a lot of dynamic pages on your site? Think about this situation: a *.php or *.asp file is not changed, but the data in your database does change. If G checks the date the php file was last changed, the server sends the old date to G, not reflecting the changes to your database. I read about this problem on WW some time ago.
2. Do you have an 'Expires' meta in your files?
3. Can you try to get an outside link to one of your new pages? If this link is found by G and this page is spidered or not, it can help you to understand what is going wrong.
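On point 1, one workaround is to have the dynamic page send a Last-Modified header based on the newest row in the database instead of the script's file date. A minimal sketch (the helper name and the timestamp are made up; the timestamp here just happens to match the Date header quoted later in this thread):

```python
# Hypothetical sketch: a dynamic page whose script file never changes can still
# advertise freshness by deriving Last-Modified from the database, not the file.
from email.utils import formatdate

def last_modified_header(newest_row_timestamp: float) -> str:
    """Format a Unix timestamp as an RFC-compliant HTTP-date header line."""
    return "Last-Modified: " + formatdate(newest_row_timestamp, usegmt=True)

# Pretend the newest article in the database was saved at this time
print(last_modified_header(1047584994.0))
# -> Last-Modified: Thu, 13 Mar 2003 19:49:54 GMT
```

In a real setup you would feed in something like `MAX(updated_at)` from your content table; the point is only that the header should track the data, not the script file.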
This is how my headers used to look:
hdr>HTTP/1.0 200 OK
hdr>Server: Apache/1.3.26 (Unix) AuthMySQL/2.20 PHP/4.1.2 mod_gzip/1.3.19.1a mod_ssl/2.8.9 OpenSSL/0.9.6g
hdr>Expires: Wed, 12 Mar 2003 19:49:54 GMT
hdr>Date: Thu, 13 Mar 2003 19:49:54 GMT
hdr>Pragma: no-cache
hdr>Cache-control: no-cache,no-store,must-revalidate
hdr>Content-Type: text/html; charset=ISO-8859-1
I removed the following headers some time ago (don't know the date):
hdr>Pragma: no-cache
hdr>Cache-control: no-cache,no-store,must-revalidate
However, since I'm logging every request, I should have noticed if GoogleBot made a request - even if it was just looking at the headers, right? Or: is it possible that GoogleBot just does a HEAD on every page before it fetches it at a later time? But doesn't HEAD make the server generate the whole page? I'm confused.
Leif
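Leif, on the HEAD question: yes, for dynamic pages the server typically still runs the whole script and just discards the body, so a HEAD would show up in your per-request logging. A toy experiment with Python's stdlib server (standing in for Apache + PHP; the handler is hypothetical) shows the idea:

```python
# Toy demo: a HEAD request still triggers page generation on the server side,
# but only the headers travel back to the client.
import http.client
import http.server
import threading

requests_seen = []  # records which methods actually caused page generation

class Handler(http.server.BaseHTTPRequestHandler):
    def build_page(self):
        # Stand-in for the expensive dynamic page generation
        requests_seen.append(self.command)
        return b"<html><body>hello</body></html>"

    def do_GET(self):
        body = self.build_page()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_HEAD(self):
        body = self.build_page()  # the page is still generated...
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()  # ...but no body is sent

    def log_message(self, *args):
        pass  # silence per-request console logging

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("HEAD", "/")
resp = conn.getresponse()
body = resp.read()  # empty: HEAD responses carry headers only
conn.close()
server.shutdown()

print(resp.status, len(body), requests_seen)  # 200 0 ['HEAD']
```

So if Googlebot were doing even bare HEAD probes, they would hit your page code and appear in your logfile - which makes its total absence from your logs the more telling clue.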
[webmasterworld.com...]
[webmasterworld.com...]