| 3:47 pm on Jun 22, 2005 (gmt 0)|
When keyword rank drops, validate all your pages first, and not just with the W3C.org validator (it doesn't catch broken table tags and some other broken markup). Also run a link checker on your site.
If you're concerned about over-optimization, do a keyword density check to make sure you haven't focused too heavily on particular words or phrases. Anything over 20% is suspect. (Google doesn't use the meta keywords tag, so exclude it from your counts, but do include the meta description tag.)
Excluding Googlebot from images actually improved traffic for a new site I installed it on, back when the only traffic it was getting was from Google Images, so that is probably not the reason for your fall in rank.
Re hijacks -- GoogleGuy said to look at the site: command. Hijacking URLs will appear there, not in the allinurl:/inurl: results; those are probably tracking codes.
Scraper sites that steal your content and title can indeed hurt your rank, however, so it's best to try to get the copies removed.
I wrote an article on how to stop hijackings which you can find from my profile.
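A basic link checker like the one recommended above can be sketched in a few lines of Python: pull every href out of a page's HTML, resolve it against the page's URL, and request each one to see what status comes back. This is a minimal sketch, not a full crawler; the function names (`extract_links`, `check_link`) are just illustrative.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html, base_url):
    """Return all anchor targets in `html`, resolved against `base_url`."""
    parser = LinkExtractor()
    parser.feed(html)
    return [urljoin(base_url, href) for href in parser.links]

def check_link(url, timeout=10):
    """Return the HTTP status code for `url`, or the error text on failure."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code          # e.g. 404 for a broken link
    except URLError as e:
        return str(e.reason)   # DNS failure, timeout, etc.
```

Run `extract_links` over each page of your site and flag anything where `check_link` comes back 4xx/5xx.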
| 3:52 pm on Jun 22, 2005 (gmt 0)|
Just how do you do a keyword density check anyway... manually counting? Or is there a tool to use? And the percentage you arrive at -- is that from counting every single word on the page including the meta tags, then counting how many times the keyword(s) appear in the tags and the page's body, and taking the ratio? Or is it some other method?
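There is no single agreed formula, which is part of why different tools report different numbers. One common method is: (occurrences of the phrase × words in the phrase) ÷ (total words on the page) × 100, computed over the visible body text with HTML stripped (and optionally the meta description appended). A minimal sketch under that assumption -- the function name `keyword_density` is just illustrative:

```python
import re

def keyword_density(text, phrase):
    """Percent of the text's words accounted for by occurrences of `phrase`.

    Uses one common formula: (hits * words-in-phrase) / total-words * 100.
    `text` should be the visible body text with HTML already stripped;
    whether to also include the meta description is a judgment call.
    """
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = re.findall(r"[a-z0-9']+", phrase.lower())
    if not words or not phrase_words:
        return 0.0
    n = len(phrase_words)
    hits = sum(
        1 for i in range(len(words) - n + 1)
        if words[i:i + n] == phrase_words
    )
    return 100.0 * hits * n / len(words)
```

For example, `keyword_density("blue widget sale blue widget deals", "blue widget")` counts 2 hits of a 2-word phrase in 6 words, i.e. about 66.7% -- which by the 20% rule of thumb above would be far too high.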
| 4:15 pm on Jun 22, 2005 (gmt 0)|
My validation is near perfect - I am a web developer by trade.
I'm completely baffled, because I don't employ black hat (at least I don't think I do - maybe the rules have changed), and while everyone else was having problems from mid-May on, my site was slowly moving up the SERPs for all manner of keywords and keyword combinations. It's only in the last few days that things have gone pear-shaped.
Visits from Googlebot have been sparse during this time, but I noticed that today the bot has been spidering a great deal. Maybe I was forgotten about - hopefully, once the bots have been and gone, things will improve.
My gut feeling is that all this movement is purely to increase Adwords revenues from those shaken out.
| 1:47 pm on Jun 23, 2005 (gmt 0)|
Bump for msg #3. ;)
I found a "Keyword density check" tool, and it said "Keyword density is terrible" and that it was 25% on the page I tested. I assume it's saying that is too low since I tried another page that was 14% and it said the same thing. HUH? I heard that something like 25% is way too high!? So what is the correct amount?
| 3:17 pm on Jun 23, 2005 (gmt 0)|
This morning I noticed a referral from a scraper directory firing 302s like there's no tomorrow: madroll dot com.
I am really worried that my site is going to tank again out of the blue at the next update.
I know GG stated that a site: command would reveal hijacks, but when my site tanked, all my backlinks had been attributed to the phantom page Google created for the script link. And that didn't show up in the site: command at all.
I repeat: a link:http://directory.com/php?mywebsite showed exactly my backlinks.
So, no, I'm not reassured. I'd really like some very firm reassurances that this is NOT going to happen again, to me or anyone else whose site gets picked up by these parasites.
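If you want to see for yourself whether one of these directory links answers with a 302 (rather than linking to you normally), you have to fetch it without following redirects -- urllib follows them silently, so the sketch below drops down to http.client to capture the first response, the way a spider would see it. This is a minimal sketch; the function name `first_response` is just illustrative.

```python
import http.client
from urllib.parse import urlsplit

def first_response(url, timeout=10):
    """Fetch `url` WITHOUT following redirects; return (status, location).

    A 301/302 whose Location header points at your own site is the
    pattern people call a "302 hijack". urllib.request follows redirects
    automatically, so we use http.client to see the initial response.
    """
    parts = urlsplit(url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=timeout)
    try:
        path = parts.path or "/"
        if parts.query:
            path += "?" + parts.query
        conn.request("GET", path)
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()
```

If the status comes back 302 and the Location is your own URL, the directory is redirecting rather than linking, and it's worth pursuing removal.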
| 6:02 pm on Jun 23, 2005 (gmt 0)|
If I had an inbound link with my title and description showing up in allinurl: or anywhere else in the SERPs, I would nuke first and ask questions later.
What googleguy said is that they have 'filtered' them out of the site: results. You read into that what you want.
| 6:29 pm on Jun 23, 2005 (gmt 0)|
What this critter reads into that is they are now hidden from folks looking in site: searches.
Sort of like test accounts in real production databases. They just sit there till all he¦¦ breaks loose. Like various regulators finding out about them in insurance company databases or bank databases or credit card databases, etc... and more etc...
I can name some companies, however you can all do searches.
| 9:38 am on Jun 24, 2005 (gmt 0)|
When did GG say Google was filtering out the site: results? Since he also stated that the site: command was the only way to find a hijack, does that mean that Google has rendered us incapable of telling whether we've been hijacked, or identifying our hijacker?
I must understand incorrectly.
| 10:48 am on Jun 24, 2005 (gmt 0)|
Msg #105 is the one where GoogleGuy talks about the filtering. He didn't actually say "filtering", but read the posts afterwards - we got no reply about it, so we ended up assuming he meant filtering the site: results.
Google HAS made a lot of changes since then - I'm not even sure the 302 hijack exists anymore.
| 11:02 am on Jun 24, 2005 (gmt 0)|
|I'm not even sure if the 302 hijack exists anymore. |
We are busy testing just that, under controlled conditions. A large number of "victim" pages have been created for the sole purpose of carrying out the experiment, on a non-sandboxed domain. We are waiting for them to rank. When they do, we will attempt to hijack them in a myriad of ways and see whether there is any effect on rank.
Maybe hijacks exist, maybe they don't, but we'll find out once and for all.
We don't know when we will have results, because we do not know the timeline of execution of a googlejack. It might depend on which googlebot picks up the attacking link, the schedule of duplicate content filter application, who really knows? We might see no effect until the next update or two.
We'd love to be wrong.