| 2:34 am on Sep 15, 2008 (gmt 0)|
This brings to mind the phenomenon we discussed in another thread: Position 4 Gets Strange [webmasterworld.com]. Particularly because you mentioned a cycle of "flashing" between #4 and #14, or #3 and #13. Position #4 is particularly sensitive, since it's one of the spots where a universal search result can be forced (excuse me, blended) into the SERP.
That earlier discussion missed the human review component that you mention seeing, however.
In the case of very competitive phrases, the editorial squad does evaluate the first-page SERP - and if there's agreement across several independent evaluations that a given URL is ranking unnaturally high, it can get demoted. The flat value of "minus 10" would be a new twist here, however.
The "unnatural backlinks" observation would line up with the earlier discussion, to a degree at least. I know of several sites that intentionally went after ranking on a competitive phrase and all of a sudden, they were stuck on page two when it seemed that, without human intervention at least, they "should be" on page one.
| 4:18 am on Sep 15, 2008 (gmt 0)|
we were at the top for a year, now recently dropped to 4/5 ..... strange to me
| 12:20 pm on Sep 15, 2008 (gmt 0)|
b2net, I have been penalized for specific keyword phrases also that are quite competitive. My site is highly relevant for those keywords, and the keywords are in the domain name. I did a lot of link building using the penalized phrase before, so my suspicion was that because a large percentage of my backlinks contained those phrases, that was probably why I was 950'd, because I'm nowhere to be seen. Otherwise it's a manual penalty by a human reviewer, but then why 950 me when my site is highly relevant? So I'm leaning towards an automated penalty. Does that make sense?
|Martin Ice Web|
| 1:26 pm on Sep 16, 2008 (gmt 0)|
b2net and tedster,
why should Google penalize a site by 10+ positions via human review if it's clean and, by its relevant content, should rank on the first page?
That does not make sense! Doesn't Google want to offer the most relevant results?
| 5:39 pm on Sep 16, 2008 (gmt 0)|
The human evaluators might feel that other URLs make better offerings to the end user - especially giving consideration to whatever Google has learned about user intention for that particular query.
| 12:31 am on Oct 1, 2008 (gmt 0)|
Today I've lost one fairly major term that I've held for almost 2 years... A lot of other terms remain, and long-tail variations of this term continue to drive traffic. Very strange, and I'm hoping it's a glitch, as my site is very relevant... Oh yeah, and the site that is now number 1 is a pile of fraudulent trash...
| 4:40 am on Oct 1, 2008 (gmt 0)|
It is not strange that G has this new algo. As a webmaster, you should be ready for any position in the SERPs. Who would advertise with AdWords on G if their listings were number one or two all the time? The idea is that simple, even if your site is very clean by Google's standards.
| 4:53 am on Oct 1, 2008 (gmt 0)|
Let's focus back on the topic that the original post set out - a one-page drop after a human review. There is too much complexity in today's Google to mix in anything different.
The only way to know if your one-page drop falls into this category is to watch your server logs for Google IP addresses that aren't googlebot. If you're not watching your server logs and you are struggling with ranking problems, this might be a good step to add to your toolkit.
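As a minimal sketch of that log-watching step (in Python, with hypothetical log lines; the "66.249." prefix is an illustrative assumption — Google's actual crawler IP ranges vary and should be verified independently), you could flag requests from Google IP space whose user-agent doesn't identify as Googlebot:

```python
import re

# Apache/Nginx "combined" log format: IP ident user [date] "request" status size "referer" "ua"
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[.*?\] ".*?" \d+ \S+ ".*?" "(.*?)"$')

# Illustrative prefix only -- verify against Google's published crawler
# ranges before treating any hit as a real human review visit.
GOOGLE_PREFIXES = ("66.249.",)

def suspected_human_reviews(log_lines):
    """Return (ip, user_agent) pairs from Google IP space whose
    user-agent does not mention Googlebot."""
    hits = []
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip lines that aren't in combined log format
        ip, ua = m.group(1), m.group(2)
        if ip.startswith(GOOGLE_PREFIXES) and "Googlebot" not in ua:
            hits.append((ip, ua))
    return hits

# Hypothetical sample lines: one normal Googlebot crawl, one browser-like
# visit from Google IP space, one unrelated visitor.
sample = [
    '66.249.66.1 - - [01/Oct/2008:12:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '66.249.70.5 - - [01/Oct/2008:12:05:00 +0000] "GET /page HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (Windows; U)"',
    '10.0.0.7 - - [01/Oct/2008:12:06:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]
print(suspected_human_reviews(sample))
```

Only the second line is flagged: it comes from the assumed Google prefix but presents a browser-style user-agent, which is the pattern the post above suggests watching for.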
| 9:37 am on Oct 1, 2008 (gmt 0)|
Mine came back after a few hours - glitch.
| 9:11 pm on Oct 1, 2008 (gmt 0)|
This is probably a dumb question, but how do you know if a manual reviewer has visited your site? Do you just look for referrers that come from the google domain with a URL that includes "rating task" in it?
| 9:51 pm on Oct 1, 2008 (gmt 0)|
Martin Ice Web, you've been deceived into thinking that the Google machine algorithm represents "relevance." It doesn't. It's simply a mathematical function that (in the Googletechs' opinion) often represents an approximation of relevance.
The page [vuw.ac.nz...] contains one independent observer's list of things that should matter when evaluating the "quality" of a website. I would specifically emphasize "Scope", since it is an often-overlooked but significant criterion in Google's list, and "Content", since that covers various issues which a computer can't possibly address -- but which human reviewers do.
The computer doesn't know the difference between the American Cancer Society and the various boiler-room fraud fundraisers with names of the form adverb-diseaseName-collectiveNoun. The computer doesn't know the difference between Joe Blow (plagiarist extraordinaire and serial doorway-page spammer) and Professor Joseph Blough (Nobel Prize winner and global philanthropist). But people know people who know people who DO know the difference. The computer can't spot the difference between Eliza'd plagiarized paragraphs and genuine coherent thought (well, some people can't either, but there's always a Verity Stob who can).
These are the extreme cases: most of us fall between them, but when human reviewers get involved, we are all subject to those kinds of judgments.
Which are far more relevant than the results of any computer algorithm, no matter how complex.