These evaluators are carefully trained and are asked to evaluate the quality of search results in several different ways.
1) We sometimes show evaluators whole result sets by themselves
2) or "side by side" with alternatives
3) in other cases, we show evaluators a single result at a time for a query and ask them to rate its quality along various dimensions.
Official Google Blog [googleblog.blogspot.com] [numbering added by tedster]
The only human evaluation I was certain of before was the first one. The idea was that a group of independent editors would each evaluate a (relatively competitive) SERP and give their suggestions for whether a URL in the result was too high, too low, or just right. Only if there was agreement across these independent evaluators would a plus or minus factor get integrated into the ranking for that URL. Differences of opinion would get kicked up to a supervisor level.
These other two approaches would need a different statistical method if the end result is a tweak of the algorithmic rankings. They don't seem to be directly mentioned in the Human Editorial Input Patent [webmasterworld.com].
Otherwise, you're getting into human bias issues - and even those with PhDs have them.
So, now you have someone (subconsciously) choosing one site over another based on color or layout or logo, instead of content and ease of use.
Or at least this is what we were told...
Also, you have to disclose all your web properties before you can be accepted into the position, and they do their homework to verify.
Beyond that, you never act alone in the process of rating the quality of the results. Your "vote" is always compared randomly to many other votes on the same content to make sure you are not being biased...
The general thinking is that if 10 people say something is good, and one person says it is bad... that person's vote is really not counted... They had some interesting moderation techniques in place that would resolve such problems.
Personally, as a webmaster myself, I was very impressed with the process, and felt it was definitely fair and had adequate checks and balances.
btw thanks for the blog post Scott.
But note that this is not the 'site' evaluation every other webmaster here fears so much, it's SERP evaluation... basically QA
It's about the process of:
'Algo tweak/layout testing, Step 1.5: closed beta'
Of course with the system in place it could be used for many things.
Mind you, if a site was cr@p enough to get a manual penalty ( accidents, hacks, user spam aside )...
...i hardly have any sympathy for it and am glad to see it out of the index.
So is this like tasting ice cream for a living? Where do I sign up?
It is not all that easy... =) I did it for about 3 months a few years back. It was fun for about the first week, then it got really boring and tedious.
They try to offer you a number of projects you can work on, so you can mix up your time... But for folks like most of us (webmasters), you might find the work very boring after a while.
For the pay, it was not worth it to me to keep the job. Of course, at the time I was working a full-time job (corporate America), running my websites AND doing the Quality Rater job... that could have been why I had no time for it.
Perhaps now that I no longer work in corporate America, it would serve as a nice mind-numbing escape for a few hours a day... ;-)
It is an interesting read at 44 pages long. I printed it the day it was made public, last year I believe, and I read it when time permits. It is worth the read!
Most definitely worth the read. It's a very well thought out system, I feel, and has led me to be much more demanding of writers who create content for the sites I work on.
So what happens when a site is flagged as non-relevant and gets demoted in its ranking, and the webmaster then decides to make their page more relevant by adding more content? Does Google pick up the change and notify this human team to re-evaluate?
Important question. Once you have been flagged, is there any way to improve your rankings again? Will the system automatically pick up the changes, will the page be evaluated again after the changes have been made and noticed, or do people need to file a reinclusion request?
Anyway, I believe these flags are directly related to the -5, -10, -30, and -950 penalties. When your page is relevant, you won't notice anything, unless another page is more relevant or maybe even vital. But when it comes to the flags 'useful', 'relevant', 'not relevant' and 'off topic', I strongly believe they are directly related to these 'mysterious' penalties. Off topic? Off you go down the drain to -950. Not relevant? Then no need to be in the top 30 for that word, so -30.
That's my opinion...
My experience is that the -950 is more of a "too perfectly relevant" problem - an over-optimization penalty. You may well be right about some of the others, however.
True, over-optimization can also be a factor that comes into play when you're dealing with the -950 penalty. Maybe I should have used the minus 5, 10, 30 and 60 instead in this case ;-)
Nevertheless, I've seen these kinds of 'penalties', and in many cases a human evaluator visited the pages shortly before the rankings dropped down a page or two or three in the SERPs.
There can also be a manual visit from an engineer on Google's search quality team - rather than the human editorial army. They would be checking on something that was flagged for inspection by any number of means - algorithmic, competitor reports, and so on.
This search quality engineer has a lot more clout and might be able to instigate some action as an individual, subject to a supervisor's approval of course.
What I want to emphasize is that the "patented" human editorial input approach is not the only cause of a manual inspection visit showing up in your server logs.
You are right about the two types of evaluation, tedster. They do have their teams across the world, as well as the individual Google engineers. From what I have been seeing in our logfiles, the evaluators not only come from Google IPs, but I also see other, non-Google IP addresses from which the evaluators do their thing. Their user agents are diverse, and, from what I can see, only a few of them have the right to 'pull the trigger' and finally flag your webpage, or your entire site. And when they flag it as spam, I certainly believe that Google engineers will then come by and pay your site a visit to see what they can do to improve their algorithm.