A little tidbit for those of us who are curious about Google's human evaluators (who are employed across many countries and languages, by the way).
The only human evaluation process I was certain of before was the first one. The idea was that a group of independent editors would each evaluate a (relatively competitive) SERP and indicate whether each URL in the results was ranked too high, too low, or just right. Only if these independent evaluators agreed would a plus or minus factor be integrated into that URL's ranking; differences of opinion got kicked up to a supervisor level.
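The consensus rule described above could be sketched roughly like this (the function name, labels, and adjustment values are all my own invention for illustration; nothing here is Google's actual implementation):

```python
def consensus_adjustment(ratings):
    """Given independent evaluators' verdicts on one URL's position
    ("too_high", "too_low", or "just_right"), return a hypothetical
    ranking tweak, or None to escalate to a supervisor."""
    if len(set(ratings)) != 1:
        return None  # evaluators disagree -> kick up to supervisor level
    verdict = ratings[0]
    if verdict == "too_high":
        return -1    # unanimous "too high" -> demote the URL
    if verdict == "too_low":
        return +1    # unanimous "too low" -> promote the URL
    return 0         # unanimous "just right" -> leave ranking unchanged

# Example: three evaluators agree the URL is ranked too low.
print(consensus_adjustment(["too_low", "too_low", "too_low"]))    # 1
# Example: a split verdict gets escalated rather than applied.
print(consensus_adjustment(["too_low", "just_right", "too_low"])) # None
```

The point of the sketch is just the gating: no adjustment of any kind is applied unless the independent raters are unanimous.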
These other two approaches would require a different statistical method if the end result is to be a tweak of the algorithmic rankings. They don't seem to be directly mentioned in the Human Editorial Input Patent [webmasterworld.com].