| 1:43 am on Sep 24, 2012 (gmt 0)|
I don't personally think so - mostly because human raters are used to assess changes in the algorithm before they go live. They do not rate individual live SERPs.
In fact, trying to do that just wouldn't be a solution that scales very well. As I see it, Google would need at least hundreds of thousands of human raters to do something like what you describe.
At the same time, you definitely bring up a changed pattern, one that I'm sure is real for your site and probably many others. However, I have some sites I work with that are not seeing this pattern at all. Newly published articles for them still rank very fast, sometimes within minutes.
What type of search term are you looking at here? Is it an "exact title match", a 3-word phrase, a high volume keyword, or something else?
| 4:51 am on Sep 24, 2012 (gmt 0)|
Maybe there's a change in QDF criteria.
| 5:28 am on Sep 24, 2012 (gmt 0)|
I don't think human raters would be effective enough to play the role Google's algorithms currently play. If nothing else, the search engine's efficiency would never be the same if it were handled by human raters.
| 7:17 am on Sep 24, 2012 (gmt 0)|
Generally 3-4 word phrases. Most articles are still ranked extremely quickly, just not in the top 10. When I get a slow-to-rank article, I know I have a *shot* at prime real estate in the rankings a few hours or days later when it finally shows up.
Let's talk about scale a bit, since we're on the subject.
- Rating every page is NOT scalable, no argument there.
- Googlebot does the heavy rank lifting, the raters only answer the flagged pages.
- Rating pages that the algo has determined *would* rank in the top 10 positions for, say, medium-traffic keywords - humanly possible?
It would all come down to how many raters do they employ vs how many pages per day a rater can handle vs how many pages googlebot deems might be ready for page one results.
You have to keep in mind that the top 10 results don't change that often, so I don't suspect the number of pages the algo finds per day that are suddenly worthy of consideration is unattainable for human evaluators. It's not ALL pages, or even all NEW pages, that would trigger such a flag for the raters - SERPs just don't churn that quickly.
Does anyone have figures on the number of raters Google employs and/or outsources? Is it really out of the realm of possibility that humans DO clear flags on new entries to the top 10? Google did change how they handle long-tail keywords - the results now contain far fewer sites - which *might* have been done to make human raters (human flag *clearers*) viable.
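The capacity argument above (raters employed vs. pages per rater per day vs. pages flagged) can be sketched as a back-of-envelope calculation. Every number below is an invented assumption for illustration, not a real figure from Google:

```python
# Hypothetical back-of-envelope check: could a rater pool clear the
# pages the algo flags as new top-10 candidates each day?
# ALL numbers below are made-up assumptions, not real figures.

raters = 10_000                     # assumed rater headcount
pages_per_rater_per_day = 200       # assumed throughput per rater
flagged_pages_per_day = 1_500_000   # assumed new top-10 candidates per day

daily_capacity = raters * pages_per_rater_per_day
print(f"Daily rating capacity: {daily_capacity:,}")
print(f"Flagged pages per day: {flagged_pages_per_day:,}")
print("Feasible under these assumptions:",
      daily_capacity >= flagged_pages_per_day)
```

The point is only that feasibility hinges entirely on the flag rate: if the algo flags a bounded number of fresh top-10 candidates per day, the human-review theory isn't absurd on its face; if it flags everything new, it is.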
| 12:35 pm on Sep 24, 2012 (gmt 0)|
|- Rating every page is NOT scalable, no argument there. |
Not only every page - but every page for every potential query phrase. That's where the scale issue gets REALLY nuts.
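The page-times-query blowup described above can be made concrete with a toy calculation (again, all figures are invented for illustration):

```python
# Toy illustration of the combinatorial scale problem: rating happens
# per (page, query) pair, not per page. ALL figures are invented.

indexed_pages = 30_000_000_000   # assumed index size
queries_per_page = 50            # assumed distinct phrases a page could rank for

pairs_to_rate = indexed_pages * queries_per_page
print(f"(page, query) pairs to rate: {pairs_to_rate:,}")

# At an assumed 200 ratings per rater per day, rater-days to cover it once:
rater_days = pairs_to_rate // 200
print(f"Rater-days required: {rater_days:,}")
```

Even with generous assumptions, the pair count lands in the trillions, which is why rating *everything* is off the table and any human role has to be confined to a narrow, flagged subset.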
| 1:43 pm on Sep 24, 2012 (gmt 0)|
Apparently, Google employs 30,000 folks.
How many of them are needed to code the algos and man the server farms?
I read somewhere that these folks also have a lot of slack time on their hands and a handy PC, Android tablet, or phone, on super-fast internal networks...
Anyway, I am never going to convince the algo-loving members of www that sometimes manual is best, and that Google shares my opinion on that too :)
Perhaps if I could persuade Amit or Matt to comment :)
| 2:19 pm on Sep 24, 2012 (gmt 0)|
But do keep in mind that it might also be part of some kind of an A/B or multivariate testing built into their algorithms to determine what their users like the best. I would believe this more than the "flags and human raters approval" theory.
Google does use human raters, but I don't think it is in the way that you seem to have explained here.
| 5:23 pm on Sep 24, 2012 (gmt 0)|
I think Mr. Charlton has a point - do these articles target "newsy" subjects that are pertinent to current events, or the complete opposite?
| 6:07 pm on Sep 24, 2012 (gmt 0)|
Keeping a QDF change as a possibility, I also very much agree with tedster on this point with regard to the role of human raters...
|...human raters are used to assess changes in the algorithm before they go live. They do not rate individual live SERPs. |
Judging by the new Rater's Guidelines, in the early stages raters tag sites and pages under consideration in test algorithms, before those algorithms are rolled out publicly. These ratings are cross-confirmed and cross-identified in a great many ways, including ways that are independent of the evaluators.
| 6:32 pm on Sep 24, 2012 (gmt 0)|
You wouldn't assign raters to documents; you'd assign them to queries ($). It's fairly easy to keep an eye on that - most webmasters/SEOs could attest to it...
| 5:47 am on Sep 25, 2012 (gmt 0)|
|You wouldn't assign raters to documents, you assign them to queries... |
The question is when you assign them to anything that affects current results, and IMO there's quite a remove in this situation.
I frankly would not have even considered raters as a possible cause of this problem, but the OP chose to frame it that way, so I'm responding to the hypothetical issue raised.
| 7:15 am on Oct 4, 2012 (gmt 0)|
A likely contributor - [webmasterworld.com...]