|Delayed indexing and ranking - due to Google raters?|
For the longest time, whenever I wrote an article I was confident that it would be indexed for some desired keyword phrases within minutes, even seconds, of my posting it online. A quick check of those keywords often showed top-25 rankings, and quite often I was able to pull off a top-10 spot.
Things are different now.
When I post an article online, it is usually ranked in the #25 area of the SERPs immediately if I've put any effort into it at all, but occasionally it doesn't show up for hours. When it doesn't show up for hours, one of two scenarios unfolds.
#1 - It shows up hours later, around rank 40-50 in the SERPs.
#2 - It shows up in the top 10.
Both scenarios are extremely predictable in that the article will not deviate from one of the two outcomes above, much like an instant listing will never be top 10 for anything but long-tail keywords.
Why? Could it be that when I post an article that Google's algo deems worthy of the top 10, this triggers a rater's flag and humans must look at it first? Do humans essentially control the sites in the top 10 to a greater degree than they ever have? I have no evidence of this, but I can repeat the same observations like clockwork now. There is no deviation, which implies, to me, that I need a human green light to visit page one with a fresh article. Could it be?
I don't personally think so - mostly because human raters are used to assess changes in the algorithm before they go live. They do not rate individual live SERPs.
In fact, trying to do that just wouldn't be a solution that scales very well. As I see it, Google would need at least hundreds of thousands of human raters to do something like what you describe.
At the same time, you definitely bring up a changed pattern, one that I'm sure is real for your site and probably many others. However, I have some sites I work with that are not seeing this pattern at all. Newly published articles for them still rank very fast, sometimes within minutes.
What type of search term are you looking at here? Is it an "exact title match", a 3-word phrase, a high volume keyword, or something else?
Maybe there's been a change in QDF (query deserves freshness) criteria.
I don't think human raters could be effective enough to play the role Google's algorithm has been playing. If nothing else, I believe the efficiency of the search engine would never be the same if it were handled by human raters.
Generally 3-4 word phrases. Most articles are still ranked extremely quickly, just not in the top 10. When I get a slow-to-rank article, I know I have a *shot* at prime real estate in the rankings a few hours or days later when it finally shows up.
Let's talk about scale a bit, since we're on the subject.
- Rating every page is NOT scalable, no argument there.
- Googlebot does the heavy rank lifting; the raters only review the flagged pages.
- Rating pages that the algo has determined *would* rank in the top 10 positions for, say, medium-traffic keywords: humanly possible?
It would all come down to how many raters they employ vs. how many pages per day a rater can handle vs. how many pages Googlebot deems might be ready for page-one results.
You have to keep in mind that the top 10 results don't change that often, so I don't suspect the number of sites the algo finds per day that are suddenly worthy of consideration is unattainable for human evaluators. It's not ALL pages, or even all NEW pages, that would trigger such a flag for the raters; SERPs just don't churn that quickly.
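The capacity question above can be put into a quick back-of-envelope sketch. Every number here is a made-up assumption for illustration, not a real Google figure:

```python
# Back-of-envelope feasibility check with entirely hypothetical numbers:
# could a modest rater pool clear the daily "new top-10 candidate" flags?

flagged_pages_per_day = 500_000   # assumption: pages the algo newly deems top-10 worthy each day
pages_per_rater_per_day = 100     # assumption: careful quality reviews one rater can do daily

raters_needed = flagged_pages_per_day / pages_per_rater_per_day
print(f"Raters needed: {raters_needed:,.0f}")  # prints "Raters needed: 5,000"
```

Under these made-up inputs the headcount is plausible, which is the poster's point: if SERPs churn slowly, the flag volume might be small enough for humans. The conclusion flips entirely if the flag volume is orders of magnitude larger.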
Does anyone have figures on the number of raters Google employs and/or outsources? Is it really out of the realm of possibility that humans DO clear flags on new entries to the top 10? Google did change how they handle long-tail keywords, and the results now contain far fewer sites, which *might* have been done to make human raters (human flag *clearers*) viable.
|- Rating every page is NOT scalable, no argument there. |
Not only every page - but every page for every potential query phrase. That's where the scale issue gets REALLY nuts.
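The page-times-query blowup can be made concrete with another hypothetical calculation (both inputs are rough assumptions, not measured figures):

```python
# The scale problem compounds: rating every page *for every potential query*
# multiplies an already huge index by the number of phrases per page.

indexed_pages = 50_000_000_000   # assumption: rough order of magnitude for a web index
queries_per_page = 10            # assumption: even just 10 relevant phrases per page
pairs = indexed_pages * queries_per_page

ratings_per_rater_per_day = 100  # assumption, same as before
rater_days = pairs / ratings_per_rater_per_day
print(f"{rater_days:.1e} rater-days")  # prints "5.0e+09 rater-days"
```

Billions of rater-days for a single pass is why exhaustive page-by-query rating is off the table, and why any human role would have to be limited to a tiny flagged subset.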
Apparently, Google employs some 30,000 folks.
How many of them do they need to code the algos and man the server farms?
I read somewhere that these folks also have a lot of slack time on their hands and a handy PC, Android tablet, or phone on super-fast internal networks...
Anyway, I am never going to convince the algo-loving members of the www that sometimes manual is best, and that Google shares my opinion on that too :)
Perhaps if I could persuade Amit or Matt to comment :)
But do keep in mind that it might also be part of some kind of an A/B or multivariate testing built into their algorithms to determine what their users like the best. I would believe this more than the "flags and human raters approval" theory.
Google does use human raters, but I don't think it is in the way that you seem to have explained here.
I think Mr. Charlton has a point: do these articles target "newsy" subjects that are pertinent to current events, or the complete opposite?
Keeping a QDF change as a possibility, I also very much agree with tedster on this point with regard to the role of human raters...
|...human raters are used to assess changes in the algorithm before they go live. They do not rate individual live SERPs. |
Judging by the new Raters' Guidelines, in the early stages they're tagging sites and pages that are being considered by early test algorithms, before these are rolled out publicly. These are cross-confirmed and cross-identified in a great many ways, including ways that are independent of the evaluators.
You wouldn't assign raters to documents; you assign them to queries ($). That's fairly easy to keep an eye on, as most webmasters/SEOs could attest.
|You wouldn't assign raters to documents, you assign them to queries... |
The question is when you assign them to anything that affects current results, and IMO there's quite a remove in this situation.
I frankly would not have even considered raters as a possible cause of this problem, but the OP chose to frame it that way, so I'm responding to the hypothetical issue raised.
A likely contributor - [webmasterworld.com...]