| 11:44 am on Oct 23, 2011 (gmt 0)|
|To answer the original question Mr Smith, I personally would estimate it to be the first 5 websites for terms with around 5k searches per month. |
I have had several sites subjected to these reviews. It has always happened when they broke into the top three, for 'good money' terms. My hypothesis is that it is the value of the search term that matters, not the number of searches.
| 1:29 pm on Oct 23, 2011 (gmt 0)|
Has Google created a new definition of Spam in this set of guidelines?
| 6:52 pm on Oct 23, 2011 (gmt 0)|
In 2010 Google ran 20,000 experiments, which resulted in a final 500 implemented changes. I think evaluation from Google's quality raters is just one signal on their huge list. These evaluations are also used in the machine learning process, so that their systems can learn the patterns of quality content.
| 10:51 pm on Oct 23, 2011 (gmt 0)|
|Has Google created a new definition of Spam in this set of guidelines? |
It's pretty hard to create a "new" definition of a word that's as fuzzy in meaning as "spam" is. But they definitely have focused and refined what they mean by spam... and there seem to be at least two aspects: both the methods used to rank the pages and the content itself.
| 12:07 am on Oct 24, 2011 (gmt 0)|
I just finished reading a very interesting guide from end to end, for the second time. It appears to me that you end up being queued for a review based on something similar to the manual spam reports. At least that's what I get from it.
- Sites seem to be triggered for review, or distributed to raters, based on a keyword and the ranking of the website in need of a review. (Very similar to what a person submits in a spam report)
- Keyword stuffing, whoa?! They put the ball completely in the raters' hands by saying something to the effect of: we ask you to assign a Spam flag if you think the number of keywords on the page is excessive and would annoy or distract a real user.
- Spam ratings (as defined by the document) are separate from quality ratings. It states that pages that are thin content or low quality should be marked as such: low quality, slightly relevant, off-topic or useless. IN ADDITION TO the Spam flag.
- Raters seem to be segmented into specialties or locales, or by some other metric. So I get the feeling it's not just stay-at-home moms reviewing our stuff.
- Ads, specifically PPC text ads or "Sponsored Links", are presented very negatively. Yes, they state that not all PPC text ads are bad, but that's like saying... "Red Widgets are known to cause cancer when you look at them... but not all are designed to cause cancer". Any sane person would simply conclude it's best to assume ALL are.
- To be fair, it appears that it takes more than one person to list your page as spam. One person suggests that it's spam, along with a brief and pointed reason, then two or more others in their node or team, or whatever they call it, also review it.
I wish someone (Like the NY Times, or another mainstream rag) would take this entire document public. It has a "Proprietary and Confidential - Copyright 2011" statement in the footer. Surprisingly, Google itself indexes more than 4900 other pages (link [google.com]) with that same statement. Why are those documents OK for the Google index, while its own "Proprietary and Confidential - Copyright 2011" document deserves to be excluded?
Finally, if our web pages are being assessed by what I assume are our peers, why are we not made aware of the findings, in either WMT or some other way? Good, bad or otherwise? If nothing more, it would be nice if we had the ability to "request a rater review", so we had a better assessment of what Google expects.
| 2:05 am on Oct 24, 2011 (gmt 0)|
What qualifies any website for review is, apparently, showing up on a top page for one of the higher volume searches. Websites are evaluated in the context of a query, not just "on their own". So wouldn't any feedback need to be on a query-by-query basis?
| 9:05 am on Oct 24, 2011 (gmt 0)|
|I wish someone (Like the NY Times, or another mainstream rag) would take this entire document public |
This is what is needed; at the moment it is just a few disgruntled webmasters shouting.
As for what qualifies a search term and website combo for review by EWOQ: I think it is a lot less than 50k. I have sites that performed OK, building searches up nicely, and one by one they have tumbled as they reached around 500 searches daily.
| 2:13 am on Nov 3, 2011 (gmt 0)|
|Many of these reviewers don't seem very technical and frequently get fired after poor performance reviews. |
Here's a current job ad for those reviewers (US applicants only):
|Leapforce is looking for highly educated individuals for an exciting work from home opportunity. Applicants must be self motivated and internet savvy. This is an opportunity to evaluate and improve search engine results for one of the world's largest internet search engine companies. |
Chance for WW members to do a better job?
| 2:31 am on Nov 3, 2011 (gmt 0)|
Thanks, anallawalla. The bottom of that page has an embedded YouTube video that discloses at least some of the process Google uses to determine and evaluate changes - including the human team evaluations.
How Google makes improvements to its search algorithm [youtube.com]
| 4:32 am on Nov 3, 2011 (gmt 0)|
|What qualifies any website for review is, apparently, showing up on a top page for one of the higher volume searches. Websites are evaluated in the context of a query, not just "on their own". So wouldn't any feedback need to be on a query-by-query basis? |
Or could it also be based on the total number of times logged-in users blocked the site as a whole, via the site-blocking feature?
|I wish someone (Like the NY Times, or another mainstream rag) would take this entire document public |
It is odd that, as far as I can tell, no mention of it has made it into the mainstream media by now.
| 7:08 am on Nov 3, 2011 (gmt 0)|
I'm surprised nobody has said anything about the lengthy "off-topic or useless" section. In the version I downloaded it's hard to overlook, because it's all highlighted in painful yellow. Page after page of illustrative examples. So many possible wisecracks, so little time...
| 2:43 pm on Nov 3, 2011 (gmt 0)|
It is an interesting doc.
However, after reading this doc carefully, I am sure most of you think it seems like Google doesn't count on its software alone to locate and track poor quality content.
Human rating must be biased.
It reminds me of when I was a student working on public surveys, and how I ended up cheating/misleading after conducting several of them, even if not intentionally.
| 4:02 pm on Nov 3, 2011 (gmt 0)|
"Discloses" is a pretty big word for a PR video. To me it sounded more like they were just trying to say that there's some "human touch" in Google and not everything is decided by a machine (algo) or a nerdy engineer.
|discloses at least some of the process Google uses to determine and evaluate changes - including the human team evaluations. |
Also, the review process (to the extent that they touch upon it, at about the 1:00 minute mark in the video) is shown as a tool for fine-tuning the results of a query. I understood it as the reviewers having to pick which SERPs they like better, rather than review the actual sites that the SERP contains.
In other words, they present the human review as a tool used for something that's completely non-controversial: fine-tuning of their own product (SERPs). Talking about reviewing other people's sites, in my view, would not make for a great PR video, because they would want to avoid the notion of their employees having a say in whether an online business lives or dies. I think from a PR standpoint they would rather say that "the machine did it".
Anyhow, as far as I understand, the OP's intent was to ask about the punitive aspect of Google's manual review. I've yet to see Google disclose that particular aspect.
In my own experience, the sites I've had penalized or banned outright have acquired SERP sitelinks just a few weeks before the event. It has not necessarily translated into more traffic than usual, but when I see a site's sitelinks in WMT are no longer empty, I start worrying. Does anyone else see that, or am I just being paranoid?
| 4:03 pm on Nov 3, 2011 (gmt 0)|
It contradicts itself so frequently that it's hard to blame the raters who at best can only keep a summary of the document in their heads. A lot of:
"If you see X, the site is evil."
and then a page later:
"Not all sites with X are evil"
Which one do you think the raters will remember?
| 8:16 pm on Nov 18, 2011 (gmt 0)|
Ok, a word of caution about this ('cause I'm not sure if it is really him), but someone calling themselves Matt Cutts posted on PotpieGirl's blog saying the following:
|Normally we do not comment on ranking methods but I'll explain a misconception: input from manual raters is used only in the rarest of cases when a non-brand cracks the top ten for high value money terms. |
She hasn't yet responded - if it is him, I'm sure she'll try to let us know.
| 11:33 pm on Nov 18, 2011 (gmt 0)|
That comment sure has Matt's tone - and we know her blog is on the Google radar so I don't doubt it.
| 11:52 pm on Nov 18, 2011 (gmt 0)|
So, I guess that's where we are headed: no non-brands for high value money terms in the first 10? Interesting, not even a word about relevancy for the term. Well, at least he didn't mince his words this time.
|the rarest of cases when a non-brand cracks the top ten for high value money terms |
| 12:13 am on Nov 19, 2011 (gmt 0)|
I never had any doubt - if an unfamiliar site starts ranking for a big money term it will get a manual check. That doesn't mean it can't rank ever, but if the manual raters question the ranking as well, then its time on top will be limited.
Heck, if a domain's total SERP impressions shows a significant spike, then I'm pretty sure it's going to get a personal look, too - even if no really big money terms are involved. If I ran a search engine, I know that I'd demand that kind of vigilance.
| 1:35 pm on Nov 19, 2011 (gmt 0)|
Turns out it wasn't him; see this conversation [twitter.com...]
| 4:03 pm on Nov 19, 2011 (gmt 0)|
To clarify the above... Matt Cutts did not make the comment attributed to him above. Another Matt... Matt McGee... caught the false attribution.
(Since hash tags break in WebmasterWorld link redirects, and some of the links in the above aren't permalinks, here's an abridged reconstruction with links or urls that will work....)
|Matt McGee November 18, 2011 at 9:12 pm |
It would be nice to get some confirmation about whether this is/was really Matt leaving a comment. The underscore in his name and the lack of an avatar make me think it's not. But I'll welcome being wrong.
|Matt Cutts |
@mattmcgee deny--wasn't me. Thanks for spotting.