| 8:15 pm on Jul 9, 2008 (gmt 0)|
|"less than ideal results for any query" |
From Google's point of view, they likely want to deliver the best result first, or at least within the top three results.
I don't think I've heard anything specific about how Google judges whether results are 'ideal' or not, but I would speculate that a lot of user data is involved. I imagine the worst-case scenario for a search is when the user doesn't click on any results at all, especially after paging through multiple result pages. Of course, this has to be weighed against the fact that there are bad search queries as well as bad results. "Less than ideal" search queries are one of the reasons Google offers 'related searches' for many words and phrases.
Another factor may well be use of the back button: if a user clicks a result and then comes straight back to the results to click a different one, the chances are high that the result didn't match their search.
And of course, there are also obviously poor results like spam, which can be identified by, among other methods, direct user reports (e.g. the "dissatisfied" link) and by the people Google employs specifically to identify spam within results.
I imagine that clear patterns emerge from the substantial user data Google has, enabling a pretty reliable judgement of whether a particular result 'tweak' improved user satisfaction or not.
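None of this has been confirmed by Google, but the "back button" idea above can be sketched as a simple heuristic over a click log. Everything here is illustrative: the `Click` record, the field names, and the 10-second threshold are all invented for the example, not anything Google has published.

```python
# Purely illustrative sketch of a "pogo-sticking" signal: what fraction of
# clicks on a result bounced straight back to the results page?
from dataclasses import dataclass

@dataclass
class Click:
    query: str
    url: str
    dwell_seconds: float  # time before the user returned to the results page

def pogo_stick_rate(clicks: list[Click], url: str, max_dwell: float = 10.0) -> float:
    """Fraction of clicks on `url` where the user came straight back."""
    on_url = [c for c in clicks if c.url == url]
    if not on_url:
        return 0.0
    quick_returns = sum(1 for c in on_url if c.dwell_seconds < max_dwell)
    return quick_returns / len(on_url)
```

A persistently high rate for a result on a given query would be one way to read "the result didn't match their search" out of aggregate user data.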
| 10:57 pm on Jul 9, 2008 (gmt 0)|
Google also uses a group of human editors to evaluate the SERPs. The help wanted ads for these Google positions have been visible off and on for several years.
No one human editor can directly change the ranking of a url. Instead, a number of them (say, 5 per SERP) rate the same SERP independently of each other. If a URL in some position is seen as either better than ranked or worse than ranked by all the raters (according to the specific criteria they are trained to use), then that human input can be integrated back into the algo and either raise or lower the ranking for that url.
Here's our thread on the human editorial input patent [webmasterworld.com]. And here's a related thread about Google Quality Raters [webmasterworld.com].
| 7:00 pm on Jul 14, 2008 (gmt 0)|
10 ranking changes per week... You could never understand it... You can't keep up... Just pay for ranking... "There are no droids here"...
| 9:03 am on Jul 16, 2008 (gmt 0)|
90% of the search algo factors are 90% stable, and it takes only 10% of your time to learn them. So it's good to ignore the other 10%, which takes 90% of your time, and focus on building your business. Let that 10% be studied by the SEO companies :).
I was talking to an SEO firm and they said they have an application running to check for algo changes and so on. I wonder why people need to do that.
For me it's simple: "Make it talkable and you will win".
| 11:26 pm on Jul 16, 2008 (gmt 0)|
|In the patent, the human editorial input is used in a very creative and algorithmic way. [webmasterworld.com...] |
Tedster - can you explain "creative and algorithmic way"?
| 12:16 am on Jul 17, 2008 (gmt 0)|
1. Give the SERPs for a specific query to a group of human editors, trained in Google's rating criteria and working independently. I've heard a rumor that a group was made up of five individuals. Unverified, of course, but five makes good sense statistically.
2. Get their ratings for each url that appears in that SERP - whether it's really good for the user, should be higher, or should be lower.
3. If every one of the editors agrees, then flag that domain with a factor to raise or lower it for that particular SERP. If there's an obvious disagreement (especially over whether something is or isn't spam), then kick that up to a supervisory level.
4. Rinse and repeat.
5. Also take that feedback and see how the algorithm might be made stronger.
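The aggregation in steps 1-3 can be sketched in a few lines. This is a hypothetical reading of the thread's description, not the patent's actual mechanism: the rating labels, the group size of five, and the escalation rule are taken from the speculation above.

```python
# Hypothetical sketch of the rater-consensus step described above.
RAISE, OK, LOWER = "raise", "ok", "lower"

def aggregate_ratings(ratings: list[str]) -> str:
    """Combine independent editor ratings for one URL on one SERP."""
    if len(set(ratings)) == 1:
        # Unanimous agreement: flag the url to be raised, lowered, or left alone.
        return ratings[0]
    # Disagreement: kick the decision up to a supervisory level.
    return "escalate"
```

The design point is that no single rater's opinion moves a url; only unanimous agreement produces a flag, which is why independence between raters matters.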
| 12:59 am on Jul 17, 2008 (gmt 0)|
Do you mean [ SERP ] as an industry or subject vertical, or a regional SERP [ e.g. google.co.uk ] ?
| 1:38 am on Jul 17, 2008 (gmt 0)|
The patent makes it pretty clear - the editorial input is for the rankings on a specific query term (Search Engine Results Page = SERP). If you don't rank for important terms in your industry, you won't get inspected ;)
| 3:05 am on Jul 17, 2008 (gmt 0)|
Not from any patent, just some logical thoughts: if a website produces negative signals (maybe because of bad linking patterns, content issues, etc.), could it get inspected manually before triggering some filters?
I remember it happening with Yahoo once. One of my friends was ranking in the top 10 for a very high-competition term. All of a sudden the site was thrown to nowhere while the rest of the SERP remained the same (so not on a per-query basis; it looked site-specific). After talking to some internal Yahoo guys, we learned it was a manual filter. Only after a series of email exchanges could they get it removed.
| 5:05 am on Jul 17, 2008 (gmt 0)|
|If you don't rank for important terms in your industry, you won't get inspected ;) |
I wonder what the top subjects / industries are that would be most prone to ranking inspections. Any ideas?
[edited by: Whitey at 5:14 am (utc) on July 17, 2008]
| 5:20 am on Jul 17, 2008 (gmt 0)|
Rough guess (very rough) - they'd look at the top 70% of all queries by volume, rather than doing anything industry-specific. They might also look at any "burstiness" that hadn't been checked before, or recently enough. From what we can tell, Google has thousands of human evaluators, and that means a lot of human input is possible.
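For what that rough guess would mean in practice, "the top 70% of all queries by volume" is just the head of the query distribution: take queries in descending order of volume until 70% of total volume is covered. The function and numbers below are this thread's guesswork, not anything from Google.

```python
# Illustrative only: select the head of a query-volume distribution that
# covers a target fraction of total search volume.
def head_queries(volumes: dict[str, int], coverage: float = 0.70) -> list[str]:
    total = sum(volumes.values())
    picked, covered = [], 0
    for query, vol in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= coverage * total:
            break
        picked.append(query)
        covered += vol
    return picked
```

Because query volume is heavily skewed toward the head, a small fraction of distinct queries would cover 70% of volume - which is what makes human evaluation of that slice feasible at all.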
I also think they're working on statistical means to back up (and eventually replace a lot of) the human editorial input.
Along the lines of the need for human editorial input, I recently read some heavy math demonstrating pretty conclusively that search as a whole is not a process that submits to ordinary statistical analysis - standard deviations and all that jazz. In other words, AI as it's normally conceived is not going to take over here, because the model keeps getting surprised too often. That may be why we hear about Google-style analysis working with "no modeling at all [webmasterworld.com]".
There was an interview with a top Google employee earlier this year where he said they have a machine-learning version of the algo running in parallel with the live one, getting intensive, eyes-on checking. And they don't feel the machine-learning version is up to the job as of now, so it doesn't run live.
I've been wondering recently if all this cycling we hear reported might be an artifact of trying to integrate some machine learning into the live SERPs. No one's likely to tell us, of course, but it's fun for me to think about anyway - I can't stop myself, in fact.