Introduction to Google Ranking [googleblog.blogspot.com]
I thought this principle of "no query left behind" was interesting, but I would love to find out more about how they determine what "less than ideal results for any query" means.
Do they monitor specific keyword phrases and use those as benchmarks? Are they low volume searches? Are they numerous queries for the same keyword phrase? What indicators do they use to modify their results?
What factors do we know about?
An old NY Times article gave an example for local results, but I'm curious what other factors have been mentioned that I may have missed.
Curious if the upcoming posts will have any more insight.
I was also curious about the impact of their "We make about ten ranking changes every week". I don't seem to notice these changes much in my sector, and I wonder whether they target concentrated niches or area-specific results.
Curious if anyone has any insight into the matter.
[edited by: Robert_Charlton at 6:10 pm (utc) on July 9, 2008]
[edit reason] added Permalink [/edit]
"less than ideal results for any query"
From Google's point of view, they likely want to deliver the best result first, or certainly within the top three results.
I don't think I've heard anything specific about how Google judges whether results are 'ideal' or not, but I would speculate that a lot of user data is involved. I imagine the worst-case scenario for a search is the user not clicking on any results at all, especially after paging through multiple result pages. Of course, this has to be weighed against the fact that there are bad search queries as well as bad results. "Less than ideal" search queries are one of the reasons Google offers 'related searches' for many words and phrases.
Another factor may well be use of the back button: if a user clicks a result and then comes straight back to the results page to click a different one, the chances are high that the result didn't match their search.
And of course, there are also obviously poor results like spam, which among other methods can be identified by direct user reports (e.g. the "dissatisfied" link) and by the people Google employs specifically to identify spam within results.
I imagine that clear patterns emerge from the substantial user data Google has, enabling a pretty reliable judgement of whether a particular result 'tweak' improved user satisfaction or not.
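The click-and-back-button behaviour described above could, in principle, be boiled down to a per-query dissatisfaction score. The sketch below is entirely hypothetical: the `Click` record, the dwell-time field, and the 10-second "quick bounce" threshold are my assumptions, not anything Google has published:

```python
from dataclasses import dataclass

@dataclass
class Click:
    url: str
    dwell_seconds: float  # time spent on the result before returning to the SERP

def pogostick_rate(clicks, short_dwell=10.0):
    """Fraction of clicks followed by a quick bounce back to the results page.

    A high rate suggests the results didn't match the query; an empty click
    list (the user clicked nothing at all) is treated as total dissatisfaction.
    """
    if not clicks:
        return 1.0
    bounces = sum(1 for c in clicks if c.dwell_seconds < short_dwell)
    return bounces / len(clicks)
```

Aggregated over many sessions for the same query, a score like this could flag "less than ideal results" worth a closer look, which is roughly the pattern-from-user-data idea above.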
No single human editor can directly change the ranking of a URL. Instead, a number of raters (say, five per SERP) rate the same SERP independently of each other. If a URL in some position is seen as either better or worse than its ranking by all the raters (according to the specific criteria they are trained to use), then that human input can be integrated back into the algo and either raise or lower the ranking for that URL.
I was talking to an SEO firm and they said they have an application running to check for algo changes and so on. I wonder why people need to do that.
For me it's simple: "Make it talkable and you will win".
1. Have several editors independently review the same SERP.
2. Get their ratings for each URL that appears in that SERP - whether it's really good for the user, should be higher, or should be lower.
3. If every one of the editors agrees, then flag that domain with a factor to raise or lower it for that particular SERP. If there's an obvious disagreement (especially over whether something is or isn't spam), then kick that up to a supervisory level.
4. Rinse and repeat.
5. Also take that feedback and see how the algorithm might be made stronger.
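The unanimity rule in those steps could be sketched as a toy aggregation function. The verdict labels and the "escalate on disagreement" behaviour are my guesses at how such a pipeline might be wired, not Google's actual system:

```python
def aggregate_ratings(ratings):
    """Combine independent rater verdicts for one URL on one SERP.

    ratings: list of verdict strings, each "higher", "lower", or "ok".
    Returns the unanimous verdict if all raters agree (that signal can
    then feed back into the algo), or "escalate" on any disagreement,
    mirroring the supervisory hand-off described above.
    """
    verdicts = set(ratings)
    if len(verdicts) == 1:
        return ratings[0]   # all raters agree
    return "escalate"       # disagreement: kick it up a level
```

The point of requiring unanimity (rather than a majority) is that no single rater's opinion can move a URL, which matches the "no one human editor" claim earlier in the thread.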
Not from any patent, just some logical thoughts: if a website produces some negative signals (maybe because of bad linking patterns, content issues, etc.), can it get inspected manually before triggering some filters?
I remember it happening with Yahoo once. One of my friends was ranking in the top 10 for a very high-competition term. All of a sudden the site was thrown to nowhere while the rest of the SERP remained the same (so it wasn't based on the query term; it looked site-specific). After talking to some internal Yahoo guys, we found out it was a manual filter. Only after a series of email exchanges could they get it removed.
I also think they're working on statistical means to back up (and eventually replace a lot of) the human editorial input.
Along the lines of the need for human editorial input, I recently read some heavy math that demonstrated pretty conclusively that search as a whole is not a process that submits to ordinary statistical analysis - standard deviations and all that jazz. In other words, AI as it's normally conceived is not going to take over here, because the model keeps getting surprised too often. That may be why we hear about Google-style analysis working on "no modeling at all [webmasterworld.com]".
There was an interview with a top Google employee earlier this year in which he said they have a machine-learning version of the algo running in parallel with the live one, and it gets intensive, eyes-on checking. They don't feel the machine-learning version is up to the job as of now, so it doesn't run live.
I've been wondering recently if all this cycling we hear reported might be an artifact of trying to integrate some machine learning into the live SERPs. No one's likely to tell us, of course, but it's fun for me to think about anyway - I can't stop myself, in fact.