diberry - 2:16 pm on Sep 30, 2012 (gmt 0)
I'm with Elsmarc. We talked in great detail after Penguin about user metrics, and the problem is getting the algo to understand what the metrics mean. For example, a high bounce or exit rate often means the site was disappointing. But it can also mean the site gave the user precisely what they wanted very quickly - like the overall rating on a product as reviewed by users.
If anyone COULD build an algo that could take ALL the needed variations into account AND recognize which type of site is which and which type of query is which, it would be Google. But can it be done? I don't think so - not at this time, at least.
--How people search is still evolving. Most people are in their infancy with search. Not everything they do indicates something about the SERPs - sometimes it just indicates their own quirks or lack of understanding of how search works.
--Site quality is even more open to interpretation than it used to be. Many years ago, only geeks were interested in the internet, and geeks tend to have a fairly simple definition of quality derived from academic standards. For example, geeks would reject Ehow, but now they're outnumbered by non-geek users who don't know about academic standards and think Ehow must be good (although, as we've speculated, there's a feedback loop: many people see Ehow at the top of the SERPs, assume THAT means it's good, and don't even try to evaluate it themselves).
I submit as evidence of this Singhal's assertion that people were searching more with Google, therefore Google was rockin'. We all instantly thought "...or they're not finding what they want, but instead of trying another engine they're trying another query, or six." Maybe that was just spin - I'm sure both possibilities occurred to Google people - but it shows how differently the same metric can be interpreted.