claaarky - 11:22 am on Jan 13, 2013 (gmt 0)
TMS, I think it is possible to get reliable data from a third of the world's Internet users.
Surveys are routinely conducted on a representative sample, and this has been found to be a reliable indication of what everyone thinks. Apply that to Chrome and, in my view, with a sample of a third of all Internet users in the world it's highly likely you could come up with very representative metrics about every site out there.
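For what it's worth, the sampling claim is standard statistics. Here's a minimal sketch of the usual margin-of-error formula for a proportion (it assumes simple random sampling, which Chrome's self-selected user base wouldn't strictly satisfy, so treat the numbers as illustrative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p
    measured on a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Even modest samples pin a site-level metric down fairly tightly,
# and the error shrinks with the square root of the sample size:
print(margin_of_error(0.5, 1_000))      # roughly +/- 3 percentage points
print(margin_of_error(0.5, 1_000_000))  # roughly +/- 0.1 percentage points
```

The point is that precision depends on absolute sample size, not on the fraction of the population sampled, so a sample of a third of all Internet users is far more than survey methodology would even require.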
Add to that the fact that search engineers know all about how to turn small samples of data into something meaningful by combining them with other, similar types of data. There was a post here with a link to an article by an ex-Google engineer who explained how they do it.
My view is that Chrome data is used in Panda, which, as we know, is an 'add-on' to the main algo. I think the main algo does all the usual relevance calculations, and user metrics then come into play with Panda. I also think it's likely that Google uses data collected by other means to verify the user metric data collected from Chrome.
I really don't see how anything man-made can be more accurate at determining what people like than actual human feedback in the form of user metrics.
Everything I see in Google right now tells me rankings are heavily influenced by the way humans react to websites. There are so many factors that humans assess in a second every day when using the web, taking into account past experience, changing moods/preferences, trust, quality, etc., and there is no way a computerised process could keep up with that. User metrics, however, reflect it every day. Understand what those numbers mean and you can tell a lot about what people think of a site overall.
Compare that to other, similar sites and you've got a pretty good quality-based scoring system you can overlay onto your main algo.