diberry - 3:30 pm on Jun 30, 2012 (gmt 0)
So, if Google relies on user metrics, then while it may be evaluating some onpage aspects, what it's really doing is evaluating ITS OWN ABILITY TO MATCH USER INTENT to page content.
Very well put.
In Google's case at least, I think they are intentionally sacrificing SERP quality in the short term so they can learn what they need to learn for the long term. They need to replace dependence on all the ranking factors that can be easily manipulated.
And this makes good sense. But to make this a success, isn't Google eventually going to need a lot more UMs than they can collect from Chrome or buy from ISPs? I mean, if they had access to full, un-fudged analytics from every site, they could easily tell when a bounce is their own fault or the site's. But even if they used the data from Google Analytics, that would be incomplete and subject to manipulation. So how do they reckon they'll ever have a complete enough set of UMs across enough sites to produce quality SERPs this way? Forgive me if this has been discussed ad nauseam elsewhere and I just didn't see it.