diberry - 3:21 pm on Oct 9, 2012 (gmt 0)
We started collecting data on users interacting with a page in any way. We knew whether they scrolled to the end of the article, clicked to other pages, clicked on ads, moved a map, played a video, etc. When users didn't do any of these things we assumed they used the back button. We found a huge correlation between this metric and the rankings of the pages for their targeted keywords.
We also found a huge correlation between the amount of content on the page and the bounce-back rate. When there was minimal content (just a product name and a bunch of "be the first to...") the bounce-back rate could be 90%. When we had a full complement of content (reviews, prices, places to buy, photos, videos, professional review links) the bounce-back rate could be as low as 15%.
We concluded that either Google had a very sophisticated algorithm to measure the amount of content on a page, or that they were doing a very straightforward measure of bounce-back and using that heavily to rank web pages for queries.
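The bounce-back measure described above can be sketched in a few lines. This is a hypothetical reconstruction, not the poster's actual tracking setup: the event names and session format are illustrative assumptions. A visit is counted as a bounce-back only if none of the tracked interactions fired.

```python
# Hypothetical sketch of the bounce-back metric described above.
# Event names are illustrative assumptions, not the poster's real schema.
ENGAGEMENT_EVENTS = {"scroll_to_end", "click_internal", "click_ad",
                     "map_move", "video_play"}

def bounce_back_rate(sessions):
    """sessions: one list of event names per visit.
    A visit with no engagement event is assumed to be a back-button bounce."""
    if not sessions:
        return 0.0
    bounces = sum(1 for events in sessions
                  if not ENGAGEMENT_EVENTS.intersection(events))
    return bounces / len(sessions)

# A thin page: 9 of 10 visitors do nothing measurable -> 90% bounce-back
thin_page = [[], [], ["click_ad"], [], [], [], [], [], [], []]
print(bounce_back_rate(thin_page))  # 0.9
```

The interesting part of the poster's claim isn't the arithmetic, of course, but that a number this simple correlated so strongly with rankings.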
Great analysis! That's really useful information - thank you for sharing.
I think there are two separate issues here:-
1) Panda
2) The main algo
And Penguin, too. For all we know, both the zoo animals could be using a data source (or sources) that's not made available to the main algo.
Hmmm. I wonder if this niche would also include an ecommerce site and an informational site (me) in the same industry?
That's just the sort of thing I'd love to figure out. If Google does compare sites it groups into niches, how does it define the niches? Possibly not at all the way we do, or even the way searchers seem to.
BTW, do we actually have any indications that Google is comparing sites within niches? It makes total sense, but I'm just wondering if there's any way we can back up this theory with data or something Google has said.