econman - 7:32 pm on Feb 27, 2011 (gmt 0)
...the new algo has a heavy weighting for identifying similar content, which the algo takes as the antithesis of original, unique content
Interesting hypothesis. That would be consistent with the drop in rankings of a site like ezinearticles.
Is anyone familiar enough with the other major sites for which data is publicly available to judge how well this hypothesis holds up?
For instance, there has been discussion in the thread about hubpages compared to squidoo, and ehow compared to mahalo.
Not that I think this one factor alone explains the algo changes. I'm with Tedster in thinking there are probably many different ingredients going into the recipe. The basic change is a focus on quality rather than relevance alone.
To me, the most interesting questions are: what does Google mean by "low quality," what data (recipe ingredients) are they using to detect it, and to what extent is "low quality" being measured on a site-wide basis rather than document by document?
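For anyone who wants to experiment with the "similar content" hypothesis on their own pages, here is a rough sketch of one classic way near-duplicate text is measured: w-shingling plus Jaccard similarity. To be clear, this is purely my own illustration of the general technique, not a claim about what Google actually does. The sample strings and the window size of 4 words are my own arbitrary choices.

```python
def shingles(text, w=4):
    """Return the set of w-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two pages that differ by a single "spun" word score well below 1.0,
# while identical text scores exactly 1.0.
original = "the quick brown fox jumps over the lazy dog near the river"
spun = "the quick brown fox jumps over the lazy cat near the river"

sim = jaccard(shingles(original), shingles(spun))
print(round(sim, 2))  # 0.38 -- one changed word breaks 4 of 9 shingles
```

A site running something like this across its own article inventory could at least get a feel for how much overlapping boilerplate or lightly rewritten content it is carrying, whatever signal Google is really using.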