TheMadScientist - 7:03 pm on Feb 28, 2011 (gmt 0)
"The content on my site is entirely unique."
I don't know your site, but...
I'm absolutely NOT trying to pick on anyone or say their site is 'spammy' - Not at All.
IMO there's more than 'one thing' going on...
Here are some thoughts:
1.) Throw a query-dependent reading level into the rankings, like QDF (query deserves freshness), only different (QDRL).
2.) Add an 'origination & similarity' filter.
3.) Do some 'other things' with document classification and behavior.
The content on your site (the pages affected) may not fit a QDRL (query-dependent reading level) type of filter or assessment.
The content on your site may be totally unique, but you might not have been the first of, say, 10 to publish (or, more importantly, get spidered) something similar (topically and informationally) out of all the pages GoogleBot has spidered.
The content (the documents affected) may fit a classifier pattern for a 'spammy site' based on phrase frequency, related phrases, and 'natural language' factors, plus (possibly) user behavior and/or links.
The point is: IMO, fixing the rankings of the pages (sites) affected by this change is going to have to happen on a site-by-site and document-by-document basis, and only after the results have settled down completely for a while, so we can be sure Google-side adjustments aren't still rolling out.