tedster - 12:04 am on Jul 11, 2012 (gmt 0)
Thanks for describing your situation so completely. It really helps us understand what you are seeing - and it could point to a rather subtle issue.
To be clear: the re-written articles would all be considered "unique" or "original". They aren't copy and paste jobs.
This puts me in mind of a couple of things. First, as far back as 3-4 years ago, various Google people commented that an exact match is not necessary for content to be considered "duplicate", at least to the degree that there could be a rankings impact. Google has invested a lot of resources in sophisticated semantic processing - to the degree that even a few years back, content that was only 80% "duplicate" could still be tagged as such. Why they still get fooled by outright scraping is a mystery, but in each case the effect does seem to be relatively short-term.
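Just to make that "80% duplicate" idea concrete - Google's actual system is not public, but a classic textbook approach to near-duplicate detection is Jaccard similarity over word shingles. A hedged sketch (the threshold and shingle size here are illustrative assumptions, not anything Google has confirmed):

```python
def shingles(text, k=3):
    """Return the set of k-word shingles in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b, k=3):
    """Jaccard similarity of two texts' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog near the river"
rewrite  = "the quick brown fox jumps over the lazy dog by the river"

# A light rewrite still shares most shingles with the original, so its
# score stays high - a detector with, say, a 0.5 cutoff would flag it
# as a near-duplicate even though it is not a copy-and-paste job.
print(jaccard(original, rewrite))
```

On that view, a "unique" rewritten article can still trip a duplicate filter, because the overlap of phrasing - not an exact byte match - is what the score measures.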
Second, in a recent interview with Eric Enge, Matt opened with a discussion of content that was relatively similar or derivative:
While they're not duplicates they bring nothing new to the table. It's not that there's anything wrong with what these people have done, but they should not expect this type of content to rank.
Google would seek to detect that there is no real differentiation between these results and show only one of them so we could offer users different types of sites in the other search results...
...if Jane is just churning out 500 words about a topic where she doesn't have any background, experience or expertise, a searcher might not be as interested in her opinion.
Matt seems to be pointing to an algorithm component we don't have a name for - something they are doing that might well explain the results Tallon described above.