indyank - 5:41 pm on Jun 19, 2011 (gmt 0)
What I meant was that I was trying to pass off, as unique, articles copied from elsewhere, with about 15% of each article's content rewritten to make it look like a totally new article.
So you added 15% unique content to articles scraped from other places, and Google now considers them unique. That is interesting. None of the sites I deal with that were affected by Panda have that kind of duplicate content. While you could pass off articles with just 15% (or 15/115) unique content, I am struggling with sites that are far more unique!
I have programmed my site so that thin pages get a link to them that is hidden from Google, in addition to adding noindex, nofollow meta tags to them, so they are not crawled again and are also removed from the index.
So you are hiding those links from Googlebot, and Google is still fine with it!
Now I understand how Google should be treated going forward.
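For anyone curious about the noindex/nofollow approach described above, here is a minimal sketch of how a site template might emit the robots meta tag for thin pages. The word-count threshold and function name are illustrative assumptions, not the poster's actual implementation.

```python
# Hypothetical sketch: pages judged "thin" get a restrictive robots meta
# tag so search engines stop indexing them and drop them over time.
# The 200-word cutoff is an assumed threshold, chosen only for illustration.

THIN_WORD_COUNT = 200


def robots_meta(word_count: int) -> str:
    """Return the robots meta tag a page template would emit."""
    if word_count < THIN_WORD_COUNT:
        # Ask crawlers not to index the page or follow its links.
        return '<meta name="robots" content="noindex, nofollow">'
    # Normal pages carry the default (permissive) directive.
    return '<meta name="robots" content="index, follow">'


print(robots_meta(50))   # thin page: noindex, nofollow
print(robots_meta(800))  # normal page: index, follow
```

Note that noindex only works if the page can still be crawled at least once more, so the tag has to be seen by Googlebot before the page disappears from the index.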