What I have noticed is that since my website relies on user-submitted content (as well as much content that is original), much of that content has been tagged as supplemental.
Most of those pages used to outrank other sites carrying the exact same content, but since then my site's pages and content have been flagged.
My question is: is there a relation between pages going supplemental because of duplicate content and the ranking of a site's index page (which is unique text and content)?
Is there a threshold where, if too high a percentage of a website's content is duplicate content, Google applies some kind of penalty to the index page's ranking?
Any help on this is appreciated.
As with many things with the almighty Goo, there is much we 'don't know'. Then again, I don't know what McDonald's secret sauce is made of, nor the source code for Windows, and I manage to co-exist with them.
From the 'conventional wisdom' that has been debated, I'd say somewhere in the area of 40-60% duplicate content would get a plagiarism/scraping flag. What can be an equally important factor is timing. Historical data relational elements play a role in weighting and as such deserve consideration.
In English? Don't post an article somewhere and later stick it on your site. Don't think you can get away with 'scraping'. I am, as many are, unsure what has changed since the 'Information retrieval based on historical data' abstract, but there is evidence correlating to historical document indexing and weighting that covers these areas.
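For what it's worth, here is a minimal sketch of how one might estimate that kind of duplicate-content percentage for a page, using word shingles (n-grams) and overlap between two texts. The function names, the shingle length, and the approach are mine for illustration only; nobody outside Google knows how (or whether) it actually computes such a figure, and the 40-60% range above is forum speculation, not a documented threshold.

```python
def shingles(text, n=5):
    """Return the set of n-word shingles (word n-grams) for a block of text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def duplicate_fraction(page_text, other_text, n=5):
    """Fraction of the page's shingles that also appear in the other text."""
    page = shingles(page_text, n)
    other = shingles(other_text, n)
    if not page:
        return 0.0
    return len(page & other) / len(page)


if __name__ == "__main__":
    # Hypothetical example: an article republished on your own site.
    original = "this article was first published on another site last year and covers widgets"
    my_page = "this article was first published on another site last year and covers widgets in depth"
    print(f"Duplicate fraction: {duplicate_fraction(my_page, original):.0%}")
```

Run against your user-submitted pages and the pages that outrank them, something like this at least gives you a rough, self-consistent way to see which of your pages are mostly borrowed text and which are mostly unique.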
As far as "is there a relation between pages going supplemental because of duplicate content and the ranking of the index of a site (which is unique text and content)" goes: all things affect rankings. Rankings are the end product of the algos. So certainly, if you have 'gone supplemental' or are being penalized on one page, it will affect the balance of the domain and its relatives (i.e., subdomains).