engine - 3:24 pm on Jun 9, 2008 (gmt 0)
In an interesting article, Sven Naumann from Google's Search Quality Team helps clarify the issues surrounding duplicate content.
How Google Addresses Duplicate Content Due To Scrapers [googlewebmastercentral.blogspot.com]
Before diving in, I'd like to briefly touch on a concern webmasters often voice: in most cases a webmaster has no influence on third parties that scrape and redistribute content without the webmaster's consent. We realize that this is not the fault of the affected webmaster, which in turn means that identical content showing up on several sites in itself is not inherently regarded as a violation of our webmaster guidelines. This simply leads to further processes with the intent of determining the original source of the content—something Google is quite good at, as in most cases the original content can be correctly identified, resulting in no negative effects for the site that originated the content.
There are a couple of scenarios in the piece.
Generally, we can differentiate between two major scenarios for issues related to duplicate content: