mhansen - 7:02 pm on Aug 9, 2010 (gmt 0)
2) Why does Google (or any SE) not establish some kind of trust mechanism so that, when in doubt, site A (an established, good site, running for years, often updated) outranks site Z - a new scraper site hosted in China - whenever there is duplicate content?
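To make the idea concrete, here is a toy sketch of the kind of tie-breaker I mean: when two URLs carry the same content, prefer the one with the longer, cleaner history. The signals, weights, and names below are purely illustrative assumptions, not anything Google has published.

```python
# Toy illustration only: a made-up tie-breaker for duplicate content.
# Signals and weights are assumptions, not Google's actual algorithm.
from dataclasses import dataclass

@dataclass
class Site:
    domain: str
    age_days: int           # how long the domain has been indexed
    updates_last_year: int   # rough measure of ongoing maintenance
    spam_flags: int          # known scraper/spam reports against the domain

def trust_score(site: Site) -> float:
    """Crude trust heuristic: reward age and activity, punish spam flags."""
    return site.age_days / 365.0 + 0.1 * site.updates_last_year - 5.0 * site.spam_flags

def pick_original(a: Site, b: Site) -> Site:
    """Given two sites hosting identical content, prefer the more trusted one."""
    return a if trust_score(a) >= trust_score(b) else b

site_a = Site("established-site.example", age_days=3650, updates_last_year=120, spam_flags=0)
site_z = Site("new-scraper.example", age_days=30, updates_last_year=0, spam_flags=2)
print(pick_original(site_a, site_z).domain)  # established-site.example
```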
I really think that part of the problem we face as webmasters today is that almost ALL of our sites DO HAVE HISTORY with Google, and whether we admit it or not, what we did yesterday "to rank better" - squeaky clean and cutting edge at the time - is now considered questionable and often penalized, putting us at a disadvantage to begin with. (Link directories, article marketing, long-tail targeting, link exchanges, etc.)
The REAL ISSUE with scraper sites is the ones that are 90+ days old and still outrank you, or are even in the engine at all for that matter! If you have those kinds of aged scrapers outranking you in the SERPs, you need to fix the in-house or on-site issues as well as manage the off-site scraped content.
If Google continues to keep scrapers in their SERPs after they KNOW the content is copied - and let's face it, they DO know it - well then they suck! There, I said it! :-)
Just my .02