I have a great idea for Google’s algorithm architects. More than 65,000 scraper sites have copied content from my website alone (and probably from yours as well), and they often rank higher than you do for searches of your own content. So here is my suggestion for Google’s algorithm.
Maybe Matt Cutts can pass this suggestion over the wall to his coworkers:
BASE THE RANKING ON AGE, goofballs!
They don’t seem to do that now!
Use an automated WHOIS lookup to see which site has been around the longest.
If you come across two sites with duplicate content, give the ranking to the site that has been online the longest, and drop the second site from the index.
Period. End of story.
Further enhancement: delete any site that looks like a search engine results page.
Further enhancement: delete any site with a ton of keywords stuffed at the bottom of the page.
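That second enhancement could be a dead-simple heuristic. Here’s a rough sketch of what I mean; the thresholds and the "look at the last 20% of the page" rule are just illustrative guesses on my part, not anything Google actually uses:

```python
from collections import Counter

def looks_stuffed(page_text, tail_fraction=0.2, max_repeat_ratio=0.4):
    """Crude keyword-stuffing check: if the tail end of a page is
    dominated by one repeated word, flag it. Thresholds are
    illustrative guesses, not tuned values."""
    words = page_text.lower().split()
    if not words:
        return False
    # Look only at the last chunk of the page, where stuffers
    # typically dump their keyword lists.
    tail = words[-max(1, int(len(words) * tail_fraction)):]
    most_common_count = Counter(tail).most_common(1)[0][1]
    return most_common_count / len(tail) > max_repeat_ratio

stuffed = "Welcome to my site. " * 5 + "cheap diamonds " * 50
normal = ("This is an ordinary article about bridal tips and diamond "
          "buying with varied vocabulary throughout the document.")
print(looks_stuffed(stuffed))  # True
print(looks_stuffed(normal))   # False
```

A real crawler would obviously need smarter signals (hidden text, anchor spam, and so on), but even something this blunt would catch the worst offenders.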
Crawl Jeff’s original bridal tips and diamond buying guide site.
Crawl the scammer’s scraper site.
Find the duplicate content (which the scraper site stole from our site).
Perform a WHOIS lookup on both sites.
Jeff’s site online: 8 years. Scammer site online: 8 days.
Result: scammer site dropped from the index, and its URL sandboxed.
Jeff’s site: Rank = 1, PR = 7.
End of Algorithm
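The whole "algorithm" above fits in a few lines of code. Here’s a sketch; the domain names are made up, and the WHOIS lookup is stubbed out with fake dates, since a real version would query an actual WHOIS client rather than a hardcoded table:

```python
from datetime import date

# Hypothetical WHOIS data standing in for real lookups.
# A production version would call a real WHOIS service here.
FAKE_WHOIS = {
    "jeffs-bridal-tips.example": date(1998, 1, 15),  # online ~8 years
    "scraper-scam.example": date(2006, 1, 7),        # online ~8 days
}

def whois_creation_date(domain):
    """Stub WHOIS lookup; swap in a real client in practice."""
    return FAKE_WHOIS[domain]

def rank_duplicates(site_a, site_b):
    """Given two sites with duplicate content, keep the site with
    the older WHOIS creation date in the index and sandbox the
    newer one."""
    older, newer = sorted([site_a, site_b], key=whois_creation_date)
    return {"indexed": older, "sandboxed": newer}

result = rank_duplicates("jeffs-bridal-tips.example",
                         "scraper-scam.example")
print(result["indexed"])    # the 8-year-old site keeps its ranking
print(result["sandboxed"])  # the 8-day-old scraper gets sandboxed
```

Of course, a scraper who buys an aged domain would slip past a check this naive, which is presumably why Google treats domain age as one signal among many rather than the whole ballgame.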