Forum Moderators: Robert Charlton & goodroi
How does Google tell the difference between legitimate article usage and scraper sites?
Or do they?
So I am curious about this scraper site problem. I don't know how they could tell the difference.
I've never had a problem with duplication of content. And my suspicion is that all of those one-way IB links from my articles on other sites are helping me in the SERPs.
I have been using (on my banned site) the meta description from various links as the description. But I don't use the entire meta if it exceeds 256 characters; I edit long descriptions down. On many pages, though not all, I add my own comment to the link's description. So no more than 256 characters from any one site appears on any of my pages. Actually my pages are superior to any Google result page for that particular keyword--that may be the problem.
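A minimal sketch of the trimming described above: cap a source site's meta description at 256 characters (cutting at a word boundary), optionally appending your own comment. Function and variable names here are illustrative, not from the post.

```python
MAX_LEN = 256  # the poster's self-imposed per-source limit

def trim_description(meta_description: str, comment: str = "") -> str:
    """Return at most MAX_LEN characters of the source description,
    optionally followed by the page owner's own comment."""
    text = meta_description.strip()
    if len(text) > MAX_LEN:
        # Cut at the last space before the limit so we don't split a word.
        cut = text.rfind(" ", 0, MAX_LEN)
        text = text[:cut if cut > 0 else MAX_LEN].rstrip()
    return f"{text} {comment}".strip() if comment else text
```

Whether this kind of per-source cap actually avoids a duplicate-content penalty is, as the thread shows, an open question.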
Now I've always felt this is fair use, because the sites themselves provide these descriptions for others to use. But no, it seems it counts as dup content. I have never received a complaint from any website for using their description, and I was getting 8,000 visitors/day before the ban.
Now I am STILL getting a lot of return and unreferred visitors in spite of the ban, so I can't eliminate these thousands of descriptions and make the site useless just to please the picky Google engineers.
I am reconstituting the site, but I don't expect to be back in GOOG with any substantial number of pages for a year or more... Fortunately I don't really need all those thousands of extra AdSense dollars (it was nice though...)