If you see a scraper URL outranking the original source of content in Google, please tell us about it:
Authority and all that. spammy.biz, registered 3 months ago, scraping and outranking authoritydomain.com, registered in 1998, is a typical situation. I'm not just highlighting age here; there are many other factors that Google strangely overlooks and ignores (or, more bizarrely, can't recognise).

There's a problem with domain names that a lot of people, including those at Google, miss. Registration dates are no longer an accurate indication of when a domain was last registered. Domains are auctioned and transferred before they are deleted, which means they never drop and effectively keep their original registration date.
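For anyone who wants to see this for themselves, here is a minimal sketch using the third-party python-whois package (pip install python-whois). The domain names are just the hypothetical examples from the post above, and the caveat is the whole point: an old creation date does not prove the current owner has held the domain since then.

```python
# Minimal sketch, assuming the third-party python-whois package.
# Domain names are hypothetical examples from the discussion above.
import whois

def compare_registration_dates(original, suspect):
    """Print WHOIS creation dates for two domains.

    Caveat: a domain bought at auction before it drops keeps its
    original creation date, so an old date says nothing about
    whether ownership has changed hands.
    """
    for domain in (original, suspect):
        record = whois.whois(domain)
        created = record.creation_date
        # Some registries return a list of dates; take the earliest.
        if isinstance(created, list):
            created = min(created)
        print(f"{domain}: registered {created}")

compare_registration_dates("authoritydomain.com", "spammy.biz")
```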
Google Authorship is in its infancy. Check back in a year or two or three. People still have amazing faith in the abilities of the people at Google. :) They cannot even solve what I consider simple problems (detecting and dealing with hacked or compromised sites carrying dodgy links), so I guess I'm a bit more cynical about how they will deal with scrapers.
Confirm your site is following our Webmaster Guidelines and is not affected by manual actions.
mcc, are you going to share your answer to their problem, or perhaps monetise it and offer it that way? Perhaps if they're trying and failing, they could solve the problem by giving you lots of cash in exchange for the answer. Now that would be a display of real genius by Google, but I fear they suffer from "Not Invented Here" syndrome. :) This might sound a bit like Fermat's Last Theorem, but I think this is not the thread for the explanation.
Does this mean that if you create great original content, but don't follow the guidelines and/or get a manual penalty, then even if Google knows that it's your work, they still might feel justified in letting scrapers get the benefits?
Assuming it is feasible from a cost standpoint, how does that afford any protection against existing scrapers?
It might offer some legal validity as evidence against future scrapers, but not past ones.
Another tip: as a site gets larger, we move from shared to dedicated IPs. My scraper is on a shared IP with 30+ sites. My site is on one server with one IP dedicated to the site. REALLY? You can't tell who is stealing whose content?
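To make that comparison concrete, here is a minimal standard-library sketch. The domain names are hypothetical placeholders, and actually counting how many other hostnames share the scraper's IP would need a reverse-IP lookup service, which the standard library doesn't provide.

```python
# Minimal sketch of the IP comparison described above, standard
# library only. Domain names are hypothetical placeholders.
import socket

def resolve_ipv4(domain):
    """Return the set of IPv4 addresses a domain resolves to."""
    try:
        return {info[4][0]
                for info in socket.getaddrinfo(domain, 80, socket.AF_INET)}
    except socket.gaierror:
        return set()

original_ips = resolve_ipv4("example-original-site.com")
scraper_ips = resolve_ipv4("example-scraper-site.com")

print("Original site IPs:", original_ips)
print("Scraper site IPs: ", scraper_ips)

# A site alone on a dedicated IP vs. a scraper on a shared IP with
# 30+ co-hosted sites is the signal the post is pointing at; a full
# check would feed the scraper's IP to a reverse-IP lookup service
# to count the hostnames sharing it.
```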