|How to tell Google that I should rank instead of them? |
Submit a DMCA and get the stolen content removed from Google's index.
1. A DMCA takedown is likely no longer an option. The OP sold his article to the thief for the price of a do-follow link from a scraper site. The thief now has a legitimate right to use the content, and filing a false DMCA report carries significant penalties. At this point the OP may need to speak with an attorney if they seek a legal remedy.
2. The only way to outrank scrapers is to obtain better inbound links.
3. Sometimes scrapers rank for (longtail) snippets from an article without attracting traffic for actual, real-world keyword phrases. The scraper may not actually be outranking the OP; the search engine is doing what it's supposed to do, returning results for an oddball query.
Rewrite the article.
Good idea. Add more content and multimedia and make your version better than the stolen version.
|The OP sold his article to the thief |
@martinibuster is right. I think you might struggle with DMCA now that you've "entered into an agreement" with a scraper site.
What you can do is try to identify a dozen other sources the scraper steals content from and get in touch with the webmasters of those sites.
If they all submit DMCAs, there's always a chance the site is pulled down altogether.
I would also suggest using the google author or google publisher tags... see if you can get them to use it on their copy of your page as well.
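For reference, the markup being suggested looks roughly like this. It's a sketch of the rel-author/rel-publisher link pattern Google documented at the time; the Google+ profile URLs below are placeholders, not real profiles:

```html
<!-- In the <head> of the article page: point to the author's and
     the site's Google+ profiles (placeholder URLs) -->
<link rel="author" href="https://plus.google.com/1234567890/posts">
<link rel="publisher" href="https://plus.google.com/0987654321/">
```

Getting the scraper to carry the same tags pointing at *your* profiles, as suggested above, would be the hard part.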
Just to be a stickler for the little details, oldfriend, is the keyword phrase you are referring to your site's primary keyword phrase?
|I still see them at No. 5 spot for the keyword phrase and my site is not in the SERPs at ALL |
To see how your individual article is ranking against the copied URL, you need to search for various parts of the article: the exact title (if the scraper copied that too), or a snippet of the text wrapped in quotation marks. You can't do a keyword search and see where your index page ranks against theirs when the problem is as you describe it; you need to compare page by page, and I didn't get the impression that's what you did. Forgive me if I am wrong.
|I would also suggest using the google author or google publisher tags... see if you can get them to use it on their copy of your page as well. |
If Google can't figure out the source of the information, then adding more Google BS probably won't fix the issue.
If you can affirmatively state that adding author or publisher tags will solve the issue, then so state. Otherwise, you're just saying drink the koolaid.
I am ranking for various sentences from the copied article, and I am mostly the first result, with the scraper ranking lower than me.
The keyword phrase is not my primary one, because I write evergreen content and every article targets different keywords. Someone suggested rewriting the article, but it is already well written, and why should I spend more time on it when Google can't figure it out itself? Maybe Googlebot hasn't checked that article on the scraper's page and hasn't updated its index?
|I am ranking for various sentences from the copied article... |
That's what I was referring to, random sentences/snippets. That doesn't matter so much as coherent phrases that a searcher may use. As long as you can still rank for those then the search engines are on top of identifying who the original author is.
Remember, whoever has the most internal PageRank obtains the credit for the content. I'm not certain how much importance being first to publish is given. Whatever the case is, whoever wins obtains all the PageRank from the scrapers. It's double-edged and could work against you, which is why in my original post I advised you to obtain more credibility than the other site with more and better inbound links.
Don't get caught with unnatural links. That'll worsen your situation. You have what you say is expert content. Good. Now go out and tell others about it so they link to it.
OldFriend, I'm having the same issue as you on an old site.
I just happened to check one of my page titles to see where the article was ranking, and I noticed a title that looked identical to mine (because it was). It turned out to be a complete copy of my site's content. This has been going on for a while; I just happened to catch it now. Unlike OldFriend, I'm fighting to take the pages down, and hopefully the site too, since they have basically scraped a few sites' content.
Here is how I'm searching for my articles and finding the copied content.
Use a quoted search
"Your Title Here to get the best results"
then you might have to click the link at the bottom that says show X amount of related pages.
Use a quoted search with your domain name minus .com
"Your Title Here to get the best results" domainname
What actually confuses me is that this site has done everything you're not supposed to do, or at least everything these updates are supposed to block:
- They bought an expired domain name with incoming links, for a totally different site theme.
- They scraped complete articles with the same titles, from multiple sites.
- They rank higher for the quoted title search than the original article.
- What's even worse, it's not like these articles were just published; they are a few years old. And we are getting outranked, and in most cases it looks like the credit goes to them.
I'm looking into some of the ideas tedster mentioned.
What is interesting is that a few years ago I set up an alert that posts to Twitter for the forum, but not the articles.
Which in a sense gives you a confirmation from them saying this article was published on X date from this site.
We also publish an RSS feed, which might be an issue; I think tedster mentioned that in another post.
@oldfriend do you have RSS or XML feeds people can use?
|I advised you to obtain more credibility than the other site with more and better inbound links. |
|Don't get caught with unnatural links. |
LOL, what exactly is the advice here? I can't believe you actually wrote these two sentences one after another :)
@OP - the scraper does not outrank you because of your article. And you did the right thing - you got a dofollow link and an admission by the scraper, "in the SE's eyes," that the article belongs to you. This is the best and most beneficial way to deal with this.
If your article is ranking nowhere - you have been dealt a penalty, period. It could be any of the millions of penalties, but since your article has 164 links from other websites, you most likely got an unnatural links penalty.
|I'm looking into some of the ideas tedster mentioned. |
Regarding tedster's advice on this, and he was perhaps the most active here in tackling this particular problem, here's his summary of a successful combination of approaches he used, in this June 2013 thread....
Put Up or Shut Up - Share your best tip for better Google rankings
|Many here have talked about being outranked by scraper sites - even on sites that have a long history of ranking very well. I worked with such a site that ran into the scraper issue when Panda 1.0 first rolled out. |
What to do? It seemed to me that somehow, the original site had lost its authority despite a powerful backlink profile. How to fix that? What we did was:
1. Use PuSH (PubSubHubbub) technology [code.google.com] to let Google know immediately when new content is published.
2. Delay the existing RSS feed until Google had grabbed the new content.
3. Set up authorship authentication through Google+.
Note that this site uses a stable of ten well-respected writers, and it seemed that this could only help. It did. Within a few weeks, rankings came back and then just began to climb and climb.
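For anyone curious what step 1 of that recipe looks like in practice: a PuSH/PubSubHubbub publish ping is just an HTTP POST to the hub announcing that a topic URL (usually your feed) has new content. A minimal sketch, with the hub and feed URLs as placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_publish_ping(hub_url: str, topic_url: str) -> Request:
    """Build the POST request that notifies a PuSH/PubSubHubbub hub
    that `topic_url` has fresh content, per the protocol's
    hub.mode=publish form encoding."""
    body = urlencode({"hub.mode": "publish", "hub.url": topic_url}).encode()
    return Request(
        hub_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# Placeholder URLs -- substitute your own hub and feed.
req = build_publish_ping("https://pubsubhubbub.appspot.com/",
                         "https://example.com/feed.xml")
# urllib.request.urlopen(req) would actually send the ping.
```

Subscribers (including Google, at the time) then fetch the updated feed from the hub immediately instead of waiting for their next crawl.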
|@atlrus If your article is ranking nowhere - you have been dealt a penalty, |
That's actually the best information I have found on this topic so far. Now why did his site get a hit?
Here is something I'm looking into as to why scrapers are ranking over us.
I think many of us who don't pay attention to cheating the SERPs are getting hammered because of issues the SERPs have created themselves.
For example, recently "g" stated they are dropping the RSS reader program, and I have to ask why. Originally it was designed for people to get updates in a browser, but people took feeds and pushed them onto their sites to get updates from larger sites.
This was win-win: the smaller site gets added-value updated content, and the larger site gets more visibility for its feeds.
1) The problem is RSS feeds create hundreds of pages of duplicate content, using the same title and first sentences of content.
*I have offered a feed for years, and will continue to do so, because it offers another source of traffic outside of the SERPs.
2) Years ago people like myself created a "link to us" page asking for links. The true way to earn a link.
The Google problem it created is that we now have many sites linking to us with our selected terms in those link-to-us links. So we get hit. I discovered my two main keywords are sitting at #1 on page 2 (a penalty, I suppose?).
So how does dropping Google Reader affect that?
Feeds have turned into a can of worms. Sites that post RSS feeds onto a page via PHP, which then adds the links from the feed, create instant links to any new pages. I'm willing to bet a lot of the links to this forum come from people posting its RSS feed to their pages. So any new forum post instantly has hundreds if not thousands of links.
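Tedster's step 2 earlier in the thread (delay the public feed until the search engine has grabbed the canonical copy) can be approximated with a simple filter on the feed items. A minimal sketch, assuming each item carries an RFC 2822 `pubDate` string as RSS does; the item data here is made up for illustration:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def delayed_items(items, delay_hours=24, now=None):
    """Return only the feed items whose pubDate is at least
    `delay_hours` old, so scrapers consuming the feed see
    new posts late."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=delay_hours)
    return [it for it in items
            if parsedate_to_datetime(it["pubDate"]) <= cutoff]

items = [
    {"title": "Old post", "pubDate": "Mon, 01 Jan 2024 00:00:00 +0000"},
    {"title": "Fresh post", "pubDate": "Tue, 02 Jan 2024 12:00:00 +0000"},
]
# With "now" pinned to Jan 2 2024, 18:00 UTC and a 24-hour delay,
# only the older post makes it into the published feed.
fixed_now = datetime(2024, 1, 2, 18, 0, tzinfo=timezone.utc)
print([it["title"] for it in delayed_items(items, 24, fixed_now)])
```

Pair this with an immediate PuSH ping to the hub, and the search engine sees your new page long before any feed-scraping site does.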