Forum Library, Charter, Moderators: Robert Charlton & aakk9999 & brotherhood of lan & goodroi

Google SEO News and Discussion Forum

DMCA... and how to treat copied content for Panda?
webstuck (5+ Year Member), Msg#: 4493555, posted 12:42 am on Sep 11, 2012 (gmt 0)

I'm new to filing DMCA requests and had a question for all of you experts.

In an effort to recover a site from Panda, we've run our site through Copyscape and found all of the other sites that have stolen our content over the years. Some sites have taken just snippets, while some have taken full articles, images and all. Some of these provide links back to us, though most don't. Which of these should I be filing DMCA requests against?
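For anyone curious what "finding copied content" looks like mechanically: Copyscape's actual algorithm isn't public, but a toy sketch of the word-shingle overlap idea behind this kind of tool (all function names here are made up for illustration) looks like this:

```python
# Toy sketch of duplicate-content detection via word shingles.
# This illustrates the general idea behind tools like Copyscape;
# it is NOT their actual algorithm.

def shingles(text, n=5):
    """Return the set of n-word shingles (overlapping word windows) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(original, suspect, n=5):
    """Fraction of the original's shingles that also appear in the suspect page."""
    ours, theirs = shingles(original, n), shingles(suspect, n)
    if not ours:
        return 0.0
    return len(ours & theirs) / len(ours)

original = "our article about filing dmca requests to recover from panda"
scraped = "our article about filing dmca requests to recover from panda plus a new intro"
print(round(overlap(original, scraped), 2))  # full copy with padding -> 1.0
```

A high score flags a page worth reviewing by hand before deciding whether it merits a DMCA request.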

Also, what about ehow.com? Some of their pages have copied some of our articles almost word for word. Some of those articles link to us and some do not. Should I be filing DMCAs against them (or other large sites like eHow), or does that tend not to help at all? What about the articles that copied us but provide a link? Is it beneficial to just leave those and keep the link back? And what if they mention us as a reference but don't provide a link?

Thanks!

 

diberry (WebmasterWorld Senior Member) posted 4:51 am on Sep 11, 2012 (gmt 0)

If it's just a snippet and maybe one image, with a link that makes it clear people should click through to the source (the way LifeHacker features sites), then I don't think a DMCA request will be successful. That practice is just too common, and I believe it's all considered "fair use" (you can look that up on Wikipedia for more info).

If they're using snippets WITHOUT links, a DMCA may work. I haven't really had that situation myself.

If they're copying full articles, with or without images, or using several images, the DMCA will work very well. I periodically issue these for a few of my pages that people keep stealing outright, and Google wipes the copies out of the index within days (two weeks, tops).

If they steal a whole page but leave a link to me, I still file a DMCA. That link is never going to send you traffic or count as a quality inbound link. They're just making money off your content.

As for eHow, I'd say give it a shot; you'll know within days whether it worked. Rewording is, sadly, enough to get around a DMCA, but how much rewording? It may be that Google requires the text to be exactly the same before they'll act on the DMCA, but there may be wiggle room if it's only slightly reworded. I think it's worth checking into. I'd also love to hear your results!

zeus (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 1:26 pm on Sep 11, 2012 (gmt 0)

Panda has nothing to do with duplicated content.

tedster (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 3:46 pm on Sep 11, 2012 (gmt 0)

I'm not convinced that Panda has "nothing" to do with duplicate content - it is a very complex algorithm.

Remember when Panda 1.0 was originally called the "Farmer Update"? The step that Google took the month before was called the Scraper Update - something considered necessary to make Farmer/Panda work out properly.

Rosalind (WebmasterWorld Senior Member, 10+ Year Member) posted 4:22 pm on Sep 11, 2012 (gmt 0)

eHow links are nofollowed, so they're worthless. There's also a meta tag you can use to stop them scraping your content; I've just implemented it myself, so I've yet to see if it's any use.

<meta name="ehow" content="noclip" />

zeus (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 7:53 pm on Sep 11, 2012 (gmt 0)

tedster - the first thing I worked on to beat Panda was duplicated content, but then Matt Cutts said that Panda has nothing to do with duplicated content.

claaarky (5+ Year Member) posted 8:17 pm on Sep 11, 2012 (gmt 0)

I'm fairly confident duplicate content (as in content that also exists on other sites) is a major factor in Panda, but having duplicate content doesn't necessarily mean you'll suffer from Panda.

Getting scraped content taken down and/or asserting ownership is essential for Panda sufferers, I believe, especially if your content is being used by well-presented, well-known sites like eHow.

tedster (WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member) posted 8:49 pm on Sep 11, 2012 (gmt 0)

the first thing I worked on to beat Panda was duplicated content, but then Matt Cutts said that Panda has nothing to do with duplicated content.

I know - but that doesn't mean something isn't going on that Matt couldn't foresee. For example, he said there was no Sandbox at first, but later discovered a complex interaction in the algorithm that was creating the Sandbox effect.

In this case, I know someone who did a lot of Panda analysis, and they regularly found a higher level of scraped content for Pandalyzed sites than for sites that weren't affected. That's just a correlation, of course, not proof of cause and effect. But it was enough for me to take a look, file some DMCAs, and work on rebuilding trust and authority for an affected site. And it did recover.

Again, I can't PROVE anything here - it's only one example, and something else might have done the trick. But with any complex system like Panda, people can't always predict all the effects that will emerge from the complexity. Panda certainly hurt more than content farms, whether that was intended or not.

Sgt_Kickaxe (WebmasterWorld Senior Member, Top Contributor of All Time) posted 2:53 am on Sep 12, 2012 (gmt 0)

Some companies, especially on YouTube, are using automated bots to crawl sites/videos and find duplicate content, which then causes the bot to report the URL to Google.

You can catch a bunch of them by downloading a Wikimedia file of a NASA rocket launch, for example, and uploading it to your YouTube account. The Wikimedia/NASA video is not copyrightable on several levels. The bots from news agencies will then trip all over the footage, wrongly thinking they own the copyright because they published parts of the video earlier in their news reports.

Unfortunately, that means using any part of even permitted public content may yield DMCA complaints against you, and that's not a good thing.

I'd recommend doing it manually and targeting repeat offenders and the most blatant companies (yes, companies scrape more than individuals nowadays). Slap your site logo on any image/video/Flash files.
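If you're triaging manually, it can help to tally which domains show up most often in whatever list of copied URLs you've collected. A small sketch, assuming a hypothetical list of (your_url, infringing_url) pairs such as you might export from a plagiarism checker:

```python
# Toy helper: count infringing URLs per domain so repeat offenders can be
# prioritized for DMCA filings. The row format here is hypothetical.
from collections import Counter
from urllib.parse import urlparse

def repeat_offenders(rows, min_hits=2):
    """Return (domain, count) pairs for domains hit at least min_hits times,
    most frequent first."""
    counts = Counter(urlparse(infringing).netloc for _, infringing in rows)
    return [(d, n) for d, n in counts.most_common() if n >= min_hits]

rows = [
    ("https://example.com/a", "https://scraper-one.test/a"),
    ("https://example.com/b", "https://scraper-one.test/b"),
    ("https://example.com/c", "https://other.test/c"),
]
print(repeat_offenders(rows))  # -> [('scraper-one.test', 2)]
```

Filing one complaint that lists many URLs from the same offending domain is generally less work than filing page by page.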

All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved