| 4:38 pm on May 8, 2006 (gmt 0)|
Please define "original content". There must be hundreds of copies of, say, the Book of Genesis or Shakespeare's Hamlet out there. Which one is the original site? (That's a rhetorical question.) Don't believe the hype.
| 4:40 pm on May 8, 2006 (gmt 0)|
There is no penalty. There is a filter.
| 5:40 pm on May 8, 2006 (gmt 0)|
The question then is: who gets "filtered" and who does not?
| 5:42 pm on May 8, 2006 (gmt 0)|
This is a good question. At times it appears the filter is applied more heavily within a site and its own pages than it is across duplicate domains/sites....
| 7:24 pm on May 8, 2006 (gmt 0)|
I would concur, soapy. I only see heavy dupe filters applied internally within sites, not between competing sites.
I see the exact same article ranking 1, 2, 3, 4 and 5 in Google in the genre I'm in, for a medium- to heavy-competition keyword phrase.
| 7:36 pm on May 8, 2006 (gmt 0)|
There is a filter all right, and it can drop many pages into the Supplemental Results, or back to URL-only entries, or right out of the index.
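For anyone wondering how a duplicate filter like this might work under the hood: Google has never published its method, but a common textbook technique for near-duplicate detection is word shingling with Jaccard similarity. The sketch below is purely illustrative, with made-up thresholds and example text; it is not a claim about Google's actual algorithm.

```python
# Illustrative only: Google's real duplicate filter is not public.
# This sketches one standard near-duplicate technique: k-word shingles
# compared with Jaccard similarity, flagged above an arbitrary threshold.

def shingles(text, k=4):
    """Return the set of k-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets, from 0.0 to 1.0."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_duplicate(page_a, page_b, threshold=0.8):
    """Flag two pages as near-duplicates if shingle overlap is high."""
    return jaccard(shingles(page_a), shingles(page_b)) >= threshold

# Two pages differing in a single word share most of their shingles.
original = "the quick brown fox jumps over the lazy dog near the river bank"
scraped  = "the quick brown fox jumps over the lazy dog near the river bend"
print(is_duplicate(original, scraped))  # True: high overlap
```

Note that nothing in a similarity score like this says which copy is the original; that is exactly why the question of who gets filtered is so contentious.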
| 7:48 pm on May 8, 2006 (gmt 0)|
Whoever duplicates the content the most seems to be the winner. Usually these are companies in non-US locations that infringe on copyrights, since many non-US companies neither respect copyrights nor write proper English.
If the content is syndicated by the copiers and linked back to their sites, it can eliminate the original owner.
The filters are pretty archaic, meaning they operate on poor principles and usually lose sight of the original owners of the content, which Archive.org usually has on record accurately.
There is no way to protect your content in Google, since it uses automated penalties.
This also seems to be the anti-article filter. Since Matt Cutts posted on his blog that articles are good links, there is now a multitude of low-quality content and article spam, with tons of content where it's questionable who the owner even is.
Solution: devalue article websites, and use archive.org as a priority signal when the content is old. There is no solution for newer content.
| 1:30 am on May 9, 2006 (gmt 0)|
I see this problem: if someone has a good site (good rankings, good content, etc.), he can copy it to many other domain names. Then imagine: when you search for specific keywords, the first SERP is only his sites, and competitors have no chance to get visitors from the SEs. This is not just imagination: my main competitor has four sites with different domain names and almost the same content on the first SERP, and two more on the second SERP, when searching for our keywords. Very difficult for me!
| 2:00 am on May 9, 2006 (gmt 0)|
It should be regarded as duplicate content, but it isn't. I see that all the time. It goes far beyond four results in the first ten - for certain searches, 30 of the first 40 results are all the same content. It's the same in both G and Y.
|Whoever duplicates the content the most seems to be the winner. Usually these are companies in non-US locations that infringe on copyrights, since many non-US companies neither respect copyrights nor write proper English. |
No scrapers or spammers in the US? That's a relief. And they all speak and write proper English there, do they? You learn something new every day...