Ok so I know duplicate content is penalized. Let's say I have site A and site B on two different URLs, two different hosts, two different IP addresses. I have an article on site A, and on site B I have the same article with a few extra words or a few words substituted.
Now, does a search engine bot pick up on this? Or does duplicate content only count when it's a word-for-word match?
These were granted over two years ago, and filed with the USPTO more than five years ago, so it is possible that, if Google ever used them, it has since moved on. Regardless, they both describe methods that look like they would be effective.
Hi, my search engine notes duplicate content as it indexes a website; the results show where the content is duplicated and how close the match is. The spider does this as the results come back to the search engine.

We often get pages of duplicate results. Some are where a site has similar content (a 75% content match), but some are just duplicate content hosted on a related website (the owner's name or details are the same). Quite often the domain names are similar, or versions of the same domain, like a dot-net and a dot-com site.

One webmaster who shall remain nameless registered ten domains with similar names, all hosted on the same server. The content was identical right down to the file sizes and dates; the only difference was the links pages (they all pointed to themselves). YAWN! Thing is, if we were to index them all, they would still score the same and show in the same results, so does he expect the visitor to go to all his sites, or just some of them?
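To answer the original question: a spider doesn't need a word-for-word match to flag duplicates. One common near-duplicate technique (a sketch only, not necessarily what any particular engine or the patents above use) is to break each page into overlapping word shingles and compute the Jaccard similarity of the two sets. The sample texts and the shingle size `k=3` here are made up for illustration:

```python
def shingles(text, k=3):
    """Break text into overlapping k-word shingles (k=3 is an arbitrary choice)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of two documents' shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog near the river"
tweaked  = "the quick brown fox leaps over the lazy dog near the river"

# A single substituted word only disturbs the k shingles that contain it,
# so the pages still score as a strong match.
print(f"{similarity(original, tweaked):.0%} content match")  # → 54% content match
```

Swapping or adding a few words, as described in the question, knocks out only a handful of shingles, so two such pages still score as close matches rather than as distinct documents. That is broadly how an engine can report something like a "75% content match" without the pages being identical.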