| 7:06 am on Oct 28, 2006 (gmt 0)|
There should only be one H1 on any page.
| 7:56 am on Oct 28, 2006 (gmt 0)|
If all that information is duplicated throughout the site, a quick fix would be to place it in just one document and then insert that document with an iframe. That would take away all the duplication.
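For example, here's a minimal sketch of that approach, assuming the shared block lives in a hypothetical /common-info.html (the file name is just for illustration):

  <!-- the shared text lives once, in /common-info.html -->
  <!-- every page that needs it just embeds that single document -->
  <iframe src="/common-info.html" width="100%" height="200"
          frameborder="0" scrolling="no" title="Shared site information">
  </iframe>

Since the engines generally treat iframe content as belonging to the framed URL rather than the parent page, the repeated block no longer sits in every page's own HTML.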
And yes, BDW's advice is good IMO -- go with anything but an H1 element if it's for all of those headings. I would probably just create a CSS class.
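Something like the sketch below is what I mean (the class name and headings are made up for illustration):

  <style type="text/css">
    /* styled to look like a heading, but carries no H1 weight */
    .section-label {
      font-size: 1.5em;
      font-weight: bold;
      margin: 0.5em 0;
    }
  </style>

  <h1>Blue Widgets</h1>   <!-- the one real H1 on the page -->
  <div class="section-label">Shipping Information</div>
  <div class="section-label">Returns Policy</div>

You keep the visual weight of a heading on every repeated block without handing the engines a page full of H1s.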
| 7:46 pm on Oct 28, 2006 (gmt 0)|
Is it just me, or does anyone else find the duplicate content filter overly aggressive and lacking intelligence as a process?
| 9:34 pm on Oct 28, 2006 (gmt 0)|
Maybe, but if it gets 90% of the spam that Google knows about in one hammer blow, then to them it is already effective.
| 10:01 pm on Oct 28, 2006 (gmt 0)|
This filter is far from perfect.
You can have a page about "blue widgets" with pages for variations of blue widgets linked off it. Meanwhile, another page about some other kind of widget may have a link to that blue widget page on it, and Google will rank that other page for the keyword "blue widgets" over the dedicated page.
I've seen this a number of times on various sites.
| 10:29 pm on Oct 28, 2006 (gmt 0)|
|Maybe, but if it gets 90% of the spam that Google knows about in one hammer blow, then to them it is already effective. |
Sure, and if it removes listings for duplicate content that wasn't intended to be spam, that's probably fine with Google, too. Think of the filter as a "clutter filter," not just a spam filter.
| 10:59 pm on Oct 28, 2006 (gmt 0)|
Although the dupe content filtering can frustrate me when I am responsible for ranking a site, as a searcher/end-user I think the result is a net improvement over a year or two ago. On most searches I do, there is definitely a greater variety of information available from the first page of results -- it's just not always MY information :(
| 1:14 am on Oct 29, 2006 (gmt 0)|
So then you would agree that if we syndicated a website's content in blogs, articles, classified listings, and across a network of sites we control, all linking back to a different network or a specific site, we would be able to remove any site from the SERPs, effectively destroying our competitors' rankings, possibly forever?
Which would mean that Google's statement that there is nothing a competitor can do to hurt your site is nothing more than a farce.
| 5:49 am on Oct 29, 2006 (gmt 0)|
Optimist, a duplicate-content filter isn't the same as a penalty.
| 6:21 am on Oct 29, 2006 (gmt 0)|
The effect can be the same if the site no longer ranks because the content is on another website.
| 3:06 am on Oct 30, 2006 (gmt 0)|
|Is it just me, or does anyone else find the duplicate content filter overly aggressive and lacking intelligence as a process? |
I agree, you summed up my feelings well.
One thing that bugs me about Google is this: Google does not like "over-optimizers" (people with knowledge of SEO), so Google prefers webmasters who don't try to influence the SERPs through SEO knowledge.
If Google prefers the webmaster who is uneducated SEO-wise, that's all well and good, but that webmaster is not going to know about the other pitfalls, such as duplicate content, and won't know they will hurt him.
It's as if no one can win, or maybe just the guys who know a lot about SEO but don't use too much of it. I guess part of being a great SEO is knowing how much is right and how much is too much.
Google does not make it easy for us. It's not fair that someone with a great site is penalized for not knowing about SEO, and someone else is penalized for using too much SEO.
I know Google has to deal with spam and so on, but sheesh, it's making things very hard.
| 3:38 am on Oct 30, 2006 (gmt 0)|
The user doesn't care if the guy who knows SEO is doing better than the guy who doesn't, or vice versa. As long as the user finds relevant results without two or three copies of the same text in a search for "widgets" or "slobovia" or "ingrown toenails," the user is a winner--and so is Google. Whether the result is "fair" for you or me or the Webmaster next door is beside the point.
| 6:26 am on Oct 30, 2006 (gmt 0)|
That would all be fine if the filter could actually determine the original creator of the content at least 9X% of the time, and then allow webmasters to submit registered copyrights to claim their content; that could save their legal team hours of dealing with DMCA complaints.
It should also give some leniency to large sites with multiple products that use similar dynamic content on their pages, yet where each page is for a REAL different product. And it would be nice if the canonical issue were not a trigger for duplicate content.
A third point is something like a change in currency: clicking a button causes a dynamic URL, a new page is seen, and voila, a duplicate, because the only difference is the pricing.
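To illustrate (the URLs and markup here are made up, not from any particular site), a currency switcher like this hands the crawler two URLs whose pages differ only in the price line:

  <!-- the same product page, reachable under two URLs -->
  <a href="/widgets/blue-widget.php?currency=USD">Show prices in USD</a>
  <a href="/widgets/blue-widget.php?currency=EUR">Show prices in EUR</a>

  <!-- everything else on the two versions is identical except this line -->
  <p class="price">Price: $49.00</p>   <!-- or: Price: &euro;39.00 -->

To the filter those look like two pages with near-identical content, even though the site only ever had one product page.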
The filter immediately decides a site is guilty of page spam by duplication, so let's get rid of these pages and maybe even send them supplemental.
So where's the Webmaster Tool for telling a site that its content has been duplicated? And how busy should we make their legal team with DMCA complaints if the content should NOT be on other sites? I feel that, aside from a DMCA complaint, there is little protection from Google against duplicates and infringers in some industries. While it may not be their concern, if they are going to hurt sites and pages for duplicates, they need to be more thorough with the parameters they use so everyone has a fair shot.