Forum Moderators: Robert Charlton & goodroi
I still made sure users knew they were still in that category, but I did remove the description. The subsequent pages will always have unique content, so why repeat the description from the main page there?
How about petitioning for them to either give us precise definitions or provide "how-to-do-it-without-incurring-the-wrath" examples for common situations like photo galleries where one photo can end up in several of them, multiple post categories in blogs, etc?
<h2>My Category</h2>
<p>This is a description of my category</p>
<h2 class="category">My Category</h2>
<p class="description">This is a description of my category</p>
<h2 class="cat">My Category</h2>
<p class="description">This is a description of my category</p>
The first one is an h2/p tag combination. Sure, you know it's a category and description, but how is Google supposed to know that from this, or from any h2/p combination for that matter?
In the second case, do you really believe that Google can perfectly derive semantic meaning based on class/id alone?
In the third case, even if Google did make some decision about id/class meaning, how do you know they won't interpret "cat" as a description of a cat?
So can you blame them? It's not their fault. It's not yours. What's wrong is the markup language and the lack of any agreed-upon way to express what you mean. HTML isn't it.
If you cannot alter this [which would defeat the purpose of pagination], then G will likely pick the strongest pages, i.e. those with the most PR and IBLs.
I find it appalling that we are forced to sacrifice user experience in order to satisfy search engines' ambiguous (and, to the best of my knowledge, unpublished) definitions of "duplicate content".
You're not, so don't get riled!
The more content a page has on it that is reproduced on other pages in that site or others, and the less original content it has on it, the less important it will seem to Google to feature that page or feature it prominently. With limited space, they have to try to determine 'quality' algorithmically. As g1msd said, think about it from their perspective.
If it's one paragraph and you have tons of unique content underneath then why worry?
And who said you can't have it 'on the page' as far as humans go?
Use an iframe or JavaScript.
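As a sketch of the JavaScript approach (the element id and the wording here are made-up examples, not anything from this thread): the shared description starts out empty in the HTML that crawlers fetch and is filled in client-side, so human visitors still see it on the page.

```html
<!-- Hypothetical sketch: the container is empty in the served HTML,
     and the shared blurb is injected after the page loads. -->
<div id="category-description"></div>
<script type="text/javascript">
  // Visible to users, absent from the indexed markup --
  // this relies on the crawler not executing scripts.
  document.getElementById('category-description').innerHTML =
    'This is a description of my category';
</script>
```

The iframe variant is similar: serve the repeated blurb from a separate URL in an iframe, so it isn't part of the page's own indexed content.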
The more I think about duplicate content though, the more I'm not sure where to draw the line. My logo, navigation bar, and menu structure all appear on every page. Is that hurting me? The same type of question can be asked of many of the elements on all category pages, all product pages, etc.
Here's where good semantic mark-up can be an excellent help to a site's rankings, IMO. If each "section" of a page is marked up within a container element of some kind, that makes this algorithmic job of finding the content section much easier -- and consequently the site is less prone to taking on collateral damage.
It's duplicated content within three major spots -- the content section, title element, and meta description -- that can cause trouble. By definition, these elements are "supposed to be" both unique and specific to the url where they appear.
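To illustrate the container idea (the id values below are just illustrative conventions, not anything Google has published): boilerplate and unique material each live in their own labelled container, and the three "supposed to be unique" elements are specific to the URL.

```html
<!-- Hypothetical sketch of a page with clearly separated sections -->
<html>
<head>
  <title>Unique title for this URL</title>
  <meta name="description" content="Unique description for this URL">
</head>
<body>
  <div id="header">Logo and site-wide navigation (repeated on every page)</div>
  <div id="content">
    <!-- The unique, URL-specific material lives here -->
    <h1>Page-specific heading</h1>
    <p>Original copy that appears nowhere else on the site.</p>
  </div>
  <div id="footer">Repeated footer links</div>
</body>
</html>
```

The point is only that consistent containers make it easier for an algorithm to isolate the content section from the repeated chrome; which id names you use is your choice.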
What you're saying makes sense, but have you seen fairly concrete evidence of it?
One of the main goals for my HTML and CSS has been to have as little of it on the page as possible. Creating extra container divs would be going in the other direction, but maybe it would be helpful in this case.
You may have read the often-repeated opinion that a backlink from within the content section is weighted more heavily than other links. This is one widely discussed bit of evidence that the algorithm also looks at WHERE a link appears on the page, not just THAT it appears. The recently discussed issues with "footer links" fall into this same territory.