Having the same META DESCRIPTION tag on more than one page will push all except one (the highest-level one) to the omitted results for being "too similar"... completely regardless of the pages' content, title, size and whatnot.
I've checked, and a single word of difference seemed enough for Google to decide the page isn't that similar after all. Not that I've experimented with it deliberately; it was simply obvious once I checked the results.
Furthermore, it's not some dreadful flag, for the pages ARE indexed in Google, and if you do a search that's relevant only to such a "lower-level" omitted page... it does come up. It comes up when the higher-level page is no longer relevant.
I'm puzzled: just what IS dupe content? The duplicate-content filter apparently can't see through different surrounding code, yet a single word of difference is enough to fool it. I did a search on an essay I wrote for a site some time ago... and found it on three different corporate websites as their "mission" or their about-us page. (No kidding. Talk about lack of self-awareness or creativity.) My text had basically been copy/pasted into a page with some different code around it, and from that point on it apparently wasn't dupe content. Their PR wasn't 0, and they were indexed in Google.
...But I do have a specific question this time as well.
This might be relevant for us, because we have thousands of pages, one for each larger pic on our site; the title and content of each page is unique, the description is not, and we need to find a way to make it so. But we don't have the resources to write thousands of unique descriptions AGAIN, each different from the title; this is a non-commercial site.
The question is...
If the page meta description is unique and relevant to the page... but it's the same as the page TITLE... that's probably spam, right?
But... if the page TITLE is completely unique across all pages... and we'd like to feature it in its entirety BUT with some additional information in the META DESC tag...
Is that a no-no as well? We've had enough trouble already, and the last thing I'd want when trying to reach SOME more people is to be banned by some algo for overdosing on relevancy :-P
I mean, are pages being moved to supplemental results / omitted results / out of the index because of such a thing?
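To illustrate what I mean, here's a rough mock-up of one of our picture pages (the file name and wording below are made up, not our real markup):

<head>
<title>Picture 1234 - Sunset over the lake</title>
<!-- the whole title again, plus a little extra information -->
<meta name="description" content="Picture 1234 - Sunset over the lake. Larger version of the photo, with location notes and camera settings.">
</head>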
There are the "exact duplicates", due to poor site design:
- same content at URLs with multiple different parameters (use meta robots noindex on all but one version),
... e.g. /shirts.php?size=16&colour=blue vs. /shirts.php?colour=blue&size=16
- same content at www and non-www (use a 301 redirect to fix; see the .htaccess sketch after this list)
- same content at multiple domains (use a 301 redirect to the canonical domain)
- same content at http and https (use a 301 redirect, or a 404 on https, to fix this)
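The www/non-www and https cases can usually be handled with a couple of rewrite rules. Here's a minimal sketch for Apache's mod_rewrite in a .htaccess file (example.com is just a placeholder; adjust for your own domain and test before relying on it):

# assumes Apache with mod_rewrite enabled; example.com is a placeholder
RewriteEngine On

# 301 non-www requests to the www hostname, so only one version gets indexed
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# 301 https requests back to http (only do this if you don't actually need https)
RewriteCond %{SERVER_PORT} ^443$
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]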
.
There are "pseudo-duplicates" caused by have the same title and/or same meta description on multiple pages.
These are easy to fix. Make sure that every title tag and every meta description is unique.
The indication that you have this problem is seeing the "click for omitted results" link early on in a site:domain.com search (e.g. "Results 1 to 4 of about 750 pages").
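In other words, every page's head section should end up distinct, along these lines (the page names and wording here are invented, just to show the pattern):

<!-- shirts-blue-16.html -->
<title>Blue shirts - size 16</title>
<meta name="description" content="Size 16 blue shirts: photos, fabric details and ordering information.">

<!-- shirts-blue-18.html -->
<title>Blue shirts - size 18</title>
<meta name="description" content="Size 18 blue shirts: photos, fabric details and ordering information.">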
.
There are "syndication duplicates", where the same news article is posted to dozens or hundreds of sites.
Since the navigation code, the page filename, the site structure, and the on-page code are all different, this is not a true duplicate, and Google will happily list multiple copies, though it does filter at least some of them out.
The exact filtering criteria are completely unknown.