| 11:14 pm on Oct 22, 2006 (gmt 0)|
Interesting to see how that develops...
| 9:21 am on Oct 25, 2006 (gmt 0)|
Any developments on your test? Are the rankings and PR still exactly where you described? Has the recent 20th October minor refresh/push had any effect?
Also, was this test done on a domain which was already doing well in Google with no apparent penalties etc?
The results of the test are definitely interesting - and in a way, the duplicate titles and description issue is making us all move a little more towards making sites for Google more than for users.
From my (journalist's) point of view, it shouldn't matter if the pages of a photo gallery have identical titles, or no titles or descriptions at all. Fine if they don't rank, as long as one of them gets indexed. However, if a few galleries like that result in a duplicate content penalty which can bring the entire site down, that sure affects me.
I am currently in the process of scanning a pretty large site for duplicate content issues, and have found many. One of the culprits could be our photo gallery pages, with minimal text and similar, useless, or missing titles and descriptions, as we never expected any users to land there directly from Google. However, we have now disallowed any indexing on those pages, except for a single page in every gallery, and that remaining page has returned to the SERPs.
Essentially, I am worrying about Google a bit more than I would have liked.
Coming to that, do we still have clear proof that internal duplicate content issues (titles, meta tags, and so on) would definitely bring down the entire site rather than just those pages?
| 10:11 am on Oct 25, 2006 (gmt 0)|
I created 8,000 pages with similar content; only 3 numbers changed on each page.
Similar meta keywords.
All 8,000 are indexed and are ranking well.
| 11:03 am on Oct 25, 2006 (gmt 0)|
I don't know whether to laugh or cry, Reilly!
| 11:17 am on Oct 25, 2006 (gmt 0)|
Reilly: yes, like the comment above says, cry, or get a gun and blow my brains out :)
Now, more seriously:
" including exactly 30 words of unique description in the meta d tags"
That is the key, I reckon. I know a site (I can give you the URL if you want) where all of the thousands of pages have the same title BUT the description tags are unique...
| 1:28 pm on Oct 25, 2006 (gmt 0)|
Very suggestive, because we've seen the inverse situation so often: 100s of unique words of body content getting filtered out as duplicate/supplemental if the meta descriptions are the same. And that situation gets fixed just by writing unique meta descriptions.
So, is there some kind of checksum on the meta tag as a first quickie look? Just for lower PR pages? Just a placeholder until a deeper look takes place further down the road? Meta description über alles?
Seems wrong somehow, when body content is essentially what the search user is looking for.
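To make the "checksum" speculation concrete, here is a minimal sketch of how such a first-pass filter might work. Everything here (the normalization, the hash, the grouping) is pure guesswork about what Google could be doing internally, not anything documented:

```python
import hashlib

def description_fingerprint(meta_description: str) -> str:
    # Cheap first pass: normalize case and whitespace, then hash.
    # (Hypothetical -- nobody outside Google knows the real mechanism.)
    normalized = " ".join(meta_description.lower().split())
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def flag_possible_duplicates(pages):
    # Group URLs whose meta descriptions share a fingerprint; only these
    # groups would need the expensive body-content comparison later.
    groups = {}
    for url, desc in pages.items():
        groups.setdefault(description_fingerprint(desc), []).append(url)
    return [sorted(urls) for urls in groups.values() if len(urls) > 1]

pages = {
    "/gallery/1": "Photo gallery of widgets",
    "/gallery/2": "photo gallery of   widgets",  # same after normalization
    "/about":     "About our widget company",
}
print(flag_possible_duplicates(pages))  # [['/gallery/1', '/gallery/2']]
```

The point of a checksum-style first pass is exactly the "quickie look" described above: it is orders of magnitude cheaper than comparing body content, so it makes sense as a gatekeeper before any deeper analysis.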
| 6:26 am on Oct 26, 2006 (gmt 0)|
Tedster, does the duplicate meta descriptions/ titles problem bring down entire sites or only those pages where the problem exists?
| 7:05 am on Oct 26, 2006 (gmt 0)|
The sites I worked on had universally duplicated meta descriptions and titles. There was a "totally supplemental" profile in the site: operator, except for the domain root, which was not supplemental, and they were getting the merest trickle of Google traffic. They rebounded to their previously healthy traffic levels within 10 days of fixing a couple hundred titles and meta descriptions.
High PR may make a difference -- these URLs included a few PR4 pages and on down.
| 11:30 am on Oct 26, 2006 (gmt 0)|
You know, I'm still not sure meta tags have this much power...
Okay, so this wasn't a "test" about duplicate content, but it involves a couple of thousand pages. Some new, some old, some high PR, some no PR, some trusted, some not yet trusted, but all with unique content and titles, and the same meta descriptions. And this is how it went...
I don't have many sites, only two: one that we launched recently and one that has been around for years, a hobby-level online magazine we run.
The new site was launched with thousands of pages that had unique content and unique titles, but the same meta description for all the pages of entire albums. No PR was assigned until about three months after launch, and there was definitely a lack of trust in the beginning.
The older site had the same... unique content, unique titles, same meta descriptions. Trust is pretty stable, and PR is distributed in a very meaningful way. The initial navigation is in Flash, but all articles are accessible from anchor text links on the home page, listed in a table-of-contents fashion. Hundreds of links on the home page. Furthermore, this site had an identical mirror site with a copy of all of the pages on a different domain. It was copied there every day, overnight, with a cron job.
Indexing the new site took weeks, and pages with duplicate meta tags wouldn't show up for site: operator searches, unless you clicked the omitted results link. Same goes for the old site.
But if you did click the link, pages were there.
Not supplemental, not dropped, just "very similar" thus not showing up if there was a higher level URL with the same description.
Once the meta description tags were edited to be unique as well, these pages started to show up in site: searches. All pages that got crawled got re-indexed instantly.
If a brand new site that appeared out of the blue with thousands of URLs, and a site that has hundreds of links on its homepage AND a mirror site, did not go supplemental just because of meta descriptions... I'm not sure you can conclude that this parameter is the one making other sites vanish.
This is just my experience.
We had no other problems on either site.
Content and titles were always totally unique.
( The mirror site is now redirected to the main site, and all canonical URLs are fixed just in case... but all this happened after having no problems at all, and after the October update. The mirror site's PR fell by just one point, from 4 to 3, I think. Its pages don't show up for searches anymore. The main site is still top 10 for everything it could be. )
What I can imagine is that same meta/title combinations send the pages over to another filter for further examination. What I can't believe is that they are responsible for pages going supplemental. Why would they? If you enter a meta description of, let's say, only 5 words, because you don't stuff it wall to wall with your keyterms, there's a very good chance others used the same sentence. How many trillions of pages (not sites, pages) are, or will be, out there with such "unique" descriptions? This parameter might be the reason for a closer look for duplicate content, but I'd be surprised if G saw it as a reason for a penalty, or a reason that makes pages go supplemental.
And I'm only saying this because we have no experience of this happening.
So I guess there's probably something else involved too...
...which i'd like to know about >:)
Mm. That's my reason for this post. I think.
| 7:45 pm on Oct 26, 2006 (gmt 0)|
Now here is something strange. There is a site where I am trying to fix the duplicate content issues. The site has approximately 3,000 pages. Google shows the count correctly in a site: search.
But if I go through them page by page (even after clicking the omitted results link), the pages end at no. 1000!
But the rest of the pages (almost 2,000) are indexed and ranked by Google! So why do they not appear at all?
This is something I can see with Yahoo too - if I try to see all the pages, they end at 1000, though Yahoo sends visitors to all pages on the site!
| 9:20 pm on Oct 26, 2006 (gmt 0)|
|i'm still not sure meta tags have this much power |
I'm not sure they do right now, either, as something may be changing in the way Google is handling this area. Exactly as you mentioned above, duplicate meta description URLs now seem to show more often as "omitted results" where they used to show as "Supplemental".
But I am sure that for some sites earlier this year, their troubles were just this simple. Matt Cutts even discussed an example in his blog a while ago.
| 12:34 am on Oct 27, 2006 (gmt 0)|
A friend with a small site had an entry in a local business directory. The entry had similar content to his own "contact us" page. His site sat at #1 and the directory was at #3 or #4.
Within days of the directory changing their meta tags for his entry to be the same as those found on the "contact us" page of his site, the directory page dropped out of the index.
This was at least a year ago. The "checksum of the meta tags" (or similar) is some sort of factor in this.
| 5:39 am on Oct 27, 2006 (gmt 0)|
|Any developments on your test? Are the rankings and PR still exactly where you described? Has the recent 20th October minor refresh/push had any effect? |
Also, was this test done on a domain which was already doing well in Google with no apparent penalties etc?
The test was done on a domain that was doing well in Google and ranking pretty much where it should be. A look tonight has shown no changes whatsoever in the caching of those test pages, PageRank, etc.
I am very interested in what Reilly has posted regarding his site pages.
Is it possible that PR, inbound deep links, and popularity also have an impact on the amount of leeway given to meta tag duplication?
| 5:40 am on Oct 27, 2006 (gmt 0)|
|Exactly as you mentioned above, duplicate meta description URLs now seem to show more often as "omitted results" where they use to show as "Supplemental" |
Effectively the same thing from a ranking standpoint, no?
| 9:22 am on Oct 27, 2006 (gmt 0)|
What if the meta description is not exactly the same, but almost the same - does that count toward this problem?
I mean, if I write 2 sentences in the meta description and include the main keyword of the page twice in it,
and then I do the same thing on the other pages, but the only thing that changes is the keyword for each page.
On the other pages I did:
Does that count as an identical meta description? My site is now in supplemental results too.
Any advice is much appreciated.
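For what it's worth, templated descriptions like that look almost identical under any word-overlap measure, whatever Google actually uses. A toy illustration (the template and the similarity measure are my own invention, purely to show the point):

```python
def jaccard(a: str, b: str) -> float:
    # Word-level overlap between two descriptions:
    # 0 = no shared words, 1 = identical word sets.
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

template = "Buy cheap {kw} online. Our {kw} store has the best {kw} prices."
d1 = template.format(kw="widgets")
d2 = template.format(kw="gadgets")

# Only one word differs, so the two descriptions are over 80% identical.
print(round(jaccard(d1, d2), 2))  # 0.82
```

So even though such descriptions are not byte-for-byte duplicates, a near-duplicate filter could plausibly still group them together.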
| 10:38 am on Oct 27, 2006 (gmt 0)|
>> Effectively the same thing from a ranking standpoint, no? <<
No. Not at all. They are completely different things.
| 11:33 am on Oct 27, 2006 (gmt 0)|
If a page slips into the "omitted results" area, that may affect its rankings compared to other pages with the same meta description, or within the same domain, but... not compared to other pages on other domains... so basically not compared to the rest of the net. Its effect, and its resolution, are not drastic enough to be hoping for anything that is :P
For they're in the index already, and will come up fine whenever they're the most relevant out of the many with which they share the meta. ( Higher directory and PR level pages come up first, then in order of relevance. )
On the other hand... supplemental results are like side "B" of the net... you just... don't find, and thus don't click them unless you're looking really hard for something that's not covered by others anymore. ( theoretically, right? )
Omitted is just lightyears better.
No, make that... there's nothing wrong with it.
I've tested this for some time... okay, not tested, but came across this phenomenon, and unique meta descriptions didn't affect rankings in terms of PR or trust or whatever, but... they do add to relevance.
And on-page relevance is still an important factor... although seemingly not nearly as important as others :P
( PR, trust, links, inbound anchor text relevance, relevance of referring pages, domain age, whatever... )
| 6:25 pm on Oct 27, 2006 (gmt 0)|
So you are saying that the omitted result pages will rank as well as the regular listed pages for a given search term?
| 6:27 pm on Oct 27, 2006 (gmt 0)|
Yes. Omitted results are omitted due to the way that Google uses clustering of results [threadwatch.org].
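A rough sketch of what that clustering might look like. This is only a guess at the behaviour described in this thread, not Google's actual algorithm: results arrive in ranked order, and only the first URL for each distinct meta description is shown, with the rest pushed behind the "omitted results" link:

```python
def cluster_results(ranked_results):
    # ranked_results: list of (url, meta_description) tuples in ranking order.
    shown, omitted, seen = [], [], set()
    for url, desc in ranked_results:
        key = " ".join(desc.lower().split())
        if key in seen:
            omitted.append(url)   # "very similar" page, hidden by default
        else:
            seen.add(key)
            shown.append(url)
    return shown, omitted

ranked = [
    ("/album/cover", "Holiday photo album"),
    ("/album/page2", "Holiday photo album"),
    ("/article/1", "Unique article description"),
]
print(cluster_results(ranked))
# (['/album/cover', '/article/1'], ['/album/page2'])
```

Under this model an omitted page isn't penalized at all; it simply loses the display slot to the highest-ranked page in its own cluster, which matches the "omitted is just lightyears better" observation above.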