Now this is interesting.
A site with 40,000 "real" pages, some 80,000 duplicate-content pages excluded using robots.txt (it's a forum - see my prior posts about vBulletin), and still some 80,000 duplicate pages that are not yet so excluded.
Additionally, some 500,000 non-thread pages are also excluded in robots.txt, and most of those are already delisted. The whole site is listed as www; nothing is listed as non-www at all.
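For anyone who wants to double-check which vBulletin URLs their robots.txt actually blocks before trusting the counts, here is a minimal sketch using Python's standard urllib.robotparser. The domain and the sample duplicate-URL patterns (print views, highlight variants) are placeholders, not my actual setup:

from urllib.robotparser import RobotFileParser

# Placeholder domain -- substitute your own forum's robots.txt URL.
parser = RobotFileParser("http://www.example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# Sample vBulletin URLs: a real thread plus two common duplicate forms.
test_urls = [
    "http://www.example.com/forum/showthread.php?t=12345",
    "http://www.example.com/forum/printthread.php?t=12345",
    "http://www.example.com/forum/showthread.php?t=12345&highlight=x",
]

for url in test_urls:
    status = "allowed" if parser.can_fetch("Googlebot", url) else "BLOCKED"
    print(status, url)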
Looking purely at indexed threads:
site:domain.com shows 90,000 www pages, all as normal results, including some duplicate content that will eventually be excluded.
site:domain.com -inurl:www shows 24,000 www pages, all of which are marked as Supplemental Results and all of which also have an old cache date. This search should show zero results; it certainly should not be showing www pages at all, since the search was for "-inurl:www". What is going on?
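If you want to track those two counts over time instead of eyeballing them, a rough Python sketch like the one below would do it. The "of about" regex and the URL parameters are guesses at Google's current SERP markup, which changes without notice, so treat this as a starting point only:

import re
import urllib.parse
import requests

QUERIES = ["site:domain.com", "site:domain.com -inurl:www"]

for q in QUERIES:
    url = "http://www.google.com/search?" + urllib.parse.urlencode({"q": q})
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10).text
    # Assumed result-count wording; adjust the pattern if the markup differs.
    match = re.search(r"of about <b>([\d,]+)</b>", html)
    print(q, "->", match.group(1) if match else "count not found")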
Are others seeing this specific pattern?
Are there other indications that Google is reducing their emphasis on "longevity" or "aging" of links?
Are there other indications that Google is increasing the visibility of informational sites at the expense of ecommerce sites?
I guess it depends on the keywords being searched for.
Ecommerce sites will inevitably show up if the keywords are oriented towards them (such as buy, sell, shop, etc.). I haven't seen any priority given to informational sites generally, unless you search for a keyword such as 'Statue of Liberty'.
Earlier in the thread, I noticed this comment about information vs. ecommerce sites:
"...One of my observations in my areas is the increased ranking of wikipedia pages on the updated DCs. ..It seems that the new knob settings favor informational sites above commercial ..."
Hence, the hypothesis. Of course, there could be other factors at play as well. For instance, wikipedia has an extremely intensive and diverse pattern of internal links, so the observed boost to the visibility of wikipedia documents could be due to changes in the way Google is evaluating internal links.
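To make that concrete: internal links form a graph, and a PageRank-style calculation over that graph shows how dense, diverse internal linking lifts even deep pages. The toy graph and damping factor below are purely illustrative; nothing here is a confirmed Google formula:

# Toy PageRank over a hypothetical internal-link graph.
links = {
    "home":      ["topic_a", "topic_b"],
    "topic_a":   ["home", "topic_b", "deep_page"],
    "topic_b":   ["home", "topic_a", "deep_page"],
    "deep_page": ["topic_a", "topic_b"],
}

damping = 0.85  # the commonly cited damping factor
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # power iteration until scores settle
    rank = {
        page: (1 - damping) / len(links) + damping * sum(
            rank[src] / len(outs) for src, outs in links.items() if page in outs
        )
        for page in links
    }

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 4))

Note how deep_page ends up with a healthy score purely from internal links, which is the kind of effect being hypothesized for wikipedia.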
I have an informational site about an individual, "John Smith".
Site is almost 2 months old (no PR, of course).
I have been watching 69 servers for about 4 days now.
Doing a search for "John Smith" (without the quotes), the results seemed to be divided into two groups.
Group 1 - was anywhere between #91 and #95
Group 2 - was anywhere between #37 and #53
As of today, a few servers have flipped from one group to the other, but here's what I see:
Group 1 (33 servers) -
All but one datacenter (5 servers) have settled in at #100; the errant 5 servers are at #95
Group 2 (36 servers) -
17 servers at #40
19 servers at #47
I'm not smart enough to draw any conclusions. I'm just offering this FWIW.
Any suggestions or insights?
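For anyone wanting to replicate this kind of datacenter watch, here's the rough shape of a script for it. The datacenter IPs, the parsing regex, and the request parameters are all assumptions you'd have to tune yourself; the parse in particular is deliberately crude and will miscount anything fancy in the markup:

import re
from collections import Counter
import requests

# Placeholder datacenter IPs -- substitute the list you are watching.
DATACENTERS = ["64.233.161.104", "66.102.7.104", "216.239.39.104"]
QUERY = "John Smith"
MY_DOMAIN = "johnsmith-example.com"  # placeholder for the tracked site

def rank_on(ip):
    """Return the position of MY_DOMAIN in the top 100 results, or None."""
    html = requests.get(
        f"http://{ip}/search",
        params={"q": QUERY, "num": 100},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    ).text
    # Crude parse: take outbound hrefs in page order as the ranking.
    urls = re.findall(r'href="(http://[^"]+)"', html)
    for position, url in enumerate(urls, start=1):
        if MY_DOMAIN in url:
            return position
    return None

groups = Counter(rank_on(ip) for ip in DATACENTERS)
for position, count in groups.most_common():
    print(f"#{position}: {count} datacenter(s)")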
Also, it looks like the NEW SERPs have retreated somewhat.
They were refreshing old Supplemental Results (older than June 2005, that is), among several other things.
Aha! Give that man a prize!
Ok, so what's it all mean?
I'm still of the mind that G had to dump or lost (for lack of a better term) its data at some point.
Should we be expecting yet another "refresh/update" soon?