Forum Moderators: Robert Charlton & goodroi
Now this is interesting.
A site with 40 000 "real" pages, some 80 000 duplicate-content pages already excluded using robots.txt (it's a vBulletin forum; see my prior posts), and still some 80 000 duplicate pages that are not yet excluded.
Additionally, some 500 000 non-thread pages are also excluded in robots.txt, and most of those have already been delisted. The whole site is listed as www; nothing is listed as non-www at all.
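For anyone setting up similar exclusions, the effect of a robots.txt file on specific forum URLs can be sanity-checked locally with Python's standard-library parser. The Disallow rules and URLs below are hypothetical vBulletin-style examples, not the poster's actual file:

```python
# Check which forum URLs a given robots.txt would block.
# The rules and URLs are made-up examples of typical vBulletin
# duplicate-content exclusions.
import urllib.robotparser

rules = """
User-agent: *
Disallow: /printthread.php
Disallow: /memberlist.php
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

for url in ("http://www.example.com/showthread.php?t=123",
            "http://www.example.com/printthread.php?t=123"):
    # can_fetch() returns True if the given user agent may crawl the URL
    print(url, rp.can_fetch("Googlebot", url))
```

Note that a page blocked in robots.txt can still linger in the index (often as a URL-only or Supplemental listing) until Google drops it, which may explain why delisting lags the exclusion.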
Looking purely at indexed threads:
site:domain.com shows 90 000 www pages, all as normal results, including some duplicate content that will eventually be excluded.
site:domain.com -inurl:www shows 24 000 www pages, all of which are marked as Supplemental Results and all of which have an old cache date. This search should show zero results; it certainly should not be showing www pages at all, since the query includes "-inurl:www". What is going on?
[edited by: tedster at 8:44 pm (utc) on June 13, 2006]
[edit reason] split into new thread [/edit]
They all look like the "turd" results to me. I can see the differences between the two, but it looks very similar to what happened to one of my sites for a while last year: it was stuck in limbo, with some DCs showing a slightly different set.
E.g. I think this is what people might be seeing between the "turd" and the "copra" sets, especially as they have been around for what, three or four weeks?
However, we must be getting close to a data refresh or a proper update soon...
I know, Sid. I just chimed in for some diversification, and I know it steps on a few toes.
Personally I don’t like any of the sets. I’m not around here too often, and I’m wondering: what happened to Matt’s DC (remember? 66.249.93.104)? As a user, I do prefer the results I’m seeing there. Or is it absolutely obsolete?
Or is it that Matt’s with the cats and the rats, googling -I mean, on vacation?
I mean, one DC has 15 million results one day, 70 million another day, and 150 million on yet another.
That's why I'm going up and down like a yo-yo.
Is Google testing and preparing to slash the number of results 'found', trying out lower counts and measuring clickthroughs, perceived satisfaction and relevance?
The lower numbers do seem relevant to me.
While doing some searching for the word which describes people who are followers of a major Western religion I noticed some unusual results.
When I search on 216.239.59.104, which is my current default google.co.uk DC, I see a straightforward SERPs page.
When I search on www.google.co.uk the page is split up: at the top is a link to a map, then there are 4 results as normal, then a thin blue line and three uncached results on a broader term, followed by another thin blue line and 3 normal results.
When I do the same search on www.google.com which is 216.239.59.104 I get the normal 10 listings I would expect to see on a normal SERP.
I've tried this using terms which describe the followers of other faiths, but it does not happen for those.
If you search for the term which describes the office held by George Bush on www.google.co.uk you will see what I mean, but perhaps only if you are in the UK.
Sid
I see what you mean. I haven't seen that before; maybe they use it for very heavy keywords, but I have tried other searches and can't find another instance of it.
On another topic: my default Google search has changed to DC 66.249.93.104, and it's showing the "turd" results. Why can't they just choose one set and display that to everyone? This is a nightmare.
Pete
Do Google partners have their own special DCs? Are they usually first or last when a change happens, or are they always different?
Thanks for any insight.
The copra DCs (who came up with that name?) seem to have Amazon book pages showing up in many more top-20 results than ever before.
Thus, if you search for widget, somewhere within the top 20-30 results will be a book from Amazon that has widget in the title.
Does anyone else see/notice/feel this?
Google seems confident that the authority pages should stay at #1, #2 and #3, and that the Wikipedia pages belong at #5 and #6. It looks like Google is now testing randomly which other sites should fill the remaining positions.
[nytimes.com...]
Maybe when this comes online some of the current problems might be solved?
This is absolutely confusing. As an authority site that is vanishing on the DCs below, we're naturally concerned.
64.233.161.107
64.233.161.147
64.233.161.99
64.233.161.104
216.239.39.104
216.239.39.107
216.239.39.99
Do you still rank for other phrases? The site in question for me is still #1 for the plural form of the phrase on the DCs, but just disappears for the singular form -- are the DCs the same ones you disappear for, or are you seeing something else?
I'm having a hard time getting to what I consider the 'bad' results; I have to snake through McDar's tool just to see them.
On the sector I watch here’s what I see on the first page:
- A site selling products – only products
- A list with “Click a letter below” and a bunch of links ONLY – NO content whatsoever!
- A .gov page
- A site with not much content either, whose links pages point (among others) to casino sites, etc. (TOTALLY irrelevant, that is)
- A dir.yahoo.com page
- Amazon
- Wikipedia
The site which formerly ranked #1 (purely informational) is now on the 6th page. We're also on page 6.
Congratulations, Google. Now go build the two new computing centers in The Dalles, Ore., and bring back the results showing on Matt’s DC 66.249.93.104. It seems you can’t do both at the same time (build computing centers and deliver search results).