With reference to:
126.96.36.199 Copra Variant
188.8.131.52 Original Turd
184.108.40.206 Turd Variant
I get the same results on the first three.
Different on Turd variant
Do you think that Copra will propagate to all Google datacenters?
[edited by: tedster at 9:36 pm (utc) on June 21, 2006]
In the other thread is a list of 600 Google DataCentre IP addresses [webmasterworld.com]. It is both impossible and completely unnecessary to check them all every time. All that is needed is a small, but representative, sample.
Let's clear just one thing up first.
If you pick any random class-C block, like 66.102.7.x or 72.14.207.x (there is a full list of blocks on Page 2 of that other thread), can we at least confirm that the results are exactly the same for every IP on that class-C block that returns results (where x could be 17, 18, 19, 44, 80, 81, 83, 84, 91, 93, 95, (98), 99, 100, 101, 102, 103, 104, (105), (106), 107, 115, 133, 147, 184, 189, or 214 - but not all are active for every class-C block all the time)?
If that is the case then you only need to look at one IP per class-C block, and to see a result from every block you only need to look at 41 class-C blocks in total.
That would be less work than most people already do now.
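The sampling idea above can be sketched in a few lines. This is a minimal illustration, not a checking tool: the block prefixes and last octets below are a small subset of the ones listed in this thread, and it assumes (as the post proposes) that every active IP within a class-C block serves identical results, so any single IP stands in for the whole block.

```python
def representative_ips(blocks, candidate_octets):
    """Return one IP per class-C block (the first candidate last octet).

    Assumes every active IP within a block serves identical results,
    so any one of them is representative of the whole block.
    """
    return [f"{block}.{candidate_octets[0]}" for block in blocks]

# Two of the 41 blocks mentioned above, and a subset of the last octets.
blocks = ["66.102.7", "72.14.207"]
octets = [17, 18, 19, 44, 80, 81]

for ip in representative_ips(blocks, octets):
    print(ip)   # 66.102.7.17, then 72.14.207.17
```

With 41 blocks and one IP each, that is 41 checks instead of 600.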
I'm seeing that too. I'm pretty much where I should be for the first time since April.
I just checked, it's not old data either. I see some pages that I added as recently as June 16th.
let's cross our fingers & hope that this data propagates across the rest of Google's DC's now.
A couple of things I noticed on the DC gcc_llc found.
1 - I'm currently showing around 850 pages on Google, on this center I was showing 9,900.
2 - The 9,900 bug comes in whenever my site jumps over 1,000 pages. I used -keyword site:sitename.tld along with keyword site:sitename.tld to work out that I was actually showing around 1,100 pages.
3 - I found recently added pages (I checked around five) that were added only last week.
4 - I used the site:www.sitename.tld -inurl:www command and I still saw a lot of supplementals with corrupt titles (about 225).
So while the page count was up in this index, it still had problems.
I'm very disappointed with Copra so far, and not just because I fell back 3 slots in the SERPs.
I'm not really hurt that bad. I know this is asking a lot, but try to step outside yourselves.
Maybe your site benefitted from Copra (more power to you if you did).
But does the internet as a whole benefit? If a niche is devoted to a special island,
to a rare plant or animal, to flying saucers or exotic cuisines,
do we really benefit if the SERPs for these are now full of rock stars, motorcycle parts and the like?
For my niche at least, all kinds of poor and off-topic pages are displacing the authority sites,
not just mine.
Frankly, Copra looks more and more like MSN or Yahoo, less and less like Google.
I'm not saying G has gotten THAT bad, but the trend worries me.
It's as if they strongly derated DMOZ recommendations and their trademark PageRank factor in the main algorithm.
Personally, and leaving my own site aside for the moment, I think this could be a serious blunder.
Think "New Coca-Cola".
If OTOH this is simply an experiment, it's well timed with the onset of Summer Doldrums.
This would explain the "calibrated dance", a steadily sliding scale
i.e. dumbed-down results appearing on more and more DCs until they are almost all Copra,
and the changeover, now nearly complete, done at a constant rate.
Sounds like something dreamt up by a mathematician, one specializing in statistics. -Larry
PS: I'm on my second Beck's Bier. Good stuff. St. Pauli Girl and Grolsch are chilling in the fridge.
Life is good. Copra only crapped on my site a little bit.
"Its as if they strongly derated DMOZ recommendations"
At last, Google doing something positive! Google can't derate DMOZ recommendations enough as far as I'm concerned. With so much corruption at DMOZ, why should Google give any more weight to a DMOZ recommendation than they would to, say, a recommendation of a site in a blog?
If Google have a perfect algo, the last thing they want to do is introduce any form of human intervention from third-party sites that will corrupt the natural SERPs.
Whilst Google's results are not perfect, they are still ahead of the rest by a mile.
I can categorically state that UK results over the past 7 days have not been entirely based on 'previous data'. The reason for this is the re-launch of a site (which was receiving very little traffic) that has had consistently good traffic for the past 7 days, with the traffic starting just a week after the change (12th June).
This suggests to me that no matter what is happening to previously indexed pages, new pages are making it into (and staying - albeit only for a short period so far) the index currently.
Are there theories on reverting to old data sets that take this into account? I know that it's possible that our traffic will disappear shortly, maybe as a result of leaving any 'fresh' result set (if it exists).
I'm just curious as to how new results fit into the whole update process.
Those 3 DC sets I mentioned in my previous post aren't giving special love to Reseller :-)
But I have mentioned them because of the search quality, I could see some fresh cache on them and they are showing the real title of my site and not that of DMOZ.
In general, affiliate program marketing friendly DCs usually are Reseller friendly DCs too ;-)
For one reason or the other, since Matt Cutts went on his long vacation, it seems the remaining folks at Google Search Quality Team don't like affiliate program marketing that much.
Ok. Matt shall be back in office again around 3rd July, and I hope he will see to bringing back those affiliate program marketing friendly DCs :-)
hmm might want to count on him being back around July 7th... We here in the New Colonies have a big holiday around that time ;)
Turd DCs, in my niche, return SERPs that are missing some important sites that were listed pre Big Daddy. I can see absolutely no reason for these sites being dropped. Also, Turd seems to give a bit of a lift to pages with many purchased back links from just a few sites; this is very close to officially endorsing spam. There are also more directories and screen scrapers.
Copra looks much more like the pre Big Daddy SERPs; all of the major players that you would expect to find in the top 20 are listed. There are fewer screen-scraper and script-generated pages with no original content. Copra is not perfect, but in my niche it is significantly better than Turd from a purely objective point of view.
I can live with what I see, but the first one gives a 404
What struck me as odd is that on all these DCs, while the first page or two of results seem to be rock-solid in terms of consistency, when I do a search for my two word term, my site comes in at #41 and if I do an "advanced search" and request 100 results per page, I come in at #48.
Shouldn't the results be the same?
For me, AS A SEARCHER, it is important that I find widget sites on the search for widgets and NOT sites selling books on widgets (otherwise I would simply search for books on widgets).
Nor am I searching for the definition of widgets; otherwise I would put the words "widget definition" in the search bar.
I am CERTAINLY NOT interested in getting Yahoo or DMOZ in the search results. I've already chosen Google, why THE HECK do I need to be "re-directed" to another search engine :(
Last but not least, I am NOT asking G to lecture me on how bad widgets are for my health. If I want that, I can search for "widgets for my health". So two sites displayed on Copra (huge edu or gov deep inner pages) should be OUT of the first page.
Yes, this is what I am seeing on copra first page - true KOPRA. Amazon, Dmoz, Wikipedia, Yahoo, a gov site and an edu site.
Worse, Amazon is serving TWO pages, as is the edu site - there's hardly any room left for anybody else on the first page.
Oh by the way, did I tell you?
I AM INTERESTED IN WIDGETS! ONLY WIDGETS! PURE GOOD WIDGETS, THAT'S ALL I WANT...