A lot of members are seeing huge sites going supplemental. One of our main sites lost all rankings and 200,000+ pages disappeared, and now we are left with 19k useless results. This could be a goof or it could be a new round of penalties. If you have had your site reduced to the supplemental index, let's hear about it and compare notes.
Could it be that when a 301 redirect is set up on a domain from non-www to www, it takes a few months for the G index to recognize the change, and in the meantime you go supplemental?
Just a guess...but I am trying to find something in common with everyone in this "club" so it can be resolved.
"Based on our various research efforts, we believe that most of Google's near-term server purchases will use AMD's Opteron for the first time," the analyst wrote.
A source also in the financial analyst community said, "I heard (about the switch) a long time ago. Word is finally leaking out. I heard that Google was in the process of switching to AMD, while Google was on stage at the last Intel Developer Forum."
For some reason the fine members of WebmasterWorld have had enormous difficulty over the years grasping the simple point that the question is not why Google would go to 64-bit Opteron servers, but why they wouldn't.
Big Daddy, as Matt Cutts said, is about new infrastructure, and this is what the new infrastructure is. The only things that surprise me are:
1. Why did it take Google so long to switch over?
2. How did they keep it out of the news enough so that only server types in the know knew?
Obviously word must have leaked out; what astounds me is that Google kept this from leaking out here the way you would think it would have. I have to give Google's little group of undercover posters some credit: they successfully redirected threads about this topic more than once.
So think 64-bit systems, just like it was always said would happen, probably with 40-bit indexes, just like it was said would happen. New infrastructure in this case means exactly what it says.
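To put rough numbers on the 40-bit guess (my own back-of-the-envelope arithmetic, nothing Google has confirmed): a 32-bit docID tops out just above 4 billion documents, which an index advertising 8 billion pages blows straight through, while 40 bits buys roughly 256 times the headroom for one extra byte per ID.

# My own ID-space arithmetic, nothing official.
ids_32 = 2 ** 32
ids_40 = 2 ** 40
print(f"32-bit docIDs: {ids_32:,}")            # 4,294,967,296
print(f"40-bit docIDs: {ids_40:,}")            # 1,099,511,627,776
print(f"headroom factor: {ids_40 // ids_32}x") # 256x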
Re forum owners: it's hurting forum owners who use that system, which I've always disliked intensely. Optimized forums don't need doubled data to make search engines happy.
I was checking my server logs for the month and noticed that both the new Mozilla Googlebot and the older Googlebot are downloading my robots.txt file often, but the perplexing thing is that on some occasions when the Mozilla bot downloaded the file it returned a 301 status code, while all the others were 200.
Why would it return a 301 code when the file is not redirected and never has been, while at other times it returns a 200?
Is this something that I need to be concerned about?
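One way to rule your own server in or out is to script a check of what each hostname actually answers for robots.txt. A minimal sketch in Python; example.com is a placeholder for your own domain, and http.client deliberately does not follow redirects, so a 301 shows up as a 301 instead of being silently chased.

# Print the raw status (and any Location header) each hostname returns for /robots.txt.
import http.client

for host in ("example.com", "www.example.com"):   # placeholders: use your own hostnames
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("HEAD", "/robots.txt")
    resp = conn.getresponse()
    print(host, resp.status, resp.getheader("Location", ""))
    conn.close()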
sesnyc06 [at] gmail.com has mail. :)
I've gone for some examples in the SEO industry and some of the big forums that have been discussed elsewhere.
Talking about previous behaviour: for me, the Mozilla Googlebot has never been great at adding pages to the index, even when it appears to have been working correctly, so perhaps this is related? Something to do with the crawl-to-indexing behaviour?
4 of the requests were by Googlebot 1.0, with a code of 200.
7 were by the Mozilla bot 1.1, with 6 giving a code 200 and one giving a 301. One thing I did notice with the 301: there isn't any date and time in that server log entry, if that makes any difference.
All of the sites we set 301 redirects on (after years of being online w/o the 301s) only have their homepages listed. All other pages are not even listed as supplemental. What is up with that? What sign of things to come is this?
Same thing with me: all of the 301'd pages are missing, with the exception of the homepage.
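For anyone in the same boat who wants to rule out a botched redirect on their end, a hop-by-hop check of a few deep URLs is easy to script. A rough sketch in Python; example.com and the paths are placeholders for your own site, and a clean setup should show exactly one 301 pointing at the matching www URL.

# Follow the redirect chain for a few non-www URLs and print each hop (plain http only).
import http.client
from urllib.parse import urlsplit, urljoin

def hops(url, limit=5):
    chain = []
    for _ in range(limit):
        parts = urlsplit(url)
        conn = http.client.HTTPConnection(parts.netloc, timeout=10)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        location = resp.getheader("Location", "")
        chain.append((resp.status, url))
        conn.close()
        if resp.status not in (301, 302) or not location:
            break
        url = urljoin(url, location)   # handles absolute and relative Location headers
    return chain

for path in ("/", "/some-inner-page.html"):               # placeholder paths
    for status, url in hops("http://example.com" + path):  # placeholder domain
        print(status, url)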
Google's counts for total number of hits have been totally bizarre for nearly two years now. They are not just inaccurate, but sometimes they are inaccurate by an order of magnitude. You cannot believe any number over 1,000 reported by Google, because anything higher than that is not verifiable.
In November 2004, Google increased their total count on their home page from 4 billion to 8 billion overnight. All the Google lovers out there bought it hook, line, and sinker. It's the ad money that distorts their perceptions.
I'm not eager to get flamed for the zillionth time on the 4-byte ID problem, but if GoogleGuy wants to deny it once again, for the record, that would be fine with me. He may also want to explain the last three years of weirdness for Google's generic results.
Let me say that there was some speculation a year or two ago about Google moving to 64-bit computing. This made a lot of sense, because as I've tried to explain many times, moving from a 4-byte (32-bit) docID to a larger docID is not a trivial matter. Not only would you have to rewrite a huge amount of software (the docID, which is unique for every page on the web, is ubiquitous throughout Google's entire system), but you would take a performance hit because it requires extra processing cycles to expand beyond 32-bit numbers if your processor can only chew 32 bits at a time.
It would make a lot of sense, if you are a Google engineer figuring out what to do back in 2003, to stall on the docID problem until you can migrate to 64-bit processors. For one thing, Google got a lot richer and 64-bit processors got a lot cheaper at the same time. For another, there's a new trend toward more processing power per watt, and Google's huge electric bills are a source of concern to them.
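To put rough, made-up numbers on why that rewrite is painful (my own illustrative arithmetic, not Google's figures): the docID doesn't live in just one table, it sits in every posting of the inverted index, so each extra byte per ID gets multiplied by trillions of posting entries.

# Illustrative arithmetic only; the postings-per-page figure is an assumption.
pages = 8_000_000_000          # roughly the page count Google was advertising
postings_per_page = 1_000      # assumed average indexed term occurrences per page
postings = pages * postings_per_page

for bits in (32, 40, 64):
    bytes_per_id = bits // 8
    total_tb = postings * bytes_per_id / 1e12
    print(f"{bits}-bit docIDs: about {total_tb:,.0f} TB just for the IDs")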
As of less than a year ago, Google was still on a 32-bit system, according to this quotation from CNET News.com by Stefanie Olsen, on April 21, 2005:
Google executives also were asked about innovating in server architecture in the future, given that one of the company's biggest rivals, Microsoft, is developing search tools on a 64-bit architecture. Google currently runs its search service on a 32-bit architecture. Search experts say that platform may allow for advancements such as better personalization. Google co-founder Sergey Brin downplayed the importance of the underlying architecture. 'I do not expect that the particular choice of server architecture is going to be a deciding factor in the success of our service,' he said.
Okay, go ahead and flame me. I can take it because I'm used to it.