A lot of members are seeing huge sites going supplemental. One of our main sites lost all rankings: 200,000+ pages disappeared, and now we are left with 19k useless results. This could be a goof, or it could be a new round of penalties. If you have had your site reduced to the supplemental index, let's hear about it and compare notes.
One site went down to 36 pages indexed (the site really has ~400 pages). Traffic is down about 40% for the day (a 60% drop since the beginning of the year, but some of that is expected and normal).
All supplemental (except the home page, which still ranks as it always has). Some of the pages Google shows in the site: command have been deleted for over a year.
I also see URLs showing up that were only used for PPC campaigns in 2002 and 2003. Where is Google pulling these URLs from? It makes little or no sense.
"site:" returns 3,77,000 results.
Search Results returned: 14,700,000
"link:" returns 3,340 results.
"inurl:" returns 3,72,000 results.
"site:" returns 2,88,000 results.
Search Results returned: 51,800,000
"link:" returns 3,340 results.
"inurl:" returns 41,100 results.
These drastic differences don't let us make any proper analysis of what is going on.
It seems as if a majority of the Google servers have been hit by a virus, leaving all of us perplexed.
But then again, what argues against all this is that we have not lost significant traffic as of yet. I'm not sure how long that will continue, though.
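For what it's worth, you can script these operator checks so the same queries run against the same hosts every time. Below is a rough Python sketch under stated assumptions: example.com and the host list are placeholders (the IP stands in for a datacenter address), the "of about" regex is a guess at Google's current result-count markup and may need adjusting, and Google may throttle or block automated queries.

import re
import urllib.parse
import urllib.request

DOMAIN = "example.com"                       # placeholder domain
HOSTS = ["www.google.com", "66.102.7.104"]   # second entry: stand-in DC IP
OPERATORS = ("site:", "link:", "inurl:")

# Guess at the results-count markup; adjust if Google's HTML differs.
COUNT_RE = re.compile(r"of about <b>([\d,]+)</b>")

def result_count(host, query):
    url = "http://%s/search?q=%s" % (host, urllib.parse.quote(query))
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req, timeout=10).read().decode("latin-1", "replace")
    match = COUNT_RE.search(html)
    return match.group(1) if match else "n/a"

for host in HOSTS:
    print(host)
    for op in OPERATORS:
        print("  %s%s -> %s" % (op, DOMAIN, result_count(host, op + DOMAIN)))

Running it before and after each check at least removes the guesswork about whether the numbers were gathered the same way.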
They also mentioned that many competitors in their niche were tanking, which I thought sounded odd. Anyone else seeing something similar?
2. It seems my old site of 300 pages went supplemental, all except the home page. However, there are still about 30 pages without a supplemental label, located at around #130 to #160. This is not new; I have noticed them since the day I first heard about BD.
3. I don't see any drop in referrals, since most of the supplemental pages have been deleted, 301-redirected, or left un-updated for a long time.
On shared hosting.
Not hit by supplementals, but an entire DMOZ-listed directory is simply -- gone --. Nowhere.
The Mozilla Googlebot does crawl the stuff, though. Daily.
Coz, I noticed BillyS's post....
2,470 from Googlebot
3,010 from Mozilla Googlebot
If that is true, then I have to check the difference for my site too, because the Mozilla Googlebot doesn't crawl my site.
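A minimal sketch for tallying the two crawlers in an Apache-style access log. The log path is an assumption for your host; the user-agent substrings match how the two bots identify themselves ("Mozilla/5.0 (compatible; Googlebot/2.1; ...)" for the new crawler versus "Googlebot/2.1 (+http://www.google.com/bot.html)" for the classic one).

from collections import Counter

LOG = "/var/log/apache2/access.log"  # assumed path; adjust to your server

counts = Counter()
with open(LOG) as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        # New crawler: "Mozilla/5.0 (compatible; Googlebot/2.1; ...)"
        # Classic crawler: "Googlebot/2.1 (+http://www.google.com/bot.html)"
        if "Mozilla/5.0 (compatible; Googlebot" in line:
            counts["Mozilla Googlebot"] += 1
        else:
            counts["classic Googlebot"] += 1

for agent, n in counts.most_common():
    print(agent, n)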
The decline in traffic started, barely noticeable, on Feb 24 and is slowly but surely getting worse.
It's scary :-(
BTW, does Google define PhD as "post hole digger"?
They certainly can't mean a college degree.
I remember that during the Dominic update there was a really horrible DC (with old data, btw), and GG said there was no point concentrating on that DC; it relieved a lot of stress among some members.
It tends to be the sites with homepage canonical issues that have really been hit hard (yet again - thanks, Google).
I notice that a competitor to WebmasterWorld - Seotalk, or something ending in "talk" (a dirty word here?) - has gone supplemental in BigDaddy; it has been noticed on the other forum as well.
Sites that don't have their homepage first in a site:www.domain.com check seem to be the ones getting hit - to me this has always looked like Google struggling to find the root of the site. And even when the homepage is not first in a site: check, it is interesting that in a lot of cases it is the only page left in the index (excluding supplementals).
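For anyone wanting to rule out the redirect side of this canonical issue, here is a quick Python sketch that requests both roots without following redirects and prints the status code and Location header. example.com is a placeholder; on a healthy setup the non-canonical root should answer with a 301 pointing at the canonical one.

import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # don't follow; let the 301/302 surface as an HTTPError

opener = urllib.request.build_opener(NoRedirect)

def check(url):
    try:
        resp = opener.open(url, timeout=10)
        print(url, "->", resp.status, "(served directly)")
    except urllib.error.HTTPError as e:
        print(url, "->", e.code, "Location:", e.headers.get("Location"))

check("http://example.com/")      # ideally a 301 to the www root...
check("http://www.example.com/")  # ...which should answer 200 itself

If both roots answer 200 with the same content, Google can see two copies of the site, which is exactly the canonical confusion described above.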
When using Google's define: command, my site is used to define most of the terms in my industry (meaning they found my site useful and informative). That page is lost from the index too, as are all the other pages besides the home page. How can I define terms for Google and still get a penalty? I hope it's just a bug!
Does anybody here know how they determine which site will be used to define terms?
The main difference I have found is the cache date. On the BD-related datacenters, the sites that rank and have no significant supplemental issues all had cache dates of Feb 2006. For my site and a couple of others with supplemental problems (only the homepage ranking), the homepage cache date was Feb 2006, but the rest of the pages, all supplemental, showed cache dates of Jun-Aug 2005.
On my default DC I am fine; my cache dates range from Feb 14 to March 1, 2006.
Pages indexed should be about 3,000-4,000.
The default DC shows 9,340, down from 10,600 since about October or so; not sure when the inflated number first showed up.
Draw your own conclusions. I would love to hear what your conclusions are.
All but 2 pages still in supplemental.
edit: most of the cache dates are from July/August 2005
Moz Gbot still crawling about 40k pages a day.
I know what the Maytag repairman feels like. It was a pretty boring day - no orders, no customer service requests, no complaints and of course no $$.
Welp, I am in this club once again. It can't be that hard to understand our root, for lord's sake. Everything is 301'd - how the hell hard is that to get!?
This is quite similar to the hit we took last year. Pages went supplemental. Old cache dates from months before appeared. Old deleted pages came back into the index. Pages started to disappear for no reason. Traffic tanked. PageRank went down the tubes. We finally got pages back into the index, and now they go and do something like this again... BAH.
I do hope they are working on something big this time and we can all come back to our former glory.
Yes, but the point I am making is that Google does now know what the root is - it is the only page that is not supplemental!
Soooooo - either this is a penalty/problem, or hopefully Google will now realize that this is the root of the site and recrawl correctly from there.
Site 301'd since 2002 with no problems.
>> for people who have supps, are these pages dynamically generated or hard coded?
ALL my pages that are gone have some form of PHP in them, e.g.:
- the VB forum
- header and footer includes
My homepage, which is just hard-coded, is OK.
However, my site is still all supplemental on BD apart from the homepage. This is killing me.
For us this is also true, and that is a good sign. But good lord, it took them more than a year to figure that out! I just hope that this time it is the fix.
If they do know that it is the correct version, then why is it still not in the first position, and why are others here reporting that their site went supplemental even when their root is first?
Every time a new DC is updated to BigDaddy, my stats show an increase of about 10% in visitors.
It seems that Sitemaps are helping me survive in BigDaddy. On the old DCs, Sitemaps don't seem to help at all...
That may be a good sign. Is there a way of seeing which DC the robot is crawling for? My site does well on non-BD DCs, so if I could tell which DC the robot came from, it would confirm a glimmer of hope.
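As far as I know you can't map a crawl visit to a particular serving DC - the crawler's IP only tells you where the fetch came from, not which index it feeds. Two things you can do: query each DC's IP directly for your site: counts (as in the earlier sketch), and verify that a visitor really is Googlebot via reverse DNS, since genuine Googlebot IPs reverse-resolve to *.googlebot.com hostnames and forward-resolve back to the same address. A sketch of the reverse-DNS check; the example IP is just an illustration, so substitute addresses from your own log.

import socket

def verify_crawler(ip):
    # Reverse-resolve the IP, then forward-resolve the hostname to
    # confirm it matches: genuine Googlebot addresses come back as
    # crawl-*.googlebot.com and resolve back to the same IP.
    try:
        host = socket.gethostbyaddr(ip)[0]
        forward = socket.gethostbyname(host)
    except (socket.herror, socket.gaierror):
        return ip, None, False
    return ip, host, host.endswith(".googlebot.com") and forward == ip

# Example address; replace with IPs taken from your access log.
print(verify_crawler("66.249.66.1"))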
Supplemental results are usually attached to an older version of a page. A page may appear as a normal result with a modern snippet when you search for words that are currently on the page.
If you search for words that used to be on the page, but are no longer in the current version, then the page can appear as a supplemental result. The cache shown will be modern, but the snippet will show the OLD content.
If the page URL no longer exists on the web (page gone, or whole site gone), then there will be only the supplemental result, and Google holds on to those for at least two years.
If the page still exists then Google will very often have both a normal result and a supplemental result for that page: and different keyword searches will bring either one or the other into view.