
My site was the first of its kind; now it has vanished from Google



6:35 am on Sep 20, 2005 (gmt 0)

10+ Year Member

For the past year I have periodically experienced being completely dropped from Google. My site was the FIRST of its kind and held the first spot in all the natural search results. I'm just a small business, but since September of 2004 I have been vanishing from Google every 6 weeks or so; recently it has been more often and for longer periods. Does Google discriminate against older sites? Are they doing it so that we will advertise with them? Any help, advice, or comment for a desperate single mother of 4!

bluegill catcher

4:23 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

Yahoo dropped also. Has anyone noticed a HUGE drop in Yahoo traffic as of yesterday? I am now getting 85% of my traffic just from MSN; Yahoo has plummeted, and I was told by someone else, just now, that Yahoo started a major update yesterday! It may be as bad as Google.


5:03 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

In my case, I think I have found the source of my problem.

After Allegra I used robots.txt and the URL removal console to remove duplicate content. This was in March. After that I continuously had a robots.txt with:

User-agent: *
Disallow: /dup1.php

Google states that the content removed by the console will stay removed for six months.

My site came back with Bourbon in May. After that I made a mistake: I added two lines

User-agent: Googlebot
Disallow: /someotherpage.html

These two lines were a time bomb.

As far as I understand it now, an entry for "User-agent: Googlebot" stops Googlebot from reading the lines below "User-agent: *".

Google states: "When deciding which pages to crawl on a particular host, Googlebot will obey the first record in the robots.txt file with a User-agent starting with "Googlebot." If no such entry exists, it will obey the first entry with a User-agent of "*"."

To say it another way: if there is a "User-agent: Googlebot" entry, Googlebot will never read the "User-agent: *" section.

And thus my duplicate files (for printing and mailing articles) were no longer excluded from being read by Googlebot.

So I copied the complete "User-agent: *" section into the "User-agent: Googlebot" section, and I hope my site will return soon.

I encourage anyone to check their robots.txt for the same possible problem. I had to learn it the hard way.
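Python's standard-library robots.txt parser implements the same group-selection rule Google describes (a crawler obeys only the most specific matching User-agent group), so the "time bomb" can be reproduced directly. The file contents below are just the example filenames from this post:

```python
import urllib.robotparser

# robots.txt with a Googlebot-specific group added alongside the
# original wildcard group -- the situation described above.
robots_txt = """\
User-agent: *
Disallow: /dup1.php

User-agent: Googlebot
Disallow: /someotherpage.html
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot matches its own group and ignores the wildcard group,
# so the duplicate page is suddenly crawlable again:
print(rp.can_fetch("Googlebot", "/dup1.php"))            # True
print(rp.can_fetch("Googlebot", "/someotherpage.html"))  # False

# Crawlers without a group of their own still obey the wildcard rules:
print(rp.can_fetch("OtherBot", "/dup1.php"))             # False
```

The fix described above amounts to repeating every wildcard Disallow line inside the Googlebot group, since the two groups are never merged.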


5:30 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

I notice the same thing regarding the filter, but not for duplicate content; it just shows our newer subdomains, whereas the standard search shows our results as they were six months ago, with the main subdomains we had then.

We do have tons of duplicate content, since we are a news site and use agencies like Reuters, AP, etc. But we run a huge amount of unique content as well. I don't think this has to do with duplicate content... very odd.

bluegill catcher

5:31 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

I have never used any robots.txt tags or files in any of my HTML from 2001 to the present.
I just use standard meta tags for description, keywords, and title.


6:05 pm on Sep 24, 2005 (gmt 0)

WebmasterWorld Senior Member caveman is a WebmasterWorld Top Contributor of All Time 10+ Year Member

I agree with Shri. The dup issue that steveb outlined so well explains some of what I see, but not all of it.

There are cases where established site homepages and subpages are holding their ranking for one phrase but dropping out of the SERPs for another closely related phrase (when the site previously ranked for both) ... and where there is no evidence of dup content filters playing a role where pages dropped out.

They've tweaked something else IMO. Possibly related to linking/anchor text/kw patterns.


6:16 pm on Sep 24, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member

almost looks like a template filter...

not the content that's filtered but repeated templates/navigation... almost as though it's NOT bothered about the content that goes along with it...


6:25 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

Strange: my main travel site is hijacked by a religious site. The religious site is an old site from 1998, no PageRank, and "Powered by Jesus Christ" according to their logo. :) They copied the whole Bible and my site. No AdSense or anything, so somehow I think they copied the whole DMOZ for no reason. I can't find my sitemeter on the pages, but the rest of my site is under their URL, including my affiliate links and logos. I have been getting 20% of my usual traffic since 22 September. I wonder if some of my sales are coming from them as well.
I am not religious myself, so I can laugh when a thief writes that his site is Powered by Jesus Christ. But now I want to be back in the SERPs. I looked up their address in Whois and mailed them. Maybe they are just ignorant.


6:38 pm on Sep 24, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member

I also confirm that adding &filter=0 brings my website back to its original ranking.
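For anyone who wants to try the same comparison, the parameter is simply appended to an ordinary Google results URL (the query below is a placeholder):

```
http://www.google.com/search?q=your+search+phrase&filter=0
```

As posters here are using it, filter=0 turns off Google's duplicate/similar-results filtering, so comparing the filtered and unfiltered result sets shows which pages are being suppressed by a filter rather than dropped from the index.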

IMHO it's related to one of two things:

1. Links

2. Duplicate content

For me, I have a few duplicators, but they are of such low quality that it's unbelievable to me that Google can't construct an algo that recognizes who is legit and who is not.

The other might be related to links. However, I build theme-related links very, very slowly. Less than 5 a month.

Although it might be one of those 2, neither one is really a "glaringly obvious" problem.

Whatever it is, it better get rolled back.

[edited by: Freedom at 6:49 pm (utc) on Sep. 24, 2005]


6:49 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

Is it possible those duplicators use old sites to do their work? My duplicator has a very old site (1998) and no PR on the pages with dupe content, and still it pushes my PR5 site out of the SERPs with my own content. &filter=0 brings back my site. Even when I try a whole phrase from my homepage, only with &filter=0 can I be found; otherwise the duplicator shows up. I have the suspicion it has something to do with the age of the site, not the age of the page.


6:51 pm on Sep 24, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member

> almost looks like a template filter....

soapystar, that looks to be the case with this site of mine... same template throughout the site. It's possible.

GG - why not request examples from webmasters? Just mention a code to add to those feedback forms!

[edited by: nutsandbolts at 6:52 pm (utc) on Sep. 24, 2005]


6:52 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

Well, all I know is traffic is down from a few hundred thousand uniques a day to 20-40 or so... who are probably the ones who visit every day.

We do have thousands of links, since we often break news or media, so sites link to us in the hundreds each week, often in a very short space of time. Plus, as I said, we do have thousands of pages of duplicate stories, but that is the only way you can cover certain world events. And although we run a lot of original content as well, we sometimes license that out too...

I do hope it changes, though, or we will be in some trouble. You just don't realise how dependent you are on one company. Guess this is a wait and see.

I also had a look on Alexa (I know, flaky, but it gives a rough idea). I noted all our peers and similar sites have followed us in a big drop in traffic over the last few days.

[edited by: FattyB at 6:54 pm (utc) on Sep. 24, 2005]


6:52 pm on Sep 24, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member

I always thought a duplicate content penalty would apply to a certain page, not the entire site, which doesn't explain a site-wide drop in rankings. However, "link spam" could explain a sitewide drop in rankings better than a duplicate content penalty does.

Does Google think I am a Link spammer from scrapers?


7:00 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

> Does Google think I am a Link spammer from scrapers?

Maybe when the domain name of the scraper was registered a long time ago? It can't be the quality of the scrapers.


7:03 pm on Sep 24, 2005 (gmt 0)

WebmasterWorld Senior Member steveb is a WebmasterWorld Top Contributor of All Time 10+ Year Member

"Am I wrong?"

Um, yeah.

I don't think links, or templates, or anything like that has the slightest thing to do with this.

As mentioned above, sites seem to manage to hold onto (or at least not drop much for) some searches, while being dropped hundreds of spots for most things (and seldom ever gone completely out of the top 1000). Also, pages on a domain that have not been copied in any way (like those built a few days ago) also have a mega-drop in rankings, from #1 when not filtered to down hundreds in the regular search.

This is domain related. Specific pages don't have to be copied to be filtered. At the same time, the ridiculously inflated page counts seem to always exist, and it appears (I'd like to hear any exceptions) that you always have to be over 1000 pages, meaning you can never check to see what any of these phantom pages are supposed to be.

It seems awfully advanced for Google to recognize that a domain has some high threshold of copying by other domains, and thus gets filtered for almost all searches -- although this could be the same sort of ill-conceived notion as the establishment of the Supplemental index.

In any case, I don't think people should go too far afield with this, or read too much into it in tin hat ways. &filter=0 corrects the problem... in my experience, it *always* corrects it. That one bit of information should tell Google how they massively screwed up, and tell them what they need to do to fix it. If it is an overall domain level of content theft that triggers it, it is doubtful that we as webmasters can do much of anything about it, since by definition the content theft will be widespread, and more importantly, in most cases HAVING THE STOLEN CONTENT REMOVED WILL HAVE NO EFFECT, because it is in the supplemental index (in most cases) and deleting supplemental pages does not get them deleted from the supplemental index.

Google Guy(s) and Google Gal(s), you know what you did. Stop doing it. It accomplished nothing positive. The results are virtually unchanged... except you are filtering out many of the most respected (and stolen from) domains in every niche.


7:04 pm on Sep 24, 2005 (gmt 0)

10+ Year Member

Date of registry has nothing to do with it IMO.

My site, scraped and now missing, was registered by me in '98. Hard to believe the 250 sites that are listed instead of me were registered before then. I'll bet not one of them was.

Can it be so difficult to sort this out?

What does Google expect us to do, rewrite the site for every update?

This 1014 message thread spans 68 pages.
