Normally the site grows at a rate of 200 to 500 pages a month indexed by Google and others ... but for about a week now I have noticed that my site was losing about 5,000 to 10,000 pages a week in the Google index.
At first I simply presumed that this was the unpredictable Google flux, until yesterday, when the main index page of www.widget.com disappeared completely out of the Google index.
The index page was always in the top-3 positions for our main topics, aka keywords.
I tried all the techniques to find my index page, such as allinurl:, site:, direct links, etc. ... etc., but the index page has simply vanished from the Google index.
As a last resort I took a special chunk of text which can only belong to my index page: "company name own name town postcode" (a sentence of 9 words), and searched for this in Google.
My index page did not show up, but instead 2 other pages from other sites showed up as having this information on their page.
Let's call them:
www.foo1.com and www.foo2.com
Wanting to know what my "company text" was doing on those pages I clicked on:
www.foo1.com/mykeyword/www-widget-com.html
(with mykeyword being my site's main topic)
The page could not load and the message:
"The page cannot be displayed"
was displayed in my browser window
Still wanting to know what was going on, I clicked "Cached" on the Google SERPs ... AND YES ... there was my index page, as fresh as it could be, updated only yesterday by Google itself (I have a daily date on the page).
Thinking that foo was using a 301 or 302 redirect, I used the "Check Headers Tool" from WebmasterWorld, only to get a code 200 for my index page on this other site.
So, foo must be using a meta redirect ... very quickly I made a little robot in Perl using LWP, adding a little code that would recognize any kind of redirect.
Fetched the page, but again got a code 200 with no redirects at all.
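For what it's worth, the kind of check that robot performs can be sketched like this (a Python sketch rather than the original Perl/LWP code; the status code, headers and body here are whatever your fetch library returns):

```python
import re

def detect_redirect(status, headers, body):
    """Classify a fetched page as an HTTP redirect, a meta refresh,
    or neither - the same check the little LWP robot performs."""
    # Any 3xx status with a Location header is a real HTTP redirect.
    if 300 <= status < 400 and "location" in {k.lower() for k in headers}:
        return "http-redirect"
    # A meta refresh hides the redirect in the HTML body instead.
    if re.search(r'<meta[^>]+http-equiv=["\']?refresh["\']?', body, re.I):
        return "meta-refresh"
    return "none"
```

A page that is silently serving a copy of your content would come back as "none" here, with a code 200, exactly as described above.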
Thinking foo's site was up again, I tried once more to load the page and foo's page with IE, Netscape and Opera, but always got:
"The page cannot be displayed"
Tried it a couple of times with the same result: LWP can fetch the page, but browsers cannot load any of the pages from foo's site.
Wanting to know more I typed in Google:
"site:www.foo1.com"
to get a huge load of pages listed, all constructed in the same way, such as:
www.foo1.com/some-important-keyword/www-some-good-site-com.html
I also found some more of my own best-ranking pages in this list, and after checking the Google index, all of those pages from my site had disappeared from it.
None of the pages found using "site:www.foo1.com" can be loaded with a browser, but they can all be fetched with LWP, and all of those pages are cached in their original form in the Google cache under foo's Cache link.
I have sent an email to Google about this and am still waiting for a response.
Fingers crossed, the problem is being resolved. However, before everyone congratulates Google, bear in mind that no admissions, apologies or statements have been forthcoming. This is a very bad sign - a glimpse of the future perhaps.
kaled,
You are absolutely correct. I sent several emails about this problem to several of the Google email addresses and I didn't even get a response. Don't get me wrong: I am glad that whatever was wrong seems to be getting fixed. But...
To date. No response. Not even an acknowledgement.
a glimpse of the future perhaps
Hopefully not. Google may be a fine search engine but is currently in a commanding position as far as the web goes. Anyone whose future relies on the continuing success and progress of the web should hope that the circumstances that led to this thread become few and far between.
For average webmasters who face the prospect of investing their limited resources into a good honest website - whether it be an online business or a site whose presence simply enriches the web experience - the prospect of an unexplained and/or unrecoverable disaster (one of those commonplace and sudden disappearances from Google's SERPs for no apparent reason, for example) is currently a disincentive - and ultimately bad for everyone.
The disincentive would be much reduced if Google and the other major search engines made themselves more openly accessible to webmasters, even if it meant paying a review fee or some such arrangement instead of having to resort to a forum like this. I don't mean moans about sites that drop a few places here and there, but sites that are hit by a bug or whatever one wants to call it.
A situation in which one can get no intelligible response (to a genuine problem) from the "world's greatest search engine" simply isn't good enough, and will become even less acceptable if it holds back the web from maturing out of the "Wild West" ethos that often seems to characterise it. Or is it a former Eastern Bloc kind of thing, where the controlling forces are depressingly unaccountable and impenetrable?
Of course it makes interesting and informative reading, and in some cases a fair end result ensues, but steadfastly pursuing such an issue in a forum thread shouldn't be the method people have to resort to - where sometimes a case is taken up (great!) and other times not.
no admissions, apologies or statements have been forthcoming
I doubt there will be any. If Google acknowledges any shortcoming, someone, somewhere, will file a lawsuit. "I lost thousands of dollars because of your flawed search engine!"
"Why are you suing for millions?"
"Punitive damages! They will never do this again, to anyone!" (Meaning: I didn't do this to get rich but to help my fellow webmasters and teach Google a lesson. Now my conscience won't bother me at all as I enjoy my undeserved windfall. ;) )
[google.com...]
a glimpse of the future perhaps
A glimpse of the present I think. :)
Call me idealistic, but if everyone behaved the same way, including big corporations, the world would be a better place.
Kaled.
One of my sites had been crawled at 01:05 to 01:08 GMT every day since September 8th, but the crawl moved an hour earlier and has run at 00:03 to 00:08 for the last 3 or 4 days.
I changed my links to static pages on Sept. 9th - everything got reindexed since then. A few days ago the cached 302 redirects were rolled back to the Sept. 5th cache (I believe). I added a line to my robots.txt (Disallow: *.php) and changed the 302 redirect to a 301 Permanent, even though I no longer had the actual links on my pages - only direct static ones. Today I see in my logs that someone was looking at the Google-cached redirect pages - I checked and YES, they were recrawled on Sept. 27th and 28th :( What the #()@k!?
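As an aside, "Disallow: *.php" isn't part of the original robots.txt standard, so other crawlers may ignore it. Googlebot does understand wildcards, but the pattern normally needs a leading slash, and a $ anchor if you only want to match URLs that end in .php - something like:

```
User-agent: Googlebot
Disallow: /*.php$
```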
I didn't write the code for the directory. Here is the redirection code:
[PHP]
{
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: $url");
    exit; // stop here so nothing else is output after the redirect headers
    //die('<meta http-equiv="refresh" content="0;url='. $url .'">');
}
[/PHP]
as you can see I changed it from 302 to 301, and then I went back and removed the meta refresh line since the pages were re-cached. I hope it will solve the problem...
Suggestions are appreciated.
gemini
Our site was listed in the directory on Sept. 27-28. Right after, I noticed our main domain page disappear from Google. The page is now cached under their link, dated Sept. 28.
Here is the server header check results:
Server Response: [directorySite.com...]
Status: HTTP/1.1 302 Moved Temporarily
Server: Zeus/4.1
Date: Sat, 02 Oct 2004 05:52:16 GMT
Connection: close
Set-Cookie: hits=++2887+; expires=Sat, 02-Oct-04 05:53:17 GMT
Location: [mysite.com...]
P3p: CP="CAO DSP COR CURa ADMa DEVa OUR IND PHY ONL UNI COM NAV INT DEM PRE" policyref="www.somesite.com/w3c/p3p.xml"
Content-Type: text/html
X-Powered-By: PHP/4.3.1
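A header dump like the one above can also be checked programmatically; here is a rough Python sketch that pulls the status code and the Location target out of such a raw response (the sample text is a trimmed, hypothetical version of the dump above, with example.com standing in for the real domains):

```python
def parse_status_and_location(raw):
    """Extract the HTTP status code and the Location header's value
    from a raw server-response header dump."""
    lines = raw.strip().splitlines()
    # The status line looks like "HTTP/1.1 302 Moved Temporarily".
    status = int(lines[0].split()[1])
    location = None
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "location":
            location = value.strip()
    return status, location

sample = """HTTP/1.1 302 Moved Temporarily
Server: Zeus/4.1
Location: http://www.example.com/"""

status, location = parse_status_and_location(sample)
```

A 302 status combined with a Location header pointing at your domain is exactly the hijack pattern being discussed in this thread.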
Out of the 10 newest listings on this directory, 2 had this problem, even though it seems all ~2000 links in the directory (I randomly checked two dozen) are set up with this method of 302 redirections.
Any suggestions what I should do? Should I contact this site and ask to be removed or wait it out?
One-The problem was fixed.
Two-Apache 1.3x.
Ashdar, if your site is the one I think it is, the site linking to you is also using a meta refresh.
I had two separate directories hijack me recently. I had the first site remove my listing from the directory on September 2.
Both my index page and the page the site had the link on have been recrawled and have newly cached pages; however, when I search for my domain name, Google still returns the link that site used to point at me instead of returning my own URL.
The other directory removed the meta refreshes and ultimately changed his directory to 301s instead of 302s, but it doesn't look like Google has recrawled that page of his directory yet, so I still have hijacked results all over in Google.
I don't know what advice to give you.
I'll tell you that if this happens to me again, I'll try filing a DMCA complaint. Some people have had success with that.
Good luck.
PS-Can I put a link from my site to the page containing the Second Directory's link to me to try to send Googlebot over there faster? Or would that be a bad move in Google's eyes?
Here is something that might be of help (thanks to Vin DSL). If you can edit your .htaccess file, you can add the following code (Apache on Unix-based servers only).
RewriteEngine on
php_flag display_errors off
php_flag register_globals off
RewriteCond %{HTTP_REFERER} ^http://(www\.)?offendingSite\.com/.*$
RewriteCond %{REQUEST_URI} ^/.*$
RewriteRule ^.* - [F]
Replace "offendingSite.com" with the correct domain.
What this does is: any clicks from the offending site itself should get an error (403 Forbidden), but someone searching on Google will still be able to use the offending link to find your site, since the referrer is then Google.
The idea is that the next time Google crawls the offending site, it'll get a 403 error and won't be able to cache your page.
I'm not sure if this will actually work on Googlebot, but it should if it sends requests like a browser does. Perhaps some of the senior members who have experience with crawler bots can confirm this.
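The decision those rewrite rules make can be sketched in Python to see its limits (offendingSite.com is a placeholder, as in the .htaccess example above; note the first branch - a request with no Referer at all, which is how robots typically arrive, is never blocked):

```python
from urllib.parse import urlparse

def is_blocked(referer, blocked_host="offendingSite.com"):
    """Mirror the RewriteCond above: block a request only when its
    Referer header points at the offending directory's domain."""
    if not referer:
        return False  # robots and direct visits send no Referer at all
    host = urlparse(referer).netloc.lower()
    return host in (blocked_host.lower(), "www." + blocked_host.lower())
```

So a visitor clicking through from the offending directory is refused, a visitor arriving from a Google result page gets through, and a crawler with no referrer sails past the rule entirely.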
What's happening with this code right now is: following the link from the directory results in this standard error:
Forbidden
You don't have permission to access / on this server.
--------------------------------------------------------------------------------
Apache/1.3.31 Server at www.mysite.com Port 80
but through google (if you do a search) the link works.
In addition, robots do not follow links directly as users do, they record them and visit them later, therefore no referrer data could sensibly exist. Whilst it is possible that some redirects are followed directly (unlike regular links) there would still be no referrer data.
This does raise an interesting point. Previously I have been almost certain that the fault lies in the indexing service, however, if googlebot is following redirects as browsers do (rather than treating them as links) then this might explain a great deal - in fact, it might explain everything.
I really hate being wrong, but I think my analysis earlier in this thread may have been.
Kaled.
An update: earlier today my index page started appearing once again in Google's cache (dated Oct. 1). But the old hijacked link is still cached and ranks #1 for my keyphrase, whereas the real link isn't even in the top 100.
Going to wait and see if things improve but if not I think I'll write to them to fix this.