Forum Moderators: open
Normally the site grows at a tempo of 200 to 500 pages a month indexed by Google and others ... but since about a week ago I noticed that my site was losing about
5,000 to 10,000 pages a week in the Google index.
At first I simply presumed that this was the unpredictable Google flux, until yesterday, when the main index page from www.widget.com disappeared completely out of the Google index.
The index-page was always in the top-3 position for our main topics, aka keywords.
I tried all the techniques to find my index page, such as: allinurl:, site:, direct link etc ... etc, but the index page has simply vanished from the Google index.
As a last resort I took a special chunk of text, which can only belong to my index page: "company name own name town postcode" (which is a sentence of 9
words), from my index page and searched for this in Google.
My index page did not show up, but instead 2 other pages from other sites showed up as having this information on their page.
Let's call them:
www.foo1.com and www.foo2.com
Wanting to know what my "company text" was doing on those pages I clicked on:
www.foo1.com/mykeyword/www-widget-com.html
(with mykeyword being my site's main topic)
The page could not load, and the message:
"The page cannot be displayed"
appeared in my browser window.
Still wanting to know what was going on, I clicked "Cached" on the Google SERPs ... AND YES ... there was my index page, as fresh as it could be, updated only yesterday by Google itself (I have a daily date on the page).
Thinking that foo was using a 301 or 302 redirect, I used the "Check Headers Tool" from webmasterworld, only to get a code 200 for my index page on this other site.
So, foo must be using a meta redirect ... very quickly I made a little robot in Perl using LWP, adding a little code that would recognize any kind of redirect.
Fetched the page, but again got a code 200 with no redirects at all.
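The original robot was written in Perl with LWP; here is a minimal Python stand-in for the same check (my own sketch, not the poster's code), flagging both an HTTP redirect status and a meta refresh hidden in the HTML body:

```python
import re

# Matches a <meta http-equiv="refresh" ...> tag anywhere in the HTML.
META_REFRESH = re.compile(
    r'<meta[^>]+http-equiv\s*=\s*["\']?refresh["\']?[^>]*>', re.IGNORECASE)

def find_redirect(status_code, html):
    """Return a short description of any redirect found, or None."""
    if status_code in (301, 302, 303, 307, 308):
        return f"HTTP {status_code} redirect"
    match = META_REFRESH.search(html)
    if match:
        return "meta refresh: " + match.group(0)
    return None
```

Run against foo's page, a check like this would report nothing: status 200 and no meta refresh tag, exactly the puzzling result described above.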
Thinking foo's site might be up again, I tried once more to load foo's page with IE, Netscape and Opera, but always got:
"The page cannot be displayed"
Tried it a couple of times with the same result: LWP can fetch the page, but browsers cannot load any of the pages from foo's site.
Wanting to know more I typed in Google:
"site:www.foo1.com"
to get a huge load of pages listed, all constructed in the same way, such as:
www.foo1.com/some-important-keyword/www-some-good-site-com.html
I also found some more of my own best-ranking pages in this list, and after checking the Google index, all of those pages from my site have disappeared from it.
None of the pages found using "site:www.foo1.com" can be loaded with a browser, but they can all be fetched with LWP, and all of those pages are cached in their original form in the Google cache under foo's cache link.
I have sent an email to Google about this and am still waiting for a response.
mail it to myself (to get a postmark date)
Be extra thorough: take the sealed envelope to a notary public, have them stamp and sign the envelope OVER the sealed flap, put a piece of glass-tape over the notary seal, and THEN have it sent to yourself via certified mail.
Or, better yet, register the contents of the CD with the copyright office. ;)
So, if this is the technique for hijacking, what is the actual trigger that causes google to think the hijacker's page is the real one? Is it the fact that the hijacker's page is identical? If so, couldn't you beat this by periodically varying the content of your own page (say, by altering body text slightly or changing page titles slightly)?
And this next statement I don't follow:
"Thats where I think the Google-Bug is, as soon as I change the content of my page, my page becomes the newer page and the hijacker becomes the older page ... so for the duplicate content filter the spammer becomes the oldest page on the net and the updated original page is deleted from the Google-Index"
What I don't follow is this: if he changes the content of his page, how can a duplicate content filter apply? They're no longer identical copies and, logically, there should be no duplicate content penalty?
But, back to the first question, what is it specifically that causes google to think the bogus page is the real one?
The trigger is that Google is treating a meta-refresh like a 302-Found (or Moved Temporarily) redirect. As described by the HTTP/1.x specifications, the effect of a 302 is to cause Googlebot to index the URL of the page containing the 302 with the content of the page the 302 redirects to.
So, changing your page content won't help, because it is your own content that is displayed with their URL.
This is the result of Google treating meta-refresh as a 302, rather than as a 301. With a 301, the new URL is indexed along with its content, so there is no problem.
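The asymmetry described above can be written down as a toy rule. This is a deliberate simplification of the behavior described in this thread, not Google's actual code:

```python
# Toy model of the indexing behavior described above: given a redirect
# from source_url to target_url, which URL ends up listed with the
# target page's content?
def indexed_url(status, source_url, target_url):
    if status in (302, "meta-refresh"):
        # Meta refresh treated as a 302: the *source* URL is kept and
        # paired with the target's content -> the hijack.
        return source_url
    if status == 301:
        # 301: the *target* URL is indexed with its own content -> no problem.
        return target_url
    raise ValueError("not a redirect")
```

Under this rule, changing the content of the target page changes nothing about which URL gets listed, which is why updating your own page cannot break the hijack.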
I posted this "thinking out loud" in another thread, but let me try again: If anyone has had this happen to them, what is the effect if you add a 301 redirect to the end of this redirection chain? In other words, replace your hijacked page with a 301-Moved Permanently redirect to a copy of your page. I'm wondering if this final redirect might nullify the meta-refresh being treated as a 302, and cause Googlebot to decide that the final page URL should be listed with its content. There are risks, of course: it might be a good idea to only present the 301 to Googlebot, and not to other SE spiders, if your ranking is OK in those other SEs' listings. However, if someone is suffering from this problem and has nothing to lose by trying it, this would be an interesting experiment.
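The "present the 301 only to Googlebot" idea could be sketched roughly as below. This is a Python illustration of the experiment, not a recommendation; the User-Agent substring check and the canonical URL are my own assumptions:

```python
# Sketch of the suggested experiment: answer Googlebot with a 301 to a
# copy of the page, and serve the page normally to everyone else.
# CANONICAL is a hypothetical URL; real cloaking-style checks carry risk.
CANONICAL = "http://www.widget.com/index-copy.html"

def respond(user_agent):
    """Return (status, headers) for an incoming request."""
    if "Googlebot" in user_agent:
        return 301, {"Location": CANONICAL}
    return 200, {"Content-Type": "text/html"}
```

A more robust check would verify the crawler's IP range rather than trust the User-Agent string, which is trivially spoofed.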
I hope Google will fix the root problem, but this might be a work-around if Googlebot gives precedence to the last redirect they encounter.
Jim
markus007 -
I'm not sure either ... as I'm not a lawyer. However, I'm sure there are actions you can take against a company that knowingly allows other sites to do something that can hurt someone's site, resulting in a loss of revenue. This issue is not new ... but within the past few weeks it has just BALLOONED on webmasterworld. I really do not think Google can say they had no idea.
And for the record, my site as well is a PR6 and I was brought to my knees by a lower PR site.
I don't think it is too much to ask for Google to respond. If they can't fix their algo. immediately... then they could AT LEAST provide a report page SPECIFICALLY for this issue and actually remove/penalize the offending sites/pages from the index! Is that really too much to ask?
This is one of the largest issues I've seen with Google. Yes, algorithm changes suck because they require people to adjust their sites. But when something comes along that can destroy online companies that have been around for YEARS with high PR etc., and Google (& other search engines) do not take immediate action - that is WRONG!
presuming my site is "www.example.com" and the hijacked page is:
[example.com...]
and/or
[example.com...]
To All Members:
1/ Can someone post any kind of channel or e-mail address to point Google staff to this thread, or tell me where I can reach them?
e-mails to "webmaster at google.com" and "help at google.com" sent last week are still unanswered.
2/ I am receiving tons of Sticky-Mails asking me to send the URL of the hijacking site, mails from posters in this thread but also many from non-posters.
I currently will not respond by sending the URL, as I think there is some danger that, excuse me for this, would-be hijackers want to see the complete code this URL is using, given that the Google-bot and algo are currently so easily fooled by such a simple trick.
I still believe in Google and hope that in one way or another they will get notified, fix the problem and close the door on all future uses of this trick.
[edited by: DaveAtIFG at 6:46 pm (utc) on Sep. 11, 2004]
[edit reason] Exemplified URLs [/edit]
There is no contract between your site and Google, therefore you have no grounds to sue.
There is no need for a contract to exist. If your neighbor were to fill a giant water-gun with weed-killer and spray your lawn with it, you could sue. Strangely, neighbors are not required to sign contracts with each other.
Suing Google would force them to fix the problem. However, it would be expensive.
I think it's worth noting that at least one webmaster here has taken legal action with respect to copyright.
SUGGESTION
If you have no understanding of law, refrain from quoting it.
Kaled.
Precisely what happened to our sites.
Kaled.
If we agree to your suggestion not to disseminate MSB, the forum would be virtually empty. LOL
If anyone has had this happen to them, what is the effect if you add a 301 redirect to the end of this redirection chain? In other words, replace your hijacked page with a 301-Moved Permanently redirect to a copy of your page.
Geocities page =[meta refresh]> NoLongerActive.widget.com =[301]> www.widget.com
The above resulted in the Geocities page showing www.widget.com's title, description, cache, backlinks and PR. The real www.widget.com page was removed from Google, and most of my site was dropped in the same way Marcello described in post #1. I ended up 404ing NoLongerActive.widget.com, which at least got www.widget.com back in Google, but not the pages that were dropped.
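The chain above can be walked through with a toy resolver (my own simplification of the behavior reported in this thread; the Geocities hostname is illustrative):

```python
# Toy resolution of a redirect chain under the rule described here:
# a meta refresh is treated like a 302 (the source URL keeps the final
# content -> hijack), while a 301 passes indexing on to its target.
def resolve_chain(chain):
    """chain: list of (url, redirect_kind or None); last entry is the real page.
    Returns the URL that ends up listed with the final page's content."""
    for url, kind in chain[:-1]:
        if kind in ("meta-refresh", 302):
            return url            # hijack: this source URL keeps the content
        # a 301 hands indexing on to the next hop
    return chain[-1][0]           # no hijack: the real page lists itself

chain = [
    ("http://geocities.example/page.html", "meta-refresh"),
    ("http://NoLongerActive.widget.com/", 301),
    ("http://www.widget.com/", None),
]
```

Under this rule the Geocities URL wins the whole chain, matching the outcome reported above; a chain of pure 301s would resolve to www.widget.com with no harm done.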
E-mails to Google about the bug and the resulting penalty got me:
- 2 canned responses saying "we don't comment on site penalties"
- 1 canned response saying "we'll pass this on to our engineers"
- No reply when I asked for an update a month later (July)
This means there are still two parties left that can stop this abuse:
1. The offender.
2. The judge.
Contact the offender with the explicit instruction to remove all these refresh meta tags immediately.
If you don't get a response within, let's say, 1 business day, then take legal action.
... wondering how fast Google will change its policy when the first lawsuits are filed ;-)
the effect of a 302 is to cause Googlebot to index the URL of the page containing the 302 with the content of the page the 302 redirects to.
<?php
// Sends a "Location" header, which PHP answers with a 302 by default.
$location = 'ht*p://somesite.com/';
header('Location: ' . $location);
exit; // stop here so nothing else is output after the redirect
?>
The above also returns a 302, from a URL like ht*p://www.mysite.com/link.php?url=somepage. It is commonly (and innocently) used in PHP redirect scripts that sit in their own (normally unseen) file. I use it myself to help track outgoing clicks. Surely link.php isn't credited with the content of all the pages it links to?
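For comparison, the innocent click-tracking pattern looks like this as a Python stand-in for the PHP link.php above (the in-memory log is a placeholder for a real log file or database):

```python
# Sketch of an innocent click-tracking redirect: record the outgoing
# click, then answer with a 302 pointing at the real destination.
clicks = []  # stand-in for a log file or database

def track_and_redirect(target_url):
    """Record the click and return (status, headers) for a 302 redirect."""
    clicks.append(target_url)
    return 302, {"Location": target_url}
```

The open question in the quote above is exactly this: the tracker and the hijacker both emit a 302, so any 302-based attribution rule cannot tell them apart by the response alone.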