
Forum Moderators: open


Dupe content checker - 302's - Page Jacking - Meta Refreshes

You make the call.


Marcello

11:35 am on Sep 7, 2004 (gmt 0)

10+ Year Member



My site, let's call it www.widget.com, has been in Google for over 5 years, steadily growing year by year to about 85,000 pages, including archived forums and articles, with a PageRank of 6 and 8,287 backlinks in Google. No spam, no funny stuff, no special SEO techniques, nothing.

Normally the site grows at a rate of 200 to 500 pages a month indexed by Google and others ... but starting about a week ago I noticed that my site was losing about 5,000 to 10,000 pages a week in the Google index.

At first I simply presumed that this was the unpredictable Google flux, until yesterday, when the main index page of www.widget.com disappeared completely from the Google index.

The index-page was always in the top-3 position for our main topics, aka keywords.

I tried all the techniques to find my index page, such as allinurl:, site:, direct link, etc. ... but the index page has simply vanished from the Google index.

As a last resort I took a special chunk of text which can only belong to my index page: "company name own name town postcode" (a sentence of 9 words), and searched for it in Google.

My index page did not show up, but instead 2 other pages from other sites showed up as having this information on their page.

Lets call them:
www.foo1.net and www.foo2.net

Wanting to know what my "company text" was doing on those pages I clicked on:
www.foo1.com/mykeyword/www-widget-com.html
(with mykeyword being my site's main topic)

The page would not load, and the message:
"The page cannot be displayed"
appeared in my browser window.

Still wanting to know what was going on, I clicked "Cached" on the Google SERPs ... AND YES ... there was my index page, as fresh as could be, updated only yesterday by Google itself (I have a daily date on the page).

Thinking that foo was using a 301 or 302 redirect, I used the "Check Headers Tool" from WebmasterWorld, only to get a code 200 for my index page on this other site.

So foo must be using a meta redirect, I thought ... I quickly made a little robot in Perl using LWP, adding a little code that would recognize any kind of redirect.

Fetched the page, but again got a code 200 with no redirects at all.

Thinking foo's site was up again, I tried once more to load foo's page with IE, Netscape and Opera, but always got:
"The page cannot be displayed"

Tried it a couple of times with the same result: LWP can fetch the page, but browsers cannot load any of the pages from foo's site.

Wanting to know more I typed in Google:
"site:www.foo1.com"
to get a huge load of pages listed, all constructed in the same way, such as:
www.foo1.com/some-important-keyword/www-some-good-site-com.html

I also found some more of my own best-ranking pages in this list, and after checking, all of those pages from my site have disappeared from the Google index.

None of the pages found using "site:www.foo1.com" can be loaded with a browser, but they can all be fetched with LWP, and all of them are cached in their original form in the Google cache under foo's cache link.

I have sent an email to Google about this and am still waiting for a response.

Marcello

9:20 am on Sep 10, 2004 (gmt 0)

10+ Year Member



After further investigation I have now found out that foo.net hijacks good-performing pages with a simple "http-equiv=refresh" Meta-Tag.

Fetched with LWP, here is what the HTML head of their pages looks like:
<html>
<head>
<title>Copied Title of the Hijacked Page</title>
<meta http-equiv="refresh" content="0; url=http://www.widget.com/">
<meta name="robots" content="follow, noindex">
</head>
<body bgcolor="#FFFFFF" text="#000000">
..... Further HTML Code .....
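This explains why both the header tool and the LWP robot reported a plain 200: a meta refresh lives in the response body, not in the HTTP headers, so any redirect check has to look in both places. A minimal sketch of such a check (in Python rather than the original Perl/LWP; the function names are my own):

```python
import re

def find_meta_refresh(html):
    """Return the target URL of a <meta http-equiv="refresh"> tag, or None."""
    m = re.search(
        r'<meta[^>]+http-equiv=["\']?refresh["\']?[^>]*url=([^"\'>\s]+)',
        html, re.IGNORECASE)
    return m.group(1) if m else None

def classify_response(status, location, body):
    """Decide how (if at all) a fetched page redirects the visitor."""
    if status in (301, 302, 303, 307, 308):
        return ("http-redirect", location)   # visible in the headers alone
    target = find_meta_refresh(body)
    if status == 200 and target:
        return ("meta-refresh", target)      # invisible to a header-only check
    return ("no-redirect", None)
```

A header checker only ever sees the first case; foo's pages fall into the second, which is why they looked like ordinary 200 responses.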

I simply cannot understand why Google replaces a good-performing index page like my:
"www.widget.com"
with a meta-refresh (redirect) page such as:
"www.foo.net/some-keyword/www-widget-com.html"

This is really a huge BUG in Google's algo

kaled

11:27 am on Sep 10, 2004 (gmt 0)

WebmasterWorld Senior Member kaled is a WebmasterWorld Top Contributor of All Time 10+ Year Member



This subject has been discussed in depth, but you may have discovered a new variation.

[webmasterworld.com...]

There was also a more recent thread but I don't have a link to it.

It sounds as though the site may be cloaking, so that it only delivers content to Googlebot and refuses browsers - either that, or the site is offline.

Try the following link. You could also try setting the user agent to Googlebot to see what response you get. Alternatively, I think you can change the user agent strings in Firefox via about:config - never tried it.
http://www.rexswain.com/httpview.html
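The experiment kaled suggests — fetch the same URL with two different User-Agent strings and compare what comes back — can be sketched like this (Python; the UA strings and the 200-vs-non-200 heuristic are illustrative assumptions, not anyone's confirmed method):

```python
import urllib.request
import urllib.error

GOOGLEBOT_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"  # illustrative
BROWSER_UA = "Mozilla/5.0"                                        # illustrative

def fetch_status(url, user_agent):
    """Fetch a URL with a given User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def is_cloaking(status_for_googlebot, status_for_browser):
    """Heuristic: a site that serves a crawler (200) but refuses ordinary browsers."""
    return status_for_googlebot == 200 and status_for_browser != 200
```

If the crawler UA gets a 200 while the browser UA fails, the site is treating crawlers and visitors differently — consistent with what Marcello saw (LWP could fetch the pages, browsers could not).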

Hope this helps

Kaled.

Macro

11:37 am on Sep 10, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



This sort of activity has got to be criminal. Why isn't it? Any webmaster who's devoted considerable time and effort (not to mention expense) to his site is vulnerable, and the perpetrators pay no cost. Their throwaway domain will be, er, thrown away when they are caught, and the only loser is the innocent party.

DaveN

11:38 am on Sep 10, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



This is really a huge BUG in Google's algo .... simple answer is YES it is....

DaveN

Macro

11:42 am on Sep 10, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



simple answer is YES it is.

And they've apparently known about it for ages (going by other posts and webmasters who've complained).

Why haven't they done anything about it? We know their penchant for automated solutions over hand editing, but considering that it's a "BIG" problem, surely they'd have found an automated solution with some alacrity?

webdude

12:07 pm on Sep 10, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



The other thread...

[webmasterworld.com...]

And after searching I found many threads, all with different variations of the same problem.

[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]
[webmasterworld.com...]

Many more that were kind of related. And lots of stuff on the Yahoo forum too.

So what does it take to fix this? I could very easily do this to all my competition, not to mention all my outgoing links.

Macro

12:10 pm on Sep 10, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I could very easily do this to all my competition

Apparently so. And all you need is a throwaway domain with higher PR than theirs. No HTML, PHP or hacking skills required. Simply cut and paste a meta tag from the examples and replace the URL in it.

I just went through all those same threads that you found and it seems the problem dates back to last year (or, at least, Jan this year).

Google, how come you are so silent on this?

Critter

1:34 pm on Sep 10, 2004 (gmt 0)

10+ Year Member



Google could probably prevent this sort of thing by assigning a "priority date" to pages it crawls. If Googlebot then found duplicate content on the net, it would regard the older source as authoritative and remove the *newer* duplicate content.

Google's engineers may have been drinking decaf at that meeting. :)
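Critter's "priority date" idea amounts to keeping, for each content fingerprint, the earliest URL ever seen and discarding later copies. A toy sketch of that rule (Python; the data shapes and names are my own, not anything Google has described):

```python
import hashlib

# content fingerprint -> (first_seen_date, canonical_url)
seen = {}

def record_page(url, content, crawl_date):
    """Treat the earliest-crawled URL as authoritative for duplicate content."""
    key = hashlib.sha1(content.encode()).hexdigest()
    if key not in seen or crawl_date < seen[key][0]:
        seen[key] = (crawl_date, url)
    return seen[key][1]  # the URL that would stay in the index
```

Under this rule a hijacker's newer copy is the one dropped, never the original. (ISO-format dates compare correctly as plain strings.)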

Macro

1:47 pm on Sep 10, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



If Googlebot then found duplicate content on the net, it would regard the older source as authoritative

That's what I thought they did, till I saw one of my internal pages copied verbatim and the copy ranking higher than ours in the SERPs (the copy does have a higher PR). <sigh> Just sent them a cease and desist.

This 389-message thread spans 39 pages.