Forum Moderators: open


Dupe content checker - 302's - Page Jacking - Meta Refreshes

You make the call.


Marcello

11:35 am on Sep 7, 2004 (gmt 0)

10+ Year Member



My site, let's call it www.widget.com, has been in Google for over 5 years, steadily growing year by year to about 85,000 pages, including forums and archived articles, with a PageRank of 6 and 8,287 backlinks in Google. No spam, no funny stuff, no special SEO techniques, nothing.

Normally the site grows at a tempo of 200 to 500 pages a month indexed by Google and others ... but about a week ago I noticed that my site was losing about 5,000 to 10,000 pages a week from the Google index.

At first I simply presumed that this was the unpredictable Google flux, until yesterday, when the main index page of www.widget.com disappeared completely out of the Google index.

The index page was always in the top 3 positions for our main topics, a.k.a. keywords.

I tried all the techniques to find my index page, such as allinurl:, site:, a direct link, etc., but the index page has simply vanished from the Google index.

As a last resort I took a special chunk of text which can only belong to my index page: "company name own name town postcode" (a sentence of 9 words), and searched for it in Google.

My index page did not show up; instead, 2 other pages from other sites showed up as having this information on their page.

Let's call them:
www.foo1.com and www.foo2.com

Wanting to know what my "company text" was doing on those pages I clicked on:
www.foo1.com/mykeyword/www-widget-com.html
(with mykeyword being my site's main topic)

The page could not load, and the message:
"The page cannot be displayed"
appeared in my browser window.

Still wanting to know what was going on, I clicked "Cached" on the Google SERPs ... AND YES ... there was my index page, as fresh as could be, updated only yesterday by Google itself (I have a daily date on the page).

Thinking that foo was using a 301 or 302 redirect, I used the "Check Headers Tool" from WebmasterWorld, only to get a code 200 for my index page on this other site.

So, foo must be using a meta redirect ... very quickly I made a little robot in Perl using LWP, adding a little code that would recognize any kind of redirect.

I fetched the page, but again got a code 200 with no redirects at all.
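(For anyone wanting to try the same experiment, here is a minimal sketch of that kind of check, in Python rather than Perl/LWP. The helper names and regex are my own, and any URL you pass in is a placeholder, not the actual site from this thread.)

```python
import re
import urllib.error
import urllib.request

# Rough match for an HTML meta refresh tag.
META_REFRESH = re.compile(
    r'<meta[^>]+http-equiv=["\']?refresh["\']?[^>]*>', re.IGNORECASE)

def classify(status, headers, body):
    """Report which kind of redirect (if any) a response contains."""
    if 300 <= status < 400 and headers.get("Location"):
        return "http-redirect"
    if META_REFRESH.search(body):
        return "meta-refresh"
    return "none"

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes urllib report the 3xx instead of following it.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def fetch_and_classify(url):
    """Fetch a page without following redirects and classify the result."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(url)
        status = resp.status
        headers = dict(resp.headers)
        body = resp.read().decode("utf-8", "replace")
    except urllib.error.HTTPError as e:  # 3xx surfaces here as an error
        status, headers, body = e.code, dict(e.headers), ""
    return classify(status, headers, body)
```

A code 200 with neither a Location header nor a meta refresh tag would match what Marcello saw: no redirect at all at the HTTP or HTML level.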

Thinking foo's site was up again, I tried once more to load the page and other pages from foo's site with IE, Netscape, and Opera, but always got:
"The page cannot be displayed"

Tried it a couple of times with the same result: LWP can fetch the page, but browsers cannot load any of the pages from foo's site.

Wanting to know more I typed in Google:
"site:www.foo1.com"
to get a huge load of pages listed, all constructed in the same way, such as:
www.foo1.com/some-important-keyword/www-some-good-site-com.html

I also found some more of my own best-ranking pages in this list, and after checking the Google index, all of those pages from my site have disappeared from it.

None of the pages found using "site:www.foo1.com" can be loaded with a browser, but they can all be fetched with LWP, and all of those pages are cached in their original form in the Google cache under foo's cache link.

I have sent an email to Google about this and am still waiting for a response.

Marcello

4:10 am on Sep 17, 2004 (gmt 0)

10+ Year Member



While I was sleeping......

Woke up this morning to find an answer from Google to my DMCA complaint about the duplicate copy of my index page at foo.com.

The answer comes down to:
Google has removed "the page in question" from the Google cache and the archived information for this page ... but searches for widget will still return this page's title and URL until their bot has revisited the page.

When asked why "my index page" was banned and replaced by "my index page under foo's URL", the answer is that individual responses are not given, followed by the canned text that penalties can have many causes, followed by the list of "What Not To Do" as stated in the Google guidelines.
(Which I don't worry about, as my site is clean.)

Furthermore, thanks to my DMCA complaints to the other major search engines, the complete site of foo.com is, since this morning (my local time), gone from their SERPs, and "my index page" under "my URL" has reappeared in its former glory position.

Things are starting to move .....

webdude

4:29 am on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



What this entire post comes down to is greed.

You could not be further from the truth if you tried. For me, there is no money involved. The site that I keep referring to in my posts is a hobby site. Unfortunately, when Brett took his chainsaw to this thread, he deleted one of my posts. But I will repeat what I have stated before...

I don't have any competition. This site does not sell products and I receive no monies from any party. All development is done on my own time at no cost to anyone but myself. I just had an urge to start something that I thought would be kind of cool.

So where in the heck is the greed in that?

And, by the way, no I will not stoop to the black hat measures of others to get even. The black hat sites eventually will go down. Past experience has shown that. And I am not going to get caught in that trap. As stated before, google will eventually fix this. I do have some faith. It just takes time.

Besides, I like to sleep good at night.

Greed huh?

dirkz

7:53 am on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



So DMCA complaints work to get rid of redirect spammers stealing your hits?

quotations

9:30 am on Sep 17, 2004 (gmt 0)

10+ Year Member



So DMCA complaints work to get rid of redirect spammers stealing your hits?

It might work on Google but has not so far for me.

On Yahoo, they did a manual permanent ban on my site because I filed the complaint and then wrote to tell me they had done it.

kaled

12:03 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



On Yahoo, they did a manual permanent ban on my site because I filed the complaint and then wrote to tell me they had done it.

That is the perfect example of why a code of conduct is required for search engines, together with an independent complaints body. This example also suggests that legislation is required to back it up.

If that happened to me, I would do my best to kick up a big stink on principle.

Kaled.

webdude

12:11 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



After reading more threads from other forums and some articles on the meta refresh and 302 problem, and because of the way this situation has played out when it comes to my site, I think some of you may be right. There seems to be some sort of cloaking or other tricks going on.

So this brings up another question.

Is there a way to check if a site is cloaking? I read somewhere that you can use firefox, but it is not very effective. I would love to be able to see what googlebot sees when crawling these offending links.

If anyone has any suggestions or knows how this could be done, I would appreciate it. I already tried some spidering tools, different header and html checkers, but the results are pretty much the same. Some of the spider tools just crawl the post 302 or meta refresh link, some tell me they are being blocked.

Marcello glad you are getting some results for your efforts. Keep us informed.

?

dirkz

1:26 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



> Is there a way to check if a site is cloaking?

Look at the Google cache; sometimes it's obvious. You can also try to disguise yourself as Googlebot and access the site, but this only works for user-agent-based cloaking.
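(A rough sketch of that user-agent trick, assuming Python instead of curl. The Googlebot string is the well-known one from the era; as noted, this will not reveal IP-based cloaking, and the function names are my own.)

```python
import urllib.request

GOOGLEBOT_UA = "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"
BROWSER_UA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

def fetch_as(url, user_agent):
    """Fetch a URL while presenting the given User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def looks_ua_cloaked(url):
    # If the body served to "Googlebot" differs from the body served to
    # a browser user-agent, user-agent-based cloaking is likely.
    return fetch_as(url, GOOGLEBOT_UA) != fetch_as(url, BROWSER_UA)
```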

webdude

1:33 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I am looking at the cache. The cache for the meta refresh links shows my home page. Is that proof that it is being cloaked? Or does it mean that googlebot crawls from the first link to the second link and assumes it's the correct page?

As for the 302s, they show my home page as well when I view the cache, even though the 302 points to the offending site's home page. That is what got me going on the cloak thing in the first place.

my2cents

2:46 pm on Sep 17, 2004 (gmt 0)

10+ Year Member



A question and a request.

Request:
Can someone sticky me with an example of a site (any site will do) that is currently being hijacked? I have a friend who is very interested in this topic, and I'm trying to gather as much information for her as possible. She is an attorney and believes there may be something here.

Question:
If the date of the file determines the age and ranking of the page when it is hijacked, couldn't you simply adjust the date on that page (file) to be older than the hijacker's page?

Thanks!

dirkz

3:21 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



To clarify:
The cache says it's your page, but if you click on the search result link and get redirected somewhere else (not your page), then it's cloaked.

webdude

3:29 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Thanks dirkz.

Is there any way to see the cloaked page?

dirkz

3:40 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



> Is there any way to see the cloaked page?

curl -i -A "Googlebot/2.1 (+http://www.googlebot.com/bot.html)" [widget.com...]

(if you have a Unix compatible box and the cloaking is UA based)

Otherwise, you can fiddle around with the UA of your favorite browser (if it's not IE).

webdude

3:46 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I'll give it a try.

DaveAtIFG

4:43 pm on Sep 17, 2004 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



You are unlikely to see anything, webdude.

There are two basic approaches to cloaking, one is user-agent based and easily circumvented, the second is IP based and is extremely difficult to penetrate. I'm confident you would need to visit this hijacker site using a googlebot IP to penetrate the cloak. (High quality cloaks often rely on BOTH IP and user-agent checks.)

Google's cache of a page displays exactly what googlebot saw when it spidered the site. If a hijacked site appears in Google's cache, rest assured that cloaking is involved in this hijack. When googlebot visited, probably identified by IP address and possibly confirmed by user-agent, it was served your page.
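(Purely as an illustration, the serving logic described here might look something like this on the hijacker's side. The IP range, strings, and function names are hypothetical, chosen only to show the check-both pattern; it is not taken from any real site.)

```python
import ipaddress

# Example crawler network, for illustration only.
CRAWLER_NETS = [ipaddress.ip_network("66.249.64.0/19")]

def is_crawler(remote_ip, user_agent):
    """A 'high quality' cloak checks BOTH the IP and the user-agent."""
    ip_match = any(ipaddress.ip_address(remote_ip) in net
                   for net in CRAWLER_NETS)
    ua_match = "Googlebot" in user_agent
    return ip_match and ua_match

def serve(remote_ip, user_agent):
    if is_crawler(remote_ip, user_agent):
        # What the bot (and therefore the cache) sees: the copied page.
        return "copied victim page"
    # What browsers get: nothing useful.
    return "The page cannot be displayed"
```

This is why spoofing only the user-agent, as with the curl trick below, fails against an IP-based cloak: the IP check still routes you to the browser branch.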

Typically, a site using cloaking to serve their own pages (not a hijacker, a simple cloaker) will include a "nocache" metatag on their cloaked pages, to avoid having those pages cached and possibly revealing their cloaking. Since a hijacker is hijacking/cloaking your site, the hijacker cannot add a nocache metatag to your site without hacking into it.

Page jacking has been going on for many years, this is simply a more advanced technique. Is it Google's responsibility to fix it? Not necessarily. But Google's webmaster guidelines are very clearly against cloaking.

There have been similar reports in the Yahoo forum suggesting redirects cause problems. It reportedly affects some sites and not others. I suspect many of these reports are also related to this page-jacking trick. And Yahoo's webmaster guidelines mirror Google's regarding cloaking.

As Marcia suggested in #121, "There has to be a reason and a reward for doing something like this," so high-traffic sites will be the most likely targets, and that probably explains why some sites are affected and not others.

Maia

4:57 pm on Sep 17, 2004 (gmt 0)

10+ Year Member



OK, so you are saying there is no way my site would have shown up in the cache of another site that was simply using a meta refresh and/or 302 redirect to my site?

Because my site did appear in the cache, but the page was redirecting to my index page. Once they removed the link to me, my page still appeared in the cache, but it had been cached before the link was removed.

Patrick, if you are still following this at all, did you check the cache on the pages you unintentionally hijacked?

This 389-message thread spans 26 pages.