
Home / Forums Index / Google / Google SEO News and Discussion
Forum Library, Charter, Moderators: Robert Charlton & aakk9999 & brotherhood of lan & goodroi

Google SEO News and Discussion Forum

Hijackers & 302 Redirects
Not every site that uses redirects is trying to hijack your site.
internet ventures

 10:32 am on Mar 12, 2005 (gmt 0)

After being blamed for hijacking countless times every day, and spending hours sending emails trying to educate people about exactly what this 302 redirect problem is and how to spot it, I have decided to post the information for everyone at WebmasterWorld. Hopefully people will then stop wasting my time and their own.

I would also like to point out that not every site that uses redirects is trying to hijack your site. Redirects have existed throughout the life of the internet without any problems. Webmasters who used redirects never needed to know what a 302 or 301 redirect was; they just used their preferred scripting language to redirect to another page for whatever reason they saw fit. It's only recently that redirects have started to have a detrimental effect on the target URL as far as Google and some other search engines are concerned.
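For readers unfamiliar with the distinction at the heart of this thread: a 301 tells search engines the move is permanent, so credit should pass to the destination, while a 302 says it is temporary, so the source URL stays canonical, which is exactly what makes it abusable. A minimal sketch (the `redirect_type` helper is purely illustrative, not part of any real tool):

```python
def redirect_type(status_line: str) -> str:
    """Classify a raw HTTP status line, e.g. 'HTTP/1.1 302 Found'."""
    code = int(status_line.split()[1])
    if code == 301:
        return "permanent redirect: credit should pass to the destination"
    if code in (302, 303, 307):
        return "temporary redirect: the source URL stays canonical"
    return "not a redirect"
```

The "temporary" semantics are what let a third-party URL that 302s to your page end up treated as the canonical copy of your content.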

When you do an inurl:www.example.com query on Google (to check for hijacking), not every website listed is actually hijacking your site. The way to tell whether your site has been hijacked (intentionally or not) is to run an inurl:www.example.com query on Google; any of the sites listed with exactly the same title, snippet and cache as your webpage are actually hijacking your site. If you click the cache link next to the culprit, you will see an exact copy of your site residing on someone else's URL. That is hijacking (content theft).

The 302 redirect problem boils down to duplicate content. When Google has the same title, snippet and cache (an identical copy of your web page) for your page and for other pages in its index, this trips a duplicate content filter. It's up to Google which page it credits for that content. It's then only a matter of time before all other pages with that exact same content are shifted to the supplemental index.
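The signature described above, identical title and snippet indexed under different URLs, amounts to a simple grouping. This is an illustrative sketch only, not Google's actual filter, and the result-dict shape is an assumption:

```python
from collections import defaultdict

def find_duplicate_clusters(results):
    """Group result dicts by (title, snippet); any cluster with more
    than one URL is the same content indexed under different URLs --
    the hijack signature described above."""
    clusters = defaultdict(list)
    for r in results:
        clusters[(r["title"], r["snippet"])].append(r["url"])
    return [urls for urls in clusters.values() if len(urls) > 1]
```

Feeding it the (hypothetical) scraped results of an inurl: query would surface exactly the culprit pages the post tells you to look for.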

Feel free to add any other comments, also if there are any moderators or senior members that can vouch for what I have said as being correct I would be very grateful.

[edited by: ciml at 12:04 pm (utc) on Mar. 13, 2005]
[edit reason] Examplified [/edit]



 1:00 pm on Mar 15, 2005 (gmt 0)

You are right, this 302 hijack paranoia is out of control. I did an "inurl" check of my site and had someone at this forum tell me my site had been hijacked (by a site listed in the "inurl" results).

I emailed the guy and found that he had neither a 302 nor a meta refresh on it, and he was happy to take down the link.

The problem actually was with GOOGLE! They showed the title, snippet and cache exactly as if it were one of my pages.

Of the 6 possible "302 hijackers", 2 were and 4 were not.


 8:51 pm on Mar 15, 2005 (gmt 0)

Internet Ventures,
Thanks for the lecture. But many of us who have been affected recognize the signs, cause and effect. Granted, I am seeing many posts that do not seem to be legitimate hijacking. Many posters are simply blaming 302s for hijacking their site and subsequently getting booted from Google, when in reality they may have been booted or filtered for other reasons.

But the hijacking problem does exist and I imagine Google is working toward a solution. A possible sign that they may be working on a solution is that the number of tracker2.php urls (identified as causing some of the hijacking a year or so ago) indexed in Google has decreased from over 400,000 in November to only about 15,000 today.

My home page disappeared from Google last June, two weeks after three tracker2 URLs pointing to that page were created by another webmaster. Shortly after my site disappeared, those tracker2 URLs began showing in a site:mysite.com search as if they were truly part of my site. Over time, the number of UNRELATED URLs showing in the site: search increased; at one point, 20 unrelated URLs were listed. Most of these are now gone (thanks to my own efforts of manual URL removal and emails to the respective webmasters), but three remain. My site (all pages) is still gone from the SERPs and indexed as URL only. The penalties for something I did not do are stiff and indefinite.



 2:05 am on Mar 16, 2005 (gmt 0)

Most of these are now gone (thanks to my own efforts of manual url removal and emails to the respective webmasters)

I'm a little confused here as I've never seen anybody spell out exactly what a webmaster is supposed to do if they are a hijack victim.

What do you mean by "manual url removal." Can you somehow remove the offending page? How do you do that?


 3:28 am on Mar 16, 2005 (gmt 0)

Using the Google URL removal tool, you can have any URL that redirects to your page(s) removed from the Google index. You submit the URLs that redirect to your page to the removal tool. The tool instantly verifies that the meta robots tag on the destination page is set to "noindex" (to prevent your pages from being removed by just anyone). But once the URL is submitted and accepted (a matter of seconds), immediately change the meta robots tag back to "index", or you risk that page being dropped the next time Googlebot crawls.
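The verification step described here, checking the destination page for a "noindex" robots meta tag, can be sketched with the standard library. A hypothetical `page_allows_removal` helper, not the removal tool's actual code:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Watches for a <meta name="robots"> tag whose content includes
    'noindex' -- the condition the removal tool checks."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True

def page_allows_removal(html: str) -> bool:
    """True if the page carries a 'noindex' robots meta tag."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.noindex
```

Running this against your own page before submitting would confirm the tag is actually in place.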



 8:28 am on Mar 16, 2005 (gmt 0)

inurl: is not really the way to find hijacks; that only looks for the keyword 'in the URL'.

Do a site:www(dot)yoursite search. There should be nothing but your own pages listed there, with your own URLs; if you see one with your title and description but another URL, that page is hijacking you, whether intentionally or not.

Then do a link:yoursite search. These are pages linking to you.
None of your own pages should be in there; if they are, they are probably related to the same ones hijacking you in site:yoursite.
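The site: check above can be turned into a quick script: flag any URL in the results whose host is not your own. A sketch, assuming you already have the result URLs as a plain list (the `foreign_urls` name is made up; requires Python 3.9+ for `str.removeprefix`):

```python
from urllib.parse import urlparse

def foreign_urls(site_results, my_host):
    """From the URLs a site: query returned, keep those whose host is
    not your own (ignoring a leading 'www.') -- candidates for the
    hijack signature described above."""
    def host(url):
        return urlparse(url).netloc.lower().removeprefix("www.")
    return [u for u in site_results if host(u) != my_host]
```

Anything this returns for a site:yoursite query is a URL Google thinks belongs to your site but doesn't.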


 8:59 am on Mar 16, 2005 (gmt 0)

Note: this is a good test, but the fix is a different matter.
If it's a page displaying your page within a frame, a simple framebuster can fix it.
For a dynamic link, before asking them to remove it I would follow the steps outlined earlier in this thread.

1. Add a noindex tag to the page being hijacked.
2. Use the URL removal tool.
3. Ask the other site to remove the link.
4. Remove the noindex tag.

This allows you to cause a change in the directory before the circle is broken; otherwise you will have to wait for the next crawl to see whether it worked.

internet ventures

 1:00 pm on Mar 16, 2005 (gmt 0)

Thanks crobb305,

You have provided me with an excellent bit of information that I haven't seen mentioned anywhere.

Of course you can use Google's URL removal tool to remove the pages; I just never thought of that. Google does, after all, think that the contents of your page belong on the hijacker's page, so adding a noindex tag should confirm to Google that the removal request is genuine.


 1:55 pm on Mar 16, 2005 (gmt 0)

Why is this problem unique to Google? We can either have thousands of webmasters running around contacting other webmasters to fix something that is unique to Google or... Google could get their act together on this one.

Let me think, which is better for society...

While some of the more aggressive webmasters here might be willing to police this for Google, I'm not. It's their problem, they will fix it or suffer loss of market share.

Our relationship with Google is interesting: they love us because we point out problems with their system. They fix these problems because they affect the results and they need loyal "customers." They don't fix things to make life easy for us unless it affects their results too.

This is embarrassing for Google; they will fix it. They're a great search engine. Obviously the fix is not so simple, and they are probably trying to make sure it does not cause another problem elsewhere.


 2:34 pm on Mar 16, 2005 (gmt 0)

Well put BillyS, but I suspect it will be quite a while until Google gets their act together.

In the meantime, many small businesses that rely on traffic from Google (whose results are getting worse and worse) will suffer, as the blind masses still believe Google is feeding them good results.

I think it is imperative that we find a workable solution for webmasters whose sites have been hit hard by these redirects.

internet ventures

 2:45 pm on Mar 16, 2005 (gmt 0)

From what I have seen on these forums, other search engines do suffer from this problem, just not to the same degree.

I think it's a big problem on Google because of how they handle duplicate content. The last few algorithm updates seem to have revolved around duplicate content; that, coupled with the redirect problem, has hurt innocent webmasters.

Now when Google comes across pages with an extremely high percentage of duplicate content those pages seem to get put into the supplemental index.

So your pages may have been suffering from the 302 redirect/hijack problem for some time (a year or more); then, when Google decided not to tolerate highly duplicate content, your pages got shifted to the supplemental index/penalized.



 7:49 pm on Mar 16, 2005 (gmt 0)

Regarding the Google url removal tool:

It works well for removing redirects to your pages because the removal program checks the meta robots tag on the DESTINATION page (i.e., your page). The way I do the removals: open one IE window and run inurl:mysite.com. Then open a new window and log into the Google URL removal account. Then set the meta robots tag on the destination page to "noindex" and submit the corresponding redirect URL to the removal tool. Within seconds you will get a "success" if the program was able to detect the "noindex" on your page. IMMEDIATELY change the meta robots tag back to "index" (unless you have a few other URLs to the same page you want to submit). If you forget to change the meta tag back, you risk having the intended page removed by all SEs. From start to finish, it literally takes 30 seconds for a single URL.

Regarding the term "hijacking":

Just because other redirects are showing in an inurl:mysite.com search does NOT mean your page has been "hijacked". The inurl: command simply shows the URLs that happen to have the term in them. For my site and many others that were truly victims of hijacking stemming from the tracker2.php scripts, the redirect URLs were actually showing up in a site:mysite.com search. Google actually thought those unrelated URLs were part of my site. Yes, those same URLs were showing in the inurl: search, but that search in and of itself proves nothing, not to me anyway. Other members may have input to the contrary.


internet ventures

 8:04 pm on Mar 16, 2005 (gmt 0)

crobb305, Regarding the term "hijacking":

That's the exact point I wanted to get across when I first started this thread, and now it's been confirmed.


 8:07 pm on Mar 16, 2005 (gmt 0)

Internet Ventures. You are correct. And I apologize for my snappy reply on page one. I was in a mood that day. :)



 5:26 am on Mar 17, 2005 (gmt 0)

[quote]Within seconds you will get a "success" if the program was able to detect the "noindex" on your page.[/quote]

Well, I don't know why it's taking longer with mine, but it told me the URL would be removed within 24 hours. The status bar on the right says "pending." So now I have to wait with the noindex meta tag sitting on my page. Good thing I only excluded Google.



 5:35 am on Mar 17, 2005 (gmt 0)


The "within seconds" that I mentioned was for the submit and to the point where Google says "success". Once you get the "success" you can then change your metatag back to "index". You do not have to keep the tag at "noindex". I literally change the metarobots tag back to "index" within seconds of getting the "success". The url will be set to pending and will be removed within 24 hours, even if you have already changed your tag back.

I would quickly change it back if I were you or you will risk the intended page being removed from Google.


 7:34 am on Mar 17, 2005 (gmt 0)

Oh, thanks. I'll change it back.


 10:57 am on Mar 17, 2005 (gmt 0)

But even if you use this method to remove the URL, won't Googlebot just re-index it again if it's still on the hijacker's website?

How do you get them to remove it from their website if you can't contact them?

internet ventures

 11:36 am on Mar 17, 2005 (gmt 0)

I think in many cases Google will not reindex the redirect again, or if it does, it may take a few months for Googlebot to get round to it. I think this because the pages carrying the redirects are sometimes many links deep into a website, so I doubt they get crawled frequently; also, sometimes the links are no longer present on directory-type sites, as their listings quite often change and your website may not be listed anymore.

With a bit of luck, Google may even be able to deal with the problem by then.

But yes, it would also be advisable to request that the site in question remove the redirect link too. If you cannot contact the site in question, then contact their hosting provider.


 12:03 pm on Mar 17, 2005 (gmt 0)

Has anybody tried filing a DMCA complaint with Google for copyright infringement over this issue? They will then remove the offender from the index.


This is definitely a copyright infringement case, because they are hijacking your content word for word. It might also get Google to sit up and take notice of the extent of this problem if everybody who is affected files a DMCA complaint.


All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved