Forum Library, Charter, Moderators: Robert Charlton & aakk9999 & brotherhood of lan & goodroi

Google SEO News and Discussion Forum

This 467 message thread spans 16 pages; this is page 11.
Google's 302 Redirect Problem

 4:17 pm on Mar 25, 2005 (gmt 0)

(Continuing from Google's response to 302 Hijacking [webmasterworld.com] and 302 Redirects continues to be an issue [webmasterworld.com])

Sometimes, an HTTP status 302 redirect or an HTML META refresh causes Google to replace the redirect's destination URL with the redirect URL. The word "hijack" is commonly used to describe this problem, but redirects and refreshes are often implemented for click counting, and in some cases lead to a webmaster "hijacking" his or her own URLs.

Normally in these cases, a search for cache:[destination URL] in Google shows "This is G o o g l e's cache of [redirect URL]" and oftentimes site:[destination domain] lists the redirect URL as one of the pages in the domain.

Also link:[redirect URL] will show links to the destination URL, but this can happen for reasons other than "hijacking".

Searching Google for the destination URL will show the title and description from the destination URL, but the title will normally link to the redirect URL.
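To make the mechanics concrete, here is a minimal, self-contained sketch of the pattern being described: a click-counting style script that answers every request with a 302, and a client that inspects the redirect instead of following it (the way one might check what a search engine spider sees). The URLs, the local test server, and the `/redirect.cgi?id=42` path are all hypothetical illustrations, not anything from Google.

```python
import http.server
import threading
import urllib.error
import urllib.request

class ClickCounter(http.server.BaseHTTPRequestHandler):
    """Stand-in for a click-counting script: every GET answers with a 302."""
    def do_GET(self):
        self.send_response(302)  # "Found" - a temporary redirect
        self.send_header("Location", "http://www.example.com/destination.html")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def inspect_redirect(url):
    """Fetch a URL without following redirects; return (status, Location header)."""
    class NoFollow(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None  # decline to follow, so the 302 surfaces as an HTTPError

    opener = urllib.request.build_opener(NoFollow)
    try:
        resp = opener.open(url)
        return resp.getcode(), None  # no redirect happened
    except urllib.error.HTTPError as err:
        return err.code, err.headers.get("Location")

# Spin up the fake click counter on a random local port.
server = http.server.HTTPServer(("127.0.0.1", 0), ClickCounter)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
status, location = inspect_redirect(f"http://127.0.0.1:{port}/redirect.cgi?id=42")
print(status, location)  # 302 http://www.example.com/destination.html
server.shutdown()
```

The point of the demo: the redirect URL and the destination URL are two different addresses serving one piece of content, and it is up to the crawler to decide which one is canonical.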

There has been much discussion on the topic, as can be seen from the links below.

How to Remove Hijacker Page Using Google Removal Tool [webmasterworld.com]
Google's response to 302 Hijacking [webmasterworld.com]
302 Redirects continues to be an issue [webmasterworld.com]
Hijackers & 302 Redirects [webmasterworld.com]
Solutions to 302 Hijacking [webmasterworld.com]
302 Redirects to/from Alexa? [webmasterworld.com]
The Redirect Problem - What Have You Tried? [webmasterworld.com]
I've been hijacked, what to do now? [webmasterworld.com]
The meta refresh bug and the URL removal tool [webmasterworld.com]
Dealing with hijacked sites [webmasterworld.com]
Are these two "bugs" related? [webmasterworld.com]
site:www.example.com Brings Up Other Domains [webmasterworld.com]
Incorrect URLs and Mirror URLs [webmasterworld.com]
302's - Page Jacking Revisited [webmasterworld.com]
Dupe content checker - 302's - Page Jacking - Meta Refreshes [webmasterworld.com]
Can site with a meta refresh hurt our ranking? [webmasterworld.com]
Google's response to: Redirected URL [webmasterworld.com]
Is there a new filter? [webmasterworld.com]
What about those redirects, copies and mirrors? [webmasterworld.com]
PR 7 - 0 and Address Nightmare [webmasterworld.com]
Meta Refresh leads to ... Replacement of the target URL! [webmasterworld.com]
302 redirects showing ultimate domain [webmasterworld.com]
Strange result in allinurl [webmasterworld.com]
Domain name mixup [webmasterworld.com]
Using redirects [webmasterworld.com]
redesigns, redirects, & google -- oh my [webmasterworld.com]
Not sure but I think it is Page Jacking [webmasterworld.com]
Duplicate content - a google bug? [webmasterworld.com]
How to nuke your opposition on Google? [webmasterworld.com] (January 2002 - when Google's treatment of redirects and META refreshes were worse than they are now)

Hijacked website [webmasterworld.com]
Serious help needed: Is there a rewrite solution to 302 hijackings? [webmasterworld.com]
How do you stop meta refresh hijackers? [webmasterworld.com]
Page hijacking: Beta can't handle simple redirects [webmasterworld.com] (MSN)

302 Hijacking solution [webmasterworld.com] (Supporters' Forum)
Location: versus hijacking [webmasterworld.com] (Supporters' Forum)
A way to end PageJacking? [webmasterworld.com] (Supporters' Forum)
Just got google-jacked [webmasterworld.com] (Supporters' Forum)
Our company Lisiting is being redirected [webmasterworld.com]

This thread is for further discussion of problems due to Google's 'canonicalisation' of URLs, when faced with HTTP redirects and HTML META refreshes. Note that each new idea for Google or webmasters to solve or help with this problem should be posted once to the Google 302 Redirect Ideas [webmasterworld.com] thread.

<Extra links added from the excellent post by Claus [webmasterworld.com]. Extra link added thanks to crobb305.>

[edited by: ciml at 11:45 am (utc) on Mar. 28, 2005]



 7:21 pm on Apr 24, 2005 (gmt 0)

EFV - again similar.

My homepage is positioned well, and when I do searches I seem to rank OK for the words I look my site up on.

It definitely is the hundreds of keyword combos that I could never think of that my site seems to be struggling on.


 8:04 pm on Apr 24, 2005 (gmt 0)

It definitely is the hundreds of keyword combos that I could never think of that my site seems to be struggling on.

Good point. I tend to think in terms of pages, not keywords or keyphrases, and I often forget that people search not only on "red widgets," but also on "crimson widgets" or "widgets with red coatings" or "les widgets rouges."


 12:33 am on Apr 25, 2005 (gmt 0)

I did get some fresh dates today, but it has not added more pages to the index, and it looks like it spiders the same pages every time. But OK, I was also hit by hijackers and the 302 Google bug. I think I will wait 1-2 months, then transfer the whole 3,000 pages to a new domain and start creating scrapers; it looks like that's the future way to create sites.


 1:06 am on Apr 25, 2005 (gmt 0)

Yeah Zeus, maybe you should scrape your own site and the scraper site will be right on top.

You guys aren't getting all the different keyword combinations anymore. Could this be because Google is using META descriptions now?


 11:12 am on Apr 25, 2005 (gmt 0)

Ahh, it really hurts every time a search result for my main keyword on google.com.my shows up in the logs; it takes me back to before the hijacking.

Why is it that sometimes I see in google.com.my my old rankings from before the hijacking/302 bug? It's a little weird. Can I get anything out of this? Like, now I know that Google is still filtering my site because it was hijacked...


 11:43 am on Apr 25, 2005 (gmt 0)

>>>> You guys aren't getting all the different keyword combinations anymore. Could this be because Google is using META descriptions now?

I would not have thought so. Although Google is displaying the META description as the snippet when it includes the keywords, I can't see how that would be a detriment that stops the rest of the content on the site from being ranked.


 5:35 pm on Apr 25, 2005 (gmt 0)

Hopefully this falls in line with the start of this subject matter.

A website under our control is on a Windows server, so we cannot use an .htaccess file. The website is allyoursite.co.uk, but many years ago it was allyoursite.com and was hosted in the States. As it is strictly for the UK market, we moved it to .co.uk at the request of the owners about a year ago. We gave the .com a META refresh to the .co.uk. We could not drop the .com entirely, as their email system is based on .com and they did not want to lose it.

We have noticed strange things happening lately. (This is what you might consider a well-optimised site, with a lot of really good, highly relevant, quality inbound links because of the historical nature of the website. They spare no expense at doing the right thing and really support the ethical behaviour of good SEO work.) We notice that no backlinks show up except for a few. We also notice a slip down the engines as a result. All inner pages are fine and rank quite well.

We decided that the root of the problem is possibly the .com. As we cannot remove it entirely because of the email system, we decided to give it its own page, implement NOINDEX,NOFOLLOW tags on that page, and add a piece of text directing any would-be visitor to the .co.uk. When we did this, we noticed that the .com had a PageRank of 4/10 and so did the .co.uk; this set alarm bells off that Google had been indexing both pages.

The very next day, the entire .com index page had disappeared from Google, and so had the .co.uk index page; all that was left were the inner pages, which still held their rank and place in Google.

We had no choice but to put things back as they were, and sure enough it was back to normal the next day, except that Google had decided to pick up the DMOZ description for the website instead. It eventually changed back to the right description.

There are still problems with this, and the owner wants it fixed; we are out of ideas. At the last PageRank update nothing happened for us, and still none of the quality backlinks are showing.

Today I went back and implemented the separate page again, this time without the NOINDEX,NOFOLLOW tags. I will find out if this helps at all, but somehow we do not think so.

Would it help if I moved the .com to an Apache server and used .htaccess to tell the engines not to index the .com but the .co.uk instead? Is this right?

Thanks for any help in this matter.


 6:08 pm on Apr 25, 2005 (gmt 0)


Put "noindex,follow" on the ".com" page along with one single straight text link to the "co.uk". Don't do the meta refresh, delete that.

You might want to add a line of text for visitors stating that the new address is "co.uk" and that they should correct their bookmarks as well as links. In fact, not doing the meta refresh will make some people change links that otherwise wouldn't bother.

What about all the old pages on the ".com" domain - do they show 404 or what? If so, no problem. If they meta refresh to the front page of the new site (or any other page), change as outlined above.

  • Straight text link,
  • noindex,
  • follow.

... and no more than that. It's not the rocket science of SEO or web design, but it's safe.
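Spelled out, the placeholder page being recommended could look something like this (the domain name is a stand-in, following the three bullet points above):

```html
<html>
<head>
  <title>This site has moved</title>
  <!-- keep this page out of the index, but let the link be followed -->
  <meta name="robots" content="noindex,follow">
</head>
<body>
  <p>Our new address is
     <a href="http://www.example.co.uk/">www.example.co.uk</a>.
     Please update your bookmarks and any links to us.</p>
</body>
</html>
```

Note there is deliberately no META refresh here: the visitor clicks through by hand, and the single followable link passes the page's credit to the new domain.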


 6:28 pm on Apr 25, 2005 (gmt 0)

Yes, Claus is right. You do want the search engine to follow the link, but not index the starting page.

As for 301 redirects, you can do this on Windows servers. Do a search for "IIS rewrite 301" or "ISAPI redirect 301" or "windows redirect 301".
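For comparison, on an Apache host the same move is usually done with a 301 in .htaccess rather than a placeholder page. A sketch with placeholder domains, assuming mod_rewrite is available:

```apache
# Hypothetical .htaccess: 301 every request on the old .com host
# to the same path on the .co.uk domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.co.uk/$1 [R=301,L]
```

The `R=301` flag is what makes the redirect permanent; without it, mod_rewrite's external redirects default to 302, which is exactly the status this whole thread is about.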


 7:06 pm on Apr 25, 2005 (gmt 0)

I was hijacked - 302 redirects.
Lost all SERPs over the span of a few weeks.
In desperation, and misunderstanding advice here at WebmasterWorld, I removed the URL from Google.

BAD mistake!

I have a 301 redirect to 'www.mydomain' from 'mydomain'.

The Google bots are eating up all my pages - visited daily, and the logs show very thorough roboting of the entire site.

My question is this: is there any way to actually GET indexed now, without waiting out the 6 months? (I did send an email to Google asking this, but received in response a copy/paste that didn't address the question.)

Also, since my site isn't new it has many incoming links - when Google does reindex, will it see me as a new site and penalize it for so many inbound links, or will it remember that the site has been there a long time?


 7:10 pm on Apr 25, 2005 (gmt 0)

>> I did send an email to google asking this, but received in response a copy/paste that didn't address the question <<

The quality of Google responses is abysmal. I have several friends who, upon writing to Google about a particular topic, have received complete nonsense as replies. Not once, but a dozen times in the last 6 weeks or so.

As a test, we set up four people to ask exactly the same question via their support form. The questions were word-for-word identical except for the URL in question. The four answers received (summarised here) ranged from "definitely not" and "no" to "yes" and "absolutely yes".

I kid you not.


 7:32 pm on Apr 25, 2005 (gmt 0)

In SE terms, how can it be that after a site was hijacked and had maybe 10-20 googlebug 302s copying the front page content, googlebot will not visit more than 1-5% of the site? I can't see the logic here.

If a site owner has removed all the hijackers and 302s, with the remove tool or some other way, why is it that the site doesn't reappear and googlebot doesn't come back?


 9:13 pm on Apr 25, 2005 (gmt 0)

I thought I'd never see the day when big G acknowledges the 302 problem and confirms that it is a consequence of a site being weakened by some other penalty. But first things first - GG, you need to have premium tech support; every decent software company has one! I don't even want to mention the types of replies I got from Google while trying to deal with my site's falling out of the index.

Back in December 2004 I lost all my rankings, and in a drastic attempt to address a duplicate content penalty I tried to get rid of all the pages indexed under the wrong domain name (without www) using the infamous URL console. As I soon learned, the URL console treats the domains with and without www as being the same. I can't think of any reasoning used by Google's engineers in deciding to remove pages from both domains when in fact only one is submitted and all associations between the two (meaning 301 or 302 redirects) are non-existent. I wonder if there is something special about the www prefix, or whether the tool always removes pages from the root domain as a "feature". If the latter is true, then one would be able to remove some serious players who allow hosting under their root domain, like dyndns.org for example.

Anyway, it has been more than 90 days and of course my site is still out. GoogleGuy, I'll gladly give you my last name, forum handle, SSN and even my mother's maiden name just to get my site re-enabled. Heck, I will even stop replacing Google search with MSN as the home page on all the computers I can get my hands on :)

I've been sending re-inclusion requests every few weeks without any success. GG, many of us would greatly appreciate it if you could post the exact steps/procedure to submit "accidentally" removed sites for re-inclusion.


 9:26 pm on Apr 25, 2005 (gmt 0)

Quick question for GoogleGuy and all in the forum.

Is it okay to use redirects for statistics purposes when the redirect link goes through your cgi-bin AND you block all robots from links to your cgi-bin in your robots.txt file?
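For reference, the robots.txt block being described would look like this, assuming the redirect scripts live under a /cgi-bin/ directory:

```
User-agent: *
Disallow: /cgi-bin/
```

The trailing slash matters: `Disallow: /cgi-bin/` blocks everything under that directory for all compliant robots, so a well-behaved spider never requests the redirecting script at all.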


 9:39 pm on Apr 25, 2005 (gmt 0)

The penalized/hijacked site I resubmitted last week (as GG recommended) was crawled heavily last Thursday, then went from PR7 to PR0 on the PageRank update. There are 7,000 backlinks, so the PR0 to me is indicative of a penalty, even though PR may still fluctuate over the next few days. The site has been out for a year now. I guess it's gone forever.


 9:52 pm on Apr 25, 2005 (gmt 0)

The point is you have been fully crawled again and you still have the links to you; so forget about the toolbar, just be happy about your crawl. Maybe I also have to wait a year before I get fully crawled again. I hope Google changes the text where it says no other site can harm you.


 10:26 pm on Apr 25, 2005 (gmt 0)


I got your sticky yesterday. About 80% of my pages were spidered between the 22nd and the 23rd. This is the first time anything other than the index page has been accessed in at least 2 months. BUT none of those pages are indexed with anything other than the URL. I would have thought that data would have been updated to title/description listings within 24 hours of the crawl.

Not sure what is up, but at this point I guess I am sort of giving up hope.



 10:30 pm on Apr 25, 2005 (gmt 0)

It always takes 2 days after a spidering, so you will see results tomorrow or late today. And don't think about the toolbar PR.


 10:47 pm on Apr 25, 2005 (gmt 0)

Zeus, it's been 3 days now. I got a response to my reinclusion request telling me my site was already indexed.



 10:55 pm on Apr 25, 2005 (gmt 0)

Is it okay to use redirects for statistics purposes when the redirect link goes through your cgi-bin AND you block all robots from links to your cgi-bin in your robots.txt file?


 10:58 pm on Apr 25, 2005 (gmt 0)

Crobb, OK, that sounds a little weird, but then what is normal in our situation? Still, I think you will see better times, because googlebot was on your whole site.

Another thing: does anyone here have an SE-related explanation for why a site that has been hijacked or hurt by the Google 302 bug doesn't get spidered by googlebot even when there are no hijacker/302 sites left in a site: search? I don't see any logic here. I don't think it's a duplicate filter, because as it looks there are no duplicate sites left in a site: search, and contacting Google to request a respidering is a waste of time, as crobb says. It can't be that we have to wait until MSN takes over in mid-2006.


 10:59 pm on Apr 25, 2005 (gmt 0)

There are no pages in the .com, just the index page. I have now done what you advised and removed the META refresh. I renamed the .asp to .aspOLD and put an index page in its place.

It has a line of text on there telling people the site is now at the .co.uk, and to follow the link.

Hope this works.


 11:00 pm on Apr 25, 2005 (gmt 0)

Nosmada - it should be OK when you write a robots.txt, but I got hijacked in Nov. by a site that had a noindex META on their site, so who knows these days; but in real SE life it should be OK.


 11:01 pm on Apr 25, 2005 (gmt 0)


Unless there is something I am not seeing, my site appears to be gone from Google forever. Although deep-spidered last week, it has gone from PR7 to PR0. All duplications are gone, all 302s gone, content rewritten to account for content theft, etc. A 4-year-old site that represents so much hard work and good content.


 11:11 pm on Apr 25, 2005 (gmt 0)

Hi Claus


I have now implemented NOINDEX,FOLLOW and just a title.
I put in the body that the website has moved to its new address at allyoursite.co.uk.
Is this what you meant, or did you mean to also add the .co.uk somehow to the META tags for it to follow? If so, can you be more specific, as I have never heard of that one before.

Thanks again for your help.


 11:12 pm on Apr 25, 2005 (gmt 0)


" Is it okay to use redirects for statistics purposes when the redirect
link goes through your cgi-bin AND you block all robots from links to your
cgi-bin in your robots.txt file? "

Some sites use "statistical purposes" as an excuse to hog PR, and much
worse yet, to steal content credit from rightful sites with 302 redirects.

I'm not casting asparagus here, but disallowing the SEs to hide
redirects doesn't make the picture any prettier.

I have a "disallow cgi-bin" statement in my robots'txt file. Why?
It just looked nice, I never wrote a byte into the CGI directory.

I'm going to remove that immediately. I don't want the slightest
indiction that I am doing anything black-hat. -Larry


 11:14 pm on Apr 25, 2005 (gmt 0)

I think your site is at the beginning of a rebirth, because your 302s are gone; now Google can focus on your site to create new PR and spidering.

Let me put it this way: if I got a total spidering I would be happy. I know these are hard times, because there is not much help out there, or folks think you are a spammer, but don't give up. Keep an eye on your site: search for other domains, or better, look at an inurl: search, because Google is now manipulating the site: search. So look in inurl: for sites with your title and description, and don't panic about all those other domains there - that's normal; they just should not have your title and description, but you know that.

If you keep those hijacker/googlebug 302s out of the SERPs I think you will have good chances, but if nothing happens after that for, let's say, 3-4 months, then it's time to copy your site to another domain and IP.

You could also be lucky that MSN has taken over by then, but I don't think so - 2006 at the earliest, and that's a long time to wait.


 11:18 pm on Apr 25, 2005 (gmt 0)

On further investigation, and considering what you are saying, we were never using a META refresh. We are 1) using an ASP Response.Redirect, and 2) we don't have access to IIS or a 301 setting, as it is a shared server.
Sorry for the confusion; does this change anything now?
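A note on that, assuming this is classic ASP: Response.Redirect sends a 302, but a page can emit an explicit 301 itself with no IIS console access. A sketch, with a placeholder destination URL:

```asp
<%
' Send a permanent (301) redirect instead of Response.Redirect's 302.
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.example.co.uk/"
Response.End
%>
```

Response.Status, Response.AddHeader and Response.End are all standard members of the classic ASP Response object, so this works even on a shared Windows host.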


 11:26 pm on Apr 25, 2005 (gmt 0)


There is nothing wrong with having a disallow for cgi-bin.

I in fact do that and the reason is I don't want anything from cgi-bin to show up in the SE's.

This is in fact a smart thing to do.
Why give hackers or other clowns out there more info than they already have?



 11:32 pm on Apr 25, 2005 (gmt 0)

Vince: Thanks for the advice.
Now I will look and see just what IS in my cgi-bin, I have no idea.
As long as there's nothing private, I'm gonna leave robots.txt simply reading

User-agent: *
Disallow: /airheads

(There are loads of airheads infesting my arcane field) - Larry


 12:12 am on Apr 26, 2005 (gmt 0)

My thoughts are that if there is something a website is doing to hurt another, Google should be able to identify it.

For what it is worth, and if it helps, I will tell my experience.

I believe that Google has made a change to target spam. I am not 100% sure, but am 80% sure this change took place around January.

I also think they made mistakes with this change. In my case, an image file used as a background, set in a CSS file, was set wrong. I sometimes use copy-paste, and it can get you into trouble if you don't remember to make the changes needed.

In this case the image was being read from another website I run, and it seems Google flagged it as spam. I forgot to alter the URL after pasting it.

It took me a few weeks to find this mistake. It's not easy to locate some mistake you may have made 2 months ago.

So under the new system you can't have a background image that is located on another website, or it will trigger a penalty and your PR goes out the window.

I don't know what other items Google is looking at as spam, but I am sure others are getting hit the same way I was. I really can't see how a background image would ever be considered spam, but it seems Google does consider it so.
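The paste mistake described would look something like this in the CSS (domain and file names are placeholders); the fix is pointing the URL back at a relative path on the site's own host:

```css
/* Pasted from the other site: the background now loads cross-domain */
body {
  background-image: url("http://www.other-example.com/images/bg.gif");
}

/* Intended: a same-site, relative reference */
body {
  background-image: url("/images/bg.gif");
}
```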


All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved