
Google SEO News and Discussion Forum

Further Google 302 Redirect Problems
walkman




msg:723945
 2:05 pm on May 9, 2005 (gmt 0)

(Continued from Google's 302 Redirect Problem [webmasterworld.com])


Google is a victim of a redirect too ;):
Search for "Google" and https://desktop.google.com/ shows up first. If you click it, https://desktop.google.com/ redirects to Google.com.

[edited by: ciml at 4:35 pm (utc) on May 9, 2005]

 

Dayo_UK




msg:723946
 2:08 pm on May 9, 2005 (gmt 0)

Bet that will be fixed in a matter of hours/days.

ciml




msg:723947
 2:23 pm on May 9, 2005 (gmt 0)

Thanks walkman, I'm glad to have seen that while it's still happening.

www.google.com is second for that search, not buried as is normally the case. So, can we compare this to a 302 'hijack' situation? Perhaps we can...

* https://desktop.google.com redirects to [google.com...]

* https://desktop.google.com comes top for a search of "google", which we would not expect considering its links.

* cache:https://desktop.google.com [google.com] returns the page that should be returned for cache:http://www.google.com [google.com]

* [google.com...] has the ODP title and description - as can be the case with pages that have been put back after a 302 hijack

* cache:http://www.google.com [google.com] shows a crawl date of ten days ago - much older than we would expect, as can be the case with pages that re-enter after a 302 hijack

https://desktop.google.com is not well linked, and [google.com...] is well linked.

Could www.google.com have been down briefly when Googlebot tried to fetch it? If so, this would help explain why a page that redirects to it is crawled.

As GoogleGuy pointed out here, many people with 'hijack'-like symptoms had some other problem as the root cause.
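For anyone who wants to reproduce these checks from a script rather than the browser, here is a minimal sketch (an editor's illustration, not part of the original posts) in Python using the third-party requests library; the URL is just the desktop.google.com example above, and what it returns will of course change over time.

# Fetch a URL without following redirects, so the raw status code and
# Location header the server sends back are visible.
import requests

def check_redirect(url):
    resp = requests.get(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location")
    print(url, "->", resp.status_code, location or "")
    return resp.status_code, location

# A 302 ("Found", i.e. temporary) pointing at www.google.com is what the
# posts above describe; a 301 would signal a permanent move instead.
check_redirect("https://desktop.google.com/")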

zeus




msg:723948
 4:34 pm on May 9, 2005 (gmt 0)

I'm not sure, but I do think people who were hijacked or hit by the Google 302 bug more than 6 months ago, and who were successful in removing those links so that they had a clean site: listing before Google hid them the "fake way", should maybe consider buying a new domain name, redirecting the old domain to the new one, and moving all their content to the new site.

In my situation I think I will wait until the next update; if nothing has changed by then, it's time to buy a new domain. It's sad it has to be that way.

walkman




msg:723949
 5:17 pm on May 9, 2005 (gmt 0)

"Bet that will be fixed in a matter of hours/days."

Nope! They have to submit an e-mail here [google.com...] and wait for the "Everything is normal /there's nothing wrong" response :).

Google probably got penalized. Most of their links use the same anchor text.

zeus




msg:723950
 6:07 pm on May 9, 2005 (gmt 0)

Walkman, and they link to their other sites.

I still don't get why some sites are still out of the SERPs even after they have removed all of Google's junk. A month ago Google made a supplemental DB update, where I saw that all supplemental results were gone on other sites too, but a few weeks later they returned.

All this trouble started for real when they added all those "sites" to their SERPs. Then we saw a lot of URL-only listings (which before only appeared for a short time until the page was respidered with a description), 302 links listed as pages, other domains showing up in a site:yourdomain.com search (that problem is now gone only because Google just hides those now), %20%domain%20.com listed in Google (why?). I could keep going, but seriously, what is happening here? Why is Google making such a mess of its rankings/SERPs? And on top of that we got all those Google-sponsored scraper sites.

joeduck




msg:723951
 8:28 pm on May 9, 2005 (gmt 0)

"Google victim of redirect too"

Ummm - Could this result simply indicate that they now place much more emphasis on "new links"? The desktop is a great product and probably has a huge number of incoming fresh links from high PR sites.

[edited by: joeduck at 8:37 pm (utc) on May 9, 2005]

walkman




msg:723952
 8:34 pm on May 9, 2005 (gmt 0)

"Ummm - Could this result simply indicate that they now place much more emphasis on "new links"? The desktop is a great product and probably has a huge number of incoming fresh links from high PR sites."

It's the https link, and I doubt many link to the secure URL. Still, it's safe to say that Google still gets more new links than anyone; it's always in the news and talked about on boards.

Plus, this is a redirect link to Google.com.

joeduck




msg:723953
 8:39 pm on May 9, 2005 (gmt 0)

"it's a redirect link to google.com"

Yes - I see now. A normal result should go to the desktop download page.

Physician Google! Heal thyself!

walkman




msg:723954
 8:42 pm on May 9, 2005 (gmt 0)

"Physician Google! Heal thyself"

How about finding the cure for everyone? If it happened to Google, with a trillion backlinks, what can happen to a site with 40 backlinks?

Now we can't even find who is linking to us with a 302, because Google hides it.

dazzlindonna




msg:723955
 8:46 pm on May 9, 2005 (gmt 0)

So, obviously the "fix" to the redirect problem didn't work, since not only has everyone here not had their rankings restored, but now Google has been bitten by its own redirect bug. And GG mentioned something about the fact that it was only sites that had some previous problem (dropping PR, penalty, etc) that could be affected by the redirect bug, right? So, then, is Google's PR dropping, or do they have a penalty of some sort? How else could they hijack themselves, eh? Food for thought...

zeus




msg:723956
 11:32 pm on May 9, 2005 (gmt 0)

I'm not sure if this is good news, but for about 6 months my site has only had about 200 pages indexed and Googlebot was just a rumor. Today, on /216.239.57.104, I saw 920 pages indexed, but only maybe 200 are shown when I click through to the omitted results. That has also been a problem at Google: the omitted results kick in too quickly. I once had a search on Google which gave me 3 million results, but when I got to page 4 the omitted-results notice appeared, so I went to Wisenut.

claus




msg:723957
 11:44 pm on May 9, 2005 (gmt 0)

I'll use kind words here, as Google has gotten a lot of flak lately:

AFAIK, a fix was never implemented. I do not consider hiding the wrong URLs in the "site:" search to be a fix, and I don't think it's a good thing to do either. FWIW, new hijacks still occur, old hijacks are not restored to normal, nothing has been "repaired", and Google is by-and-large still unable to tell the difference between a URL and a page [1].

I am extremely puzzled about what it might be that makes a solution that hard to implement. I don't even mean "the solution", just "any solution", as several have been suggested. And, the many highly educated people at Google can probably think of a few things as well. They've had a few years to come up with something already.

By now, I can only interpret this as a lack of will to get it solved. I don't mean to be rude, only that this is the only logical explanation I can think of. Not only that, it actually seems like a strong will to let it remain unsolved at all costs - it's that strange to me that nothing gets done here.

---
[1] To base the choice of URL on PR is not a solution, it is a problem. There is no choice to make, as there can be only one right URL regardless of PR. As in: "I will not give you the keys to my house even though all your friends say that it is your house."
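To make footnote [1] concrete, here is a small sketch of the rule claus is arguing for (an editor's illustration with made-up URLs, and an assumption about how attribution should work, not a description of Google's actual code), in the same Python style as the earlier sketch.

# The content fetched at the end of a redirect chain belongs to the final URL,
# full stop; the PR of the redirecting URL never enters into the choice.
def owner_of_content(redirect_chain):
    # redirect_chain is the list of URLs visited, e.g. [hijacker, ..., real page]
    return redirect_chain[-1]

# However well linked the first URL is, the page is credited to the URL that
# finally answered with 200 OK.
assert owner_of_content([
    "http://jacker.example/redir?id=123",      # answers 302 Found
    "http://www.victim.example/article.html",  # answers 200 OK
]) == "http://www.victim.example/article.html"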

Jane_Doe




msg:723958
 12:22 am on May 10, 2005 (gmt 0)

To base the choice of URL on PR is not a solution, it is a problem

I've never really understood the logic of it either, but I just assume they have their reasons for handling things that way. In any event, I have found it helpful to try to analyze what programming logic they must be using to decide which site they pick as the winner, and then to modify the losing page (or sometimes the whole site needs changes) to try to outrank the redirect page.

steveb




msg:723959
 3:01 am on May 10, 2005 (gmt 0)

"www.google.com is second for that search"

Not for me right now, not even with 100 results enabled. google.com/search is second, not google.com

I don't think it is a lack of will. It is a lack of ability to fix the problem. Some of Google's issues are so obviously of no value -- Supplemental Results showing for pages that haven't existed for more than a year, even though the 404 pages have been seen as 404 dozens of times -- that it must be assumed serious problems exist that they don't know how to fix. This may not be one, it may be a question of will, but I believe they know they have problems they wish they could fix.

steveb




msg:723960
 3:02 am on May 10, 2005 (gmt 0)

google.com/search itself is also a redirect to google.com/webhp

walkman




msg:723961
 3:43 am on May 10, 2005 (gmt 0)

Now they fixed it...but not before I took a screen print :)

> "I don't think it is a lack of will. It is a lack > of ability to fix the problem"

I agree with this sentiment. If they "fixed" this, something else probably goes wrong in the way they have the algo setup. We have to buy existing domains and try to cheat to survive.

Jane_Doe




msg:723962
 4:06 am on May 10, 2005 (gmt 0)

I don't mean to be rude, only that this is the only logical explanation I can think of.

If you consider the possibility that many pages are losing out to redirects because the sites they belong to have penalties, then there really is no significant problem, from Google's point of view anyway, for them to fix. This would logically explain why they aren't fixing the problem. Perhaps from their point of view they are merely downgrading sites that deserve to be downgraded, and their current algo and quality standards for this year are working as planned.

walkman




msg:723963
 4:17 am on May 10, 2005 (gmt 0)

"If you consider the possibility that many pages are losing out to redirects because the sites they belong to have penalties"

I'm sorry but, what penalty does Google.com have?

Jane_Doe




msg:723964
 4:43 am on May 10, 2005 (gmt 0)

I'm sorry but, what penalty does Google.com have?

I didn't say Google.com had a penalty and I didn't say penalties were the only reason redirect issues may occur.

ncgimaker




msg:723965
 5:51 am on May 10, 2005 (gmt 0)

They do still seem to have a ranking problem.

Repeating this search, which in April didn't even show a Google page in the results:
Google Googol mathematical followed zeros coined Milton Sirotta

Now I get 338 results, with a Google page at about position 50. As before, the results above it are on topic, but shallow.

Message boards, blogs... I get a foreign page whose only English part is the sentence about Google (I guess that's caused by my regional weighting). An astronomy site with a clone of Wikipedia. A site about buying link exchanges with a page on Google, a site on their stock launch... answers.com, the Wikipedia page itself... a Slashdot article... another Wikipedia clone, an Arabic-language blog, a law site, an edu site, a German-language page, an SEO site, a blog, an air-conditioner site...
Eventually we get down to the one site which has good, deep results for it: Google itself!

I wouldn't mind if the pages above were full of information about "Googol" or "Milton Sirotta", but blogs, even foreign-language ones?! Wikipedia clones that rank above Wikipedia?! Minor filler pages on SEO'd sites outranking Google's own page?

All very weird. The hand-tweaked 2- and 3-word searches are fine, but the lengthy searches all return very shallow results.

larryhatch




msg:723966
 7:38 am on May 10, 2005 (gmt 0)

The biggest 302-jacker in my niche has a new trick, maybe because some of his 302s got nuked.

He put up a new page called (tadaa!) New Pages.
Ostensibly, this is just a list of 'new links' that he has found.
It's really just the same 1800 or so pages he 302-jacked previously, maybe with a few new ones.

I don't know if it works or not, but here's his method:

Instead of full jacked URLs like [jacker.php...] he has relative links like /sites/site#123. A simple BASE=jacker statement (a <base href> pointing back at his redirect script) or the like will complete each URL to the actual jacking page, which is just a 302 redirect header.

This way, people can't nuke them like the old ones.

Stolen credit for content, PR and what-have-you would accrue to the 'new-pages' page
(assuming this works at all) and from there via regular links to his main pages.

I have to commend the jacker for his tenacity and creativity. There's just one downside for the jacker: now all the pages he's been jacking are on one URL for easy reference.

Feel free to sticky me for that. - Larry
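A minimal sketch (an editor's illustration of the trick Larry describes, with made-up URLs) of how a base href turns harmless-looking relative links back into hits on the jacker's 302 redirect script; Python's standard urllib.parse does the same resolution a browser or crawler would.

from urllib.parse import urljoin

# What the "New Pages" page would declare in its <head> via <base href=...>:
base_href = "http://jacker.example/redir/"
# What actually appears in the page as an innocent-looking relative link:
relative_link = "sites/site123"

resolved = urljoin(base_href, relative_link)
print(resolved)  # http://jacker.example/redir/sites/site123 - a URL that answers with a 302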

ncgimaker




msg:723967
 9:59 am on May 10, 2005 (gmt 0)

Just for completeness I've repeated that search on other search engines (the number is the position at which a Google page turns up):

Kanoodle 3
Teoma 1
search.msn.com 1
Yahoo/Altavista/Alltheweb 4
Wisenut max 7 words (Google Googol mathematical Milton Sirotta brings it up at 1)
Gigablast 2
bbc.co.uk 3
ncgimaker 1 (my opinion)

They don't always give the same Google page; it's either a press release or a similar corporate information page.

cornwall




msg:723968
 10:27 am on May 10, 2005 (gmt 0)

By now, I can only interpret this as a lack of will to get it solved. I don't mean to be rude, only that this is the only logical explanation I can think of. Not only that, it actually seems like a strong will to let it remain unsolved at all costs - it's that strange to me that nothing gets done here.

I am with claus on this one. To start with it looked as if Google had fixed the 302 problem; then it slowly dawned on us that it is still there but buried/hidden, inasmuch as it is well-nigh impossible to see whether someone has put the hex on you or not.

No doubt some engineer, or even PR spin doctor, thought that was a great wheeze. Come on Google, do something about it, or at least restore our means of finding the b*****s.

claus




msg:723969
 10:36 am on May 10, 2005 (gmt 0)

>> lack of will
>> lack of ability

I personally don't believe that all the intelligent people at Google can't come up with a fix to a simple problem like this. I have thought about it, but I have also ruled it out, as that would make those people utterly incompetent, which is not my general impression. A very simple solution was implemented by Yahoo! some time ago, so it is not impossible at all.

It may well be that it is not easy, but it is certainly possible. My reasoning goes like this: If you've got a problem that can be solved, and you don't do something that can be done to solve it, it must be because you don't wish to solve it.

And that's what puzzles me, because there's no apparent reason that these errors and malfunctions should be desirable.

>> If you consider the possibility that many pages are losing out to redirects
>> because the sites they belong to have penalties

You can also choose to view this statement that GoogleGuy made in another way: That the 302 hijacks cause a "pollution" of the target domain, which is why they get "penalties" in the first place.

So, they are getting penalties because of the 302 redirects, not the other way round. This also explains why sites don't re-surface after hijacks have been removed.

>> q=google

I also saw the listing as originally reported yesterday. It has been fixed now.

theBear




msg:723970
 3:13 pm on May 10, 2005 (gmt 0)

claus says:


So, they are getting penalties because of the 302 redirects, not the other way round. This also explains why sites don't re-surface after hijacks have been removed.

Like maybe a duplicate content issue?

Now if Google has exactly one way of checking whether a site really is what it claims to be, then fixing the site: view should also start the timers on removing the duplicate content issue.

How long is a page "filtered" for duplicate content issues?

Is it by number of occurrences (and how do they count those), length of time, number of copies, or some involved combination thereof?

Jane_Doe




msg:723971
 3:28 pm on May 10, 2005 (gmt 0)

So, they are getting penalties because of the 302 redirects, not the other way round. This also explains why sites don't re-surface after hijacks have been removed.

When sites don't reappear after the 302s have been removed, then it would be logical to consider the possibility that the 302s were not the root cause of the problem in the first place.

walkman




msg:723972
 3:32 pm on May 10, 2005 (gmt 0)

"don't reappear"...is a matter of when, not if they re-appear. If my site is not reapearing, it maybe because Google slapped a duplicate penalty, that probably kept doubling each time they found the "dupe" /302 link. How do we know?

"When sites don't reappear after the 302s have been removed, then it would be logical to consider the possibility that the 302s were not the root cause of the problem in the first place. "

theBear




msg:723973
 3:37 pm on May 10, 2005 (gmt 0)

Jane_Doe sayeth:


When sites don't reappear after the 302s have been removed, then it would be logical to consider the possibility that the 302s were not the root cause of the problem in the first place.

Assumptions when dealing with large systems are usually wrong. Note the use of the weasel word "usually".

What may appear to you as the 302 having been removed may not in fact be the case.

or in the alternative:

Once a page has been declared a duplicate it will take time for that designation to wear off.

If the clocks just started and the declaration was made 6 months ago you could be in for a long road back.

YMMV, and logic has no bearing on the matter: if you can't see the actual code that is running, everything is an assumption.

Atticus




msg:723974
 5:04 pm on May 10, 2005 (gmt 0)

It's just a theory, but...

I've been waiting for years for 'corporate media' to take a stab at taking over the web and bouncing all of us small time publishers.

So suppose that the greatest and most respected SE, one that seeks to do 'no evil,' was bought out by corporate big media interests.

Further suppose that this SE gets more and more filled with spam; spam which this magnificent SE can't seem to stop. In a final effort to 'clean' the index and 'take back' the web from 'spammers', the SE introduces a new criterion called Trust Rank. In order to compete in the world of TR, a web site will need to be interlinked with big-time 'trustworthy' corporate-controlled sites.

Result: small publishers disappear forever, the searching public thanks the SE for fixing the 'spam' problem and the corporate take over of internet eyeballs is complete...
