
Google SEO News and Discussion Forum

301 redirects - handle with care or be penalised
Whitey
msg:3714057
12:31 am on Aug 3, 2008 (gmt 0)

Over at [webmasterworld.com...] I mentioned that I couldn't find a definitive thread on 301s.

Tedster responded with:

When the general webmaster/SEO community started to learn about 301 redirects, some went quite wild, throwing 301s around like confetti - and then getting smacked down hard. It was like a new toy on the market and it became "all the rage."

The potential for 301 abuse is well beyond that offered by link manipulation - and so Google really gives 301 redirects a trust check-up. I'm sure that this is one of the reasons that changing to a new domain can be so difficult.

The webmaster knows when they are placing a 301 (or a chain of redirects) only to manipulate rankings - and when they are using it in an informative, intended fashion. Too much 301 action, especially placing them and then switching them around, or chaining them in with other kinds of redirection, can definitely cast a pall over a domain... or a network of domains.

Maybe some of these historical 301 penalties are part of those old penalties that are now being forgiven - I can't say for sure. But I can say that the 301 redirect is a kind of power tool and it should be used only as the instruction manual intends - essentially, pointing to a new location for previously published material.

And given that "cool urls don't change" I personally recommend limiting use of the 301 redirects. There are times it is exactly the right tool, but many times its use has become very casual and abusive.
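
For reference, the "intended" use described above looks something like this in Apache .htaccess terms (a minimal sketch only - the example.com hostnames and paths are placeholders, not anyone's real setup):

# One previously published page, permanently moved to a new location
Redirect 301 /old-article.html http://www.example.com/new-article.html

# Or a whole retired domain pointed at its successor via mod_rewrite
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-example\.com$ [NC]
RewriteRule ^(.*)$ http://www.new-example.com/$1 [R=301,L]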

So those are some of the effects of mishandled 301 redirects - are there any more?

And what are the remedies if Google applies a penalty?

[edited by: encyclo at 4:03 pm (utc) on Aug. 13, 2008]
[edit reason] fixed link [/edit]

 

tedster
msg:3714073
2:12 am on Aug 3, 2008 (gmt 0)

Penalties for significant 301 abuse across several domains can be quite severe, including long-term or even "permanent" loss of trust. I'm talking about burnt domains, as in done, kaput, with no recourse.

An honest slip-up or technical confusion that accidentally fits a spammer footprint can be remedied, eventually, through a clean-up and re-inclusion request.

When dealing with redirects, I like to remember the old carpenter's motto "measure twice, cut once".

Whitey
msg:3714168
9:29 am on Aug 3, 2008 (gmt 0)

Interesting, I hadn't realised the impact of redirects insofar as penalties were concerned until the recent comments. I should have been more attentive to it, because we saw sites behaving differently in various circumstances.

e.g.

Site A [ under penalty filter ] is redirected to Site B. Site B goes into a penalty situation. Non-trust appears to have been transferred.

Site C [ trusted ] redirects to a new domain, which ranks inside 4 weeks. Trust transferred.

Site A [ under penalty filter ] redirects to new Site E. Site E goes into the sandbox for 1 year. Non-trust transferred to the new domain.

Site D has had frequent redirects applied to it, say every 9-12 months. The site has never come out of a penalty filter situation since 2005.

...also some comments with regard to redirects, 404s and potential problems over here [webmasterworld.com]. Is it better to apply 404s to old pages and create new ones on other domains when redirects have already been applied?

[edited by: Whitey at 9:47 am (utc) on Aug. 3, 2008]

tedster
msg:3714271
2:18 pm on Aug 3, 2008 (gmt 0)

Is it better to apply 404s to old pages and create new ones on other domains when redirects have already been applied?

I can't speak to that situation from any direct experience, but I'd say in a long-standing penalty transfer situation it would be good to break the connections to the previous domain. However, since 301s were already in place, it's still a crapshoot how Google would respond. It would be a sign of good faith, I'd say, to abandon attempts to hold on to any link juice the previous domain tried to transfer and let the new domain stand on its own.
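
If the aim is to fully break that connection, one way to do it on the old domain (assuming Apache, and with placeholder paths) is to drop the 301s and explicitly mark the old URLs as gone rather than redirecting them:

# Remove the Redirect 301 / RewriteRule lines that point at the new domain,
# then return HTTP 410 so crawlers know the old content is gone on purpose
Redirect gone /old-section/
Redirect gone /old-page.html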

[edited by: tedster at 4:23 pm (utc) on Aug. 3, 2008]

g1smd
msg:3714312
4:21 pm on Aug 3, 2008 (gmt 0)

Google is well aware of various types of abuse, and has slapped several well-known people for such abuse in recent months.

I'm not mentioning those tricks here, except for the one that Matt Cutts blogged about. That was all to do with buying a domain name and existing site, and then redirecting it to another site, purely to try to enhance the PageRank of the site being redirected to.

Whitey
msg:3714461
1:11 am on Aug 4, 2008 (gmt 0)

I just picked this up from Brett over at [webmasterworld.com...]

> re-included domain never regained its former positions <

I still think the best thing you can do with a penalized domain is 301 it to a new domain and abandon it for about a year. G completely drops the ban and the "dampening" factors after the ban is gone. You finally get to keep the PR via the new domain. If you don't, I have to agree with Glen that the penalty lives forever.

Worth a thought?

bwnbwn
msg:3714468
1:27 am on Aug 4, 2008 (gmt 0)

What about buying domains in your vertical that have links, and doing a 301 to get traffic from them?

M_Bison
msg:3714692
11:40 am on Aug 4, 2008 (gmt 0)

So in other words, I can 301 a penalized domain to a competitor and the competitor gets penalized?

Whitey
msg:3715553
11:19 am on Aug 5, 2008 (gmt 0)

How do internal redirects to new URL structures or subdomains fit into all of this? Are they a risk?

tedster
msg:3715583
12:02 pm on Aug 5, 2008 (gmt 0)

Internal redirects are less touchy - but you can still make a mess with chains of redirects. When completely changing the URL structure of a site (to be avoided as much as possible, by the way), I still prefer to redirect only the key URLs and let Google sort out the 404s and the new site structure essentially through normal spidering.
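
In .htaccess terms that selective approach might look something like this (a hedged sketch - the URLs are invented for illustration):

# Redirect only the handful of key URLs that carry links and traffic...
Redirect 301 /old/widgets.html http://www.example.com/products/widgets/
Redirect 301 /old/services.html http://www.example.com/services/
Redirect 301 /old/contact.html http://www.example.com/company/contact/
# ...and deliberately leave everything else under /old/ returning 404,
# so Google learns the new structure through normal spidering.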

ichthyous
msg:3715775
3:35 pm on Aug 5, 2008 (gmt 0)

This is interesting... you're damned if you do and damned if you don't. I have recently been trying to remedy duplicate titles/descriptions reported in WMT by blocking some of the urls in robots.txt. I won't go into the technicalities, but the crux of the issue is that my dynamic site produces two sets of urls pointing to the same page when people hop over to the online store. I figured the actual store's url was more important, so I nixed the dupe urls that seemed to serve no purpose.

Apparently those 'transitional urls' were worth something, as blocking them has led to a fairly steep drop in traffic. Now I have had to go back and pull the blocked urls from robots.txt. The only other way to remedy this that I can see is to use a 301 redirect on all the store urls (oy vey!)... the first set of urls will never be removed though, they will stay in place. Is internal redirecting like that really so risky?

webdude
msg:3715833
4:34 pm on Aug 5, 2008 (gmt 0)

Another thought...

In a dynamic site situation, when changing the link structure, if you have a filename that was changed and you pass on variables or query strings during redirects, you are in effect passing multiple 301s and 302s. I think this may be a problem, especially if you have a forum or products with thousands of dynamically generated pages. Or am I wrong with this assumption?

$S$Q (IIS)
? (Apache)

neo schmeichel
msg:3716005
7:42 pm on Aug 5, 2008 (gmt 0)

webdude,

Even when you're passing the query string, it should only be a single redirect in Apache. I'm not sure how IIS/ISAPI_Rewrite handles these cases, but I don't believe it's very different.
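
For what it's worth, a single-hop Apache rule that keeps the query string intact might look like this (an illustrative sketch - the script names and parameters are made up):

RewriteEngine On

# Same parameters, new location: the original query string (e.g. ?id=123)
# is carried through automatically because the substitution has no query
# string of its own.
RewriteRule ^showthread\.php$ http://www.example.com/forum/showthread.php [R=301,L]

# Renamed parameter: read it from the query string and rewrite it in the
# same single hop.
RewriteCond %{QUERY_STRING} ^topic=([0-9]+)$
RewriteRule ^viewforum\.php$ http://www.example.com/forum/forum.php?f=%1 [R=301,L]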

As for the larger issue, I wonder if there's a certain footprint that Google is looking for when applying these penalties. E.g. are they intentionally including/excluding sites from this penalty that are doing things like linkbaiting to a microsite, and 301ing that site to their main site after a certain point in time? Is there a certain number of 301ed domains (or certain aggregate PageRank) that you need to hit before you get penalized? Are they still penalizing you if the content on the old domain/URL is the same as the content on the new domain/URL?

The mass 301 is a legitimate (if buggy) tool when trying to consolidate old, poorly architected sites (yes, some single sites do have multiple domains for no good reason at all) into a modern CMS. This kind of penalty, while I see what they're trying to prevent, really makes one pause before putting effort into major development projects that have the potential to really help users and search engines alike.

ichthyous
msg:3716018
7:57 pm on Aug 5, 2008 (gmt 0)

I opened up my htaccess file today and noticed that there were still about 100 or so 301 redirects in place from two years ago. At that time I split some content into two completely unrelated sites on different domains and redirected some of the old pages to the new domain. After a year I removed the redirects to the new domain, but noticed that googlebot was still looking for those pages...so I added them back. I wonder if these redirects to another domain for pages which are thematically unrelated to my main domain might be causing a penalty of some sort. I removed them all today just in case.

Whitey
msg:3716106
10:09 pm on Aug 5, 2008 (gmt 0)

and pull the blocked urls from robots.txt

Ouch.... you're looking at at least 6 months out of the SERPs for these pages. Google doesn't lift the block for that long regardless of your reversals, unless something's changed since we did this.

ichthyous
msg:3716254
3:09 am on Aug 6, 2008 (gmt 0)

Oh I see... so if I 301 redirect the formerly blocked urls to the correct urls, it won't have any effect since they have been dropped from the index for 6 months? Blocking the urls in robots.txt has definitely helped to clean up my dupe title/description issues, but I have seen no improvement in traffic for having fewer dupes, that's for sure.

Whitey
msg:3716269
3:32 am on Aug 6, 2008 (gmt 0)

Just clarifying - to the best of my knowledge and experience [ some time ago ], if you apply a robots.txt block to a URL, Google will not consider it for reindexing for around 180 days, even if you subsequently try to reverse it out.

There's some discussion relating to this and 301's over here : [webmasterworld.com...]

Maybe someone else has another view.

[edited by: Whitey at 3:34 am (utc) on Aug. 6, 2008]

idolw
msg:3716328
7:12 am on Aug 6, 2008 (gmt 0)

We have blocked all robots from entire sites several times in the past, just because we were not cautious enough.
Results always became URL-only for a couple of days and changed back to regular listings after a week or so.
No changes in SERPs were observed during these situations. Prefer not to try it again though ;)

ichthyous
msg:3716748
4:19 pm on Aug 6, 2008 (gmt 0)

What effect, if any, would 301 redirecting these blocked urls to the correct urls have? If urls are blocked in robots.txt, does Google still take the redirect into consideration and index the redirected-to page? I am looking at close to 1K 301 redirects, since each url is unique. In light of Google's announcement on redirects, is it just a bad idea to add that many redirects, even if they are internal?
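
As an aside, a long one-to-one list like that doesn't have to mean a thousand separate lines in .htaccess. If you have access to the main server config (RewriteMap is not allowed in .htaccess), something along these lines is one way to manage it - a hedged sketch with invented paths:

# httpd.conf / vhost
RewriteEngine On
RewriteMap storemap txt:/etc/apache2/store-redirects.txt

# Only redirect urls that have an entry in the map file
RewriteCond ${storemap:$1} ^.+$
RewriteRule ^/store/(.+)$ ${storemap:$1} [R=301,L]

# store-redirects.txt holds one "old-key new-url" pair per line, e.g.
#   blue-widget   http://www.example.com/shop/blue-widget.html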

rogerd
msg:3716927
7:20 pm on Aug 6, 2008 (gmt 0)

In the last 12 months, I've moved a block of content to a new domain from a long-established domain on at least two occasions. In each case, the content was transferred mostly intact, with some wording changes within the pages. Both of the new domains were newly registered. In both cases, the new site started ranking well (but not quite as well as the old site) for the same terms. In both cases, these rankings declined somewhat in the ensuing weeks. In one case, though, they stabilized, while in the other case the new site really fell from view.

While there wasn't anything dicey about either site, I assume some non-obvious differences in the two situations caused the domain that tanked to look a bit more doubtful to Google than the other one. I don't think any penalty was involved, but rather that whatever credibility/link power might have been transferred to the new domain with the 301s wasn't enough to overcome the lack of aging, lack of independent links, etc.

g1smd
msg:3716983
7:51 pm on Aug 6, 2008 (gmt 0)

*** if you apply robots.txt to a URL Google will not consider it for reindexing for around 180 days ***

I have seen Google indexing URLs within hours of removing an associated Googlebot robots.txt disallow rule.

It's buried in this old topic... [webmasterworld.com...]

.

*** If urls are blocked in robots.txt does google still take the redirect into consideration and index the redirected-to page? ***

Anything placed behind the robots.txt disallow rule will not be accessed and its status will not be ascertained.

It can still appear in SERPs as a URL-only entry if something still links to it.
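
In other words, if the 301 itself lives at a disallowed url, Googlebot never fetches it, so the redirect is neither seen nor followed while the rule is in place. A quick robots.txt illustration (the path is just an example):

User-agent: *
Disallow: /store/transitional/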

ichthyous
msg:3717020
8:51 pm on Aug 6, 2008 (gmt 0)

Interesting, thanks for the input. What I am trying to ascertain is whether the boost from totally cleaning up my dupe titles and descriptions by blocking certain urls via robots.txt will, over time, bring a net gain in traffic.

It's my understanding that a 301 is used to report a permanent move of content from one url to another... in my case all the old urls will remain embedded in links on my pages and aren't going anywhere. While I have lost traffic up front by blocking those urls, I might also get a boost over time for being dupe-free. Or is that just wishful thinking?

tedster
msg:3717053
9:21 pm on Aug 6, 2008 (gmt 0)

Yes, your approach may be clarifying for Google which URLs really "matter" and that clarification can help. But a lot of the outcome will depend on an extremely thorough attention to detail.

Whitey
msg:3717083
10:01 pm on Aug 6, 2008 (gmt 0)

This was what I was looking for: [scholar.google.com...]

Google will continue to exclude your site or directories from successive crawls if the robots.txt file exists in the web server root. If you do not have access to the root level of your server, you may place a robots.txt file at the same level as the files you want to remove. Doing this and submitting via the automatic URL removal system will cause a temporary, 180 day removal of your site from the Google index, regardless of whether you remove the robots.txt file after processing your request. (Keeping the robots.txt file at the same level would require you to return to the URL removal system every 180 days to reissue the removal.)

g1smd
msg:3717095
10:27 pm on Aug 6, 2008 (gmt 0)

Sure. Using the Removal Tools drops the URL for 180 days.

That's another step beyond simple creation of a robots.txt file.

tedster
msg:3717122
11:10 pm on Aug 6, 2008 (gmt 0)

re: scholar.google.com/remove.html

That's a Google Scholar page and the information there is a bit old. There was a very welcome change when the url removal tool was migrated into Webmaster Tools.

To reinclude content
If a request is successful, it appears in the Removed Content tab and you can reinclude it at any time simply by removing the robots.txt or robots meta tag block and clicking Reinclude. Otherwise, we'll exclude the content for six months.

[googlewebmastercentral.blogspot.com...]


ichthyous
msg:3717254
3:11 am on Aug 7, 2008 (gmt 0)

Yes, your approach may be clarifying for Google which URLs really "matter" and that clarification can help. But a lot of the outcome will depend on an extremely thorough attention to detail.

Well, as long as the urls won't be pulled from the index for 180 days simply from blocking them in robots.txt, I will leave it in place, work on zapping the remaining dupe content, and see if that has any effect on traffic. I made a change to robots.txt to allow other robots to index those pages but not Googlebot. I suspect that if Google categorizes those urls as dupes they aren't performing well anyway, and the traffic I have lost from those dupe urls was coming in from Yahoo and other sources. If there's no improvement in traffic over time I may remove the blocked urls from robots.txt and just redirect them to the proper url... what a job that will be!
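
For clarity, blocking only Googlebot while leaving other crawlers alone looks roughly like this in robots.txt (the path is a placeholder for the duplicate store urls):

# Googlebot obeys its own group and ignores the generic one
User-agent: Googlebot
Disallow: /store/dupes/

# Everything else remains fully crawlable
User-agent: *
Disallow: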

tedster
msg:3717610
2:39 pm on Aug 7, 2008 (gmt 0)

as long as the urls won't be pulled from the index for 180 days simply from blocking them in robots.txt

If you block a url in robots.txt, it almost always will be dropped from the index - and a lot sooner than 180 days.

pageoneresults
msg:3717621
2:45 pm on Aug 7, 2008 (gmt 0)

Bummer! I'm late to the party on this one and I dig these redirect topics!

When dealing with redirects, I like to remember the old carpenter's motto "measure twice, cut once".

I have a "new" carpenter's motto when it comes to redirects and such...

"Measure thrice, don't cut. Measure thrice again, don't cut. Measure once more, now make the first mark. Measure once more and cut." :)

A bit overboard, but when it comes to the technical underpinnings of a website, one minor mishap can wreak havoc for months down the line - I know that from personal experience years ago. Once bitten, twice shy, as they say. :)

Whitey
msg:3718068
10:58 pm on Aug 7, 2008 (gmt 0)

pageoneresults - Any chance you could share the specific nature of your learning mishap, for those of us still under eternal apprenticeship?
