Tedster responded with:
When the general webmaster/SEO community started to learn about 301 redirects, some went quite wild, throwing 301s around like confetti - and then getting smacked down hard. It was like a new toy on the market and it became "all the rage."
The potential for 301 abuse is well beyond that offered by link manipulation - and so Google really gives 301 redirects a trust check-up. I'm sure that this is one of the reasons that changing to a new domain can be so difficult.
A webmaster knows when they are placing a 301 (or a chain of redirects) purely to try to manipulate rankings - and when they are using it in an honest, intended fashion. Too much 301 action, especially placing them and then switching them around, or chaining them in with other kinds of redirection, can definitely cast a pall over a domain... or a network of domains.
Maybe some of these historical 301 penalties are part of those old penalties that are now being forgiven - I can't say for sure. But I can say that the 301 redirect is a kind of power tool and it should be used only as the instruction manual intends - essentially, pointing to a new location for previously published material.
And given that "cool urls don't change" I personally recommend limiting the use of 301 redirects. There are times when it is exactly the right tool, but many times its use has become very casual and abusive.
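For anyone unsure what that "intended fashion" looks like in practice, here's a minimal sketch in an Apache .htaccess file - the filenames are hypothetical and your own server setup may differ:

    # The article genuinely moved; point the old URL at its one new home
    Redirect 301 /old-article.html http://www.example.com/new-article.html

One old URL, one new location - and then leave it alone.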
So those are some of the effects of mishandled 301 redirects - are there any more?
And what are the remedies if Google applies a penalty?
An honest slip-up or technical confusion that accidentally fits a spammer footprint can be remedied, eventually, through a clean-up and re-inclusion request.
When dealing with redirects, I like to remember the old carpenter's motto "measure twice, cut once".
Site A [ under penalty filter ] is redirected to Site B. Site B goes into a penalty situation. Non-trust appears to have been transferred.
Site C [ trusted ] redirects to a new domain, which is ranking inside 4 weeks. Trust transferred.
Site A [ under penalty filter ] redirects to new site E. Site E goes into the sandbox for 1 year. Non-trust transferred to the new domain.
Site D has had frequent redirects applied to it, say every 9-12 months. The site has never come out of a penalty filter situation since 2005.
...also some comments with regard to redirects, 404s and potential problems over here [webmasterworld.com]. Is it better to apply 404s to old pages and create new ones on other domains when redirects have already been applied?
Is it better to apply 404s to old pages and create new ones on other domains when redirects have already been applied?
I can't speak to that situation from any direct experience, but I'd say in a long standing penalty transfer situation it would be good to break the connections to the previous domain. However, since 301s were already in place - it's still a crapshoot how Google would respond. It would be a sign of good faith, I'd say, to abandon attempts to hold on to any link juice the previous domain tried to transfer and let the new domain stand on its own.
I'm not mentioning those tricks here, except for the one that Matt Cutts blogged about. That was all to do with buying a domain name and existing site, and then redirecting it to another site, purely to try to enhance the PageRank of the site being redirected to.
> re-included domain never regained its former positions <
I still think the best thing you can do with a penalized domain is 301 it to a new domain and abandon it for about a year. G completely drops the ban and the "dampening" factors once the ban is gone. You finally get to keep the PR via the new domain. If you don't, I have to agree with Glen that the penalty lives forever.
Worth a thought?
Apparently those 'transitional urls' were worth something, as blocking them has led to a fairly steep drop in traffic. Now I have had to go back and pull the blocked urls from robots.txt. The only other way to remedy this that I can see is to use a 301 redirect on all the store urls (oy vey!)... the first set of urls will never be removed though, they will stay in place. Is internal redirecting like that really so risky?
In a dynamic site situation, when changing the link structure, if you have a filename that was changed and you pass on variables or query strings during redirects, you are in effect passing multiple 301s and 302s. I think this may be a problem, especially if you have a forum or products with thousands of dynamically generated pages. Or am I wrong with this assumption?
Even when you're passing the query string, it should only be a single redirect in Apache. I'm not sure how IIS/ISAPI_Rewrite handles these cases, but I don't believe it's very different.
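To illustrate, here's a rough sketch of what that single-hop rewrite might look like with Apache's mod_rewrite - the filenames are made up, so test against your own setup:

    RewriteEngine On
    # Old dynamic URL such as /showproduct.php?id=123
    # The query string (?id=123) is carried over automatically,
    # so this is one 301 to the new URL, not a chain of redirects
    RewriteRule ^showproduct\.php$ /product.php [R=301,L]

If the substitution adds its own query string, you'd add the [QSA] flag to keep the original parameters as well.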
As for the larger issue, I wonder if there's a certain footprint that Google is looking for when applying these penalties. E.g. are they intentionally including/excluding sites from this penalty that are doing things like linkbaiting to a microsite, and 301ing that site to their main site after a certain point in time? Is there a certain number of 301ed domains (or certain aggregate PageRank) that you need to hit before you get penalized? Are they still penalizing you if the content on the old domain/URL is the same as the content on the new domain/URL?
The mass 301 is a legitimate (if buggy) tool when trying to consolidate old, poorly architected sites (yes, some single sites do have multiple domains for no good reason at all) into a modern CMS. This kind of penalty, while I see what they're trying to prevent, really makes one pause before putting effort into major development projects that have the potential to really help users and search engines alike.
and pull the blocked urls from robots.txt
Ouch... you're looking at 6 months of no SERPs for these pages at least. Google doesn't unblock robots.txt for this time regardless of your reversals, unless something's changed since we did this.
There's some discussion relating to this and 301s over here: [webmasterworld.com...]
Maybe someone else has another view.
While there wasn't anything dicey about either site, I assume some non-obvious differences in the two situations caused the domain that tanked to look a bit more doubtful to Google than the other one. I don't think any penalty was involved, but rather whatever credibility/link power might have been transferred to the new domain with the 301s wasn't enough to overcome the lack of aging, lack of independent links, etc.
I have seen Google indexing URLs within hours of removing an associated Googlebot robots.txt disallow rule.
It's buried in this old topic... [webmasterworld.com...]
*** If urls are blocked in robots.txt does google still take the redirect into consideration and index the redirected-to page? ***
Anything placed behind the robots.txt disallow rule will not be accessed and its status will not be ascertained.
It can still appear in SERPs as a URL-only entry if something still links to it.
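In other words, once a URL is disallowed, Googlebot never requests it again, so any 301 sitting on that URL is never even seen. A quick illustration, using a hypothetical /store/ path:

    User-agent: *
    Disallow: /store/
    # Googlebot won't fetch anything under /store/, so a 301 placed on those
    # URLs goes unnoticed; the bare URL can still show up in the SERPs
    # as a URL-only entry if other pages link to it.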
It's my understanding that a 301 is used to report a permanent move of content from one url to another...in my case all the old urls will remain embedded in links on my pages and aren't going anywhere. While I have lost traffic up front by blocking those urls I might also get a boost over time for being dupe free. Or is that just wishful thinking?
Google will continue to exclude your site or directories from successive crawls if the robots.txt file exists in the web server root. If you do not have access to the root level of your server, you may place a robots.txt file at the same level as the files you want to remove. Doing this and submitting via the automatic URL removal system will cause a temporary, 180 day removal of your site from the Google index, regardless of whether you remove the robots.txt file after processing your request. (Keeping the robots.txt file at the same level would require you to return to the URL removal system every 180 days to reissue the removal.)
That's a Google Scholar page and the information there is a bit old. There was a very welcome change when the url removal tool was migrated into Webmaster Tools.
To reinclude content
If a request is successful, it appears in the Removed Content tab and you can reinclude it any time simply by removing the robots.txt or robots meta tag block and clicking Reinclude. Otherwise, we'll exclude the content for six months.
Yes, your approach may be clarifying for Google which URLs really "matter" and that clarification can help. But a lot of the outcome will depend on an extremely thorough attention to detail.
Well, as long as the urls won't be pulled from the index for 180 days simply from blocking them in robots.txt, I will leave it in place, work on zapping the remaining dupe content, and see if that has any effect on traffic. I made a change to robots.txt to allow other robots to index those pages but not googlebot. I suspect that if google categorizes those urls as dupes they aren't performing well anyway, and the traffic I have lost from those dupe urls was coming in from yahoo and other sources. If there's no improvement in traffic over time I may remove the blocked urls from robots.txt and just redirect them to the proper url... what a job that will be!
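For what it's worth, the googlebot-only block described above would look something like this - /store/ is just a stand-in for whatever directory holds the duplicate urls:

    # Block only Googlebot from the duplicate store urls
    User-agent: Googlebot
    Disallow: /store/

    # Everyone else can crawl normally
    User-agent: *
    Disallow: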
When dealing with redirects, I like to remember the old carpenter's motto "measure twice, cut once".
I have a "new" carpenter's motto when it comes to redirects and such...
"Measure thrice, don't cut. Measure thrice again, don't cut. Measure once more, now make the first mark. Measure once more and cut." :)
A bit overboard, but when it comes to the technical underpinnings of a website, one minor mishap can wreak havoc for months down the line; I know that from personal experience years ago. Once bitten, twice shy, as they say. :)