ZydoSEO - 4:56 pm on Feb 26, 2013 (gmt 0)
Personally, if at all possible, I always choose 301 redirects over the canonical link element to correct canonicalization issues. I ONLY use the canonical link element if the redirects are far too complex or impossible to implement. There is one distinct advantage...
While the canonical link element may solve the duplicate content and split PageRank/link juice issues surrounding non-canonical URLs, it perpetuates other people linking to your site with non-canonical URLs. By this I mean, most people create links by browsing to the page they want to link to and copying the URL of that page from their browser address bar. They paste that URL into their link and they're done. If you're using the canonical link element, people will continue to see non-canonical URLs in their browsers and will continue to copy and paste those non-canonical URLs when creating links. Each time this happens in the future, you're likely only getting 85-90% of the PageRank/link juice that you could have gotten had they linked with the canonical URL (this assumes Google also causes canonical link elements to decay or dissipate PageRank just like a link, which I think is a good assumption).
This is NOT true when canonicalization issues are corrected with a 301 redirect. Not only does the 301 do everything a canonical link element does, it also almost 100% ensures that all future links pointing to that page (unless the URLs are manually typed in) will use the canonical form of that URL. If someone wants to link to your URL and they navigate to that page by clicking a non-canonical link on your site or another site, the browser will detect the 301, request the canonical URL, and change the URL in the address bar to the canonical form. When the user copies the URL from their browser to create a link, it will now ALWAYS be the canonical URL. This maximizes the amount of juice all future links pass to your site.
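Just to illustrate what a browser (or any redirect-following client) does here, a quick Python sketch using the requests library; the URLs below are hypothetical placeholders:

```python
import requests

# Request a non-canonical form of the URL (hypothetical example).
response = requests.get("http://example.com/Page?sessionid=123")

# requests follows the 301 automatically, just like a browser would.
for hop in response.history:
    print(hop.status_code, hop.url)  # e.g. 301 http://example.com/Page?sessionid=123

# The final URL is the canonical one -- this is what ends up in the
# "address bar" and gets copied into future links.
print(response.url)
```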
As far as the Cutts video...
I honestly don't see why this is such big news to so many webmasters and SEOs. A few years back, when Cutts announced that 301s did cause a slight loss of PageRank being passed, I knew then that the amount lost was likely exactly equal to the amount lost by a link. Blame it on "d"! ;)
If you've studied the original PageRank algorithm in "The Anatomy of a Large-Scale Hypertextual Web Search Engine", the formula is based on a "random surfer" model (yes, I know it's changed a lot since then, but the basic concept is probably the same). The formula has a built-in damping factor ("d") representing the probability that the random surfer keeps clicking links on the current page, rather than getting bored and navigating directly to another random page. This damping factor is typically set around 85% (or .85) according to the original docs, and I've always heard Googlers use figures in the 85-90% range. It is this damping factor that causes a link's PageRank/link juice to decay or dissipate roughly 10-15%. And it is this same damping factor that causes that exact same amount (as Matt said in the video) of PageRank to be lost or to decay/dissipate when you 301 redirect a URL.
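For reference, the formula in that paper is PR(A) = (1-d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), where T1...Tn are the pages linking to A and C(T) is the number of outbound links on T. Here's a toy sketch of the per-link term (the numbers are made up purely for illustration):

```python
# Per-link term from the original formula:
#   PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
d = 0.85  # damping factor, per the original paper

def passed_per_link(pr, outbound_count, d=0.85):
    """PageRank a page contributes to EACH page it links to: d * PR(T)/C(T)."""
    return d * pr / outbound_count

# A page with PR 1.0 and a single outbound link passes 0.85 of it along,
# i.e. roughly 15% "decays" on that hop.
print(passed_per_link(1.0, 1))  # 0.85
```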
Think about it... If browsers didn't automatically follow redirects, then in the random surfer model a redirected page would be synonymous with a page that used to have multiple inbound and outbound links but has had all of its outbound links replaced with a single outbound link pointing to the target URL of the 301 redirect. And assuming d=.85 in this scenario (since there is only 1 outbound link on the page), roughly 85% of all available PageRank could be passed out on that single link to the target URL of the redirect.
I think what Cutts is saying is that:
Page A with inbound links and 1 outbound link to Page B
passes EXACTLY the same amount of PageRank/link juice as when that same:
Page A with inbound links is 301 redirected to Page B
The decay or dissipation in PageRank is the same either way, as the sketch below illustrates.
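Here's that equivalence in toy form, assuming the single-outbound-link model described above (my speculation, not anything Google has published); all numbers are hypothetical:

```python
d = 0.85    # damping factor
pr_a = 1.0  # PageRank Page A has accumulated from its inbound links

# Case 1: Page A is a real page with exactly one outbound link, to Page B.
passed_by_link = d * pr_a / 1  # C(A) = 1 outbound link

# Case 2: Page A is 301 redirected to Page B, treated as the same
# single-outbound-link special case.
passed_by_301 = d * pr_a

print(passed_by_link, passed_by_301)  # 0.85 0.85 -- identical decay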
Why would Google overcomplicate an already very complex PageRank algorithm with an exception to cover 301 redirects when a redirect can simply be treated as a special case (a page whose outbound link count = 1) that is already covered by the general algorithm? They likely wouldn't create exception code because there would be no need, and they'd want to keep the algorithm/formula as clean and elegant as possible.
Just my $0.02.