Forum Moderators: open
Yes, I realize the whole thing is just a mathematical model and the final results can be calculated in various ways. But my point is that PR arriving at a non-indexed page is lost to the site, and this can be significant. For example, if a typical page has, say, 10 outgoing links to indexed pages but 5 to non-indexed pages (e.g., terms, privacy, about, contact, copyright), then on each iteration the PR lost to the site is PR * d * 1/3.
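A quick sketch of that arithmetic, using the hypothetical numbers from the example above (10 indexed links, 5 non-indexed, and an assumed damping factor d = 0.85):

```python
# Per-iteration PR loss in the example above (numbers are illustrative).
d = 0.85                       # assumed damping factor
pr = 1.0                       # assumed PR of the page before the iteration
indexed, non_indexed = 10, 5   # outgoing links to indexed / non-indexed pages

# Share of the page's distributable PR that goes to non-indexed pages.
leaked = pr * d * non_indexed / (indexed + non_indexed)
print(leaked)                  # 0.2833... = PR * d * 1/3
```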
1. The original question was about the change for the homepage and not the whole site. Therefore, msg #24 gave the result for the homepage.
2. Even the whole site is losing not just a PR of
d * PR_Home * y / (x + y)
(the PR which would be transferred to the noindex, nofollow pages), but
d / (1-d) * PR_Home * y / (x + y)
which is significantly higher. Therefore, the result in your example is PR_Home * d/(1-d)/3. (You have neglected higher-order effects, i.e. the PR which is lost due to the fact that these pages can't distribute PR.)
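One way to see where the d/(1-d) factor comes from: the first pass loses a share proportional to d, and each later pass compounds another factor of d, so the total is the geometric series d + d^2 + d^3 + ... = d/(1-d). A quick numerical check (assuming d = 0.85):

```python
# The factor d/(1-d) is the sum of the geometric series d + d^2 + d^3 + ...
d = 0.85  # assumed damping factor

series = sum(d ** k for k in range(1, 500))  # partial sum, converges fast
closed_form = d / (1 - d)
print(series, closed_form)  # both ~= 5.6667
```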
3. To say it again: iterations relate to the scheme used to solve the set of linear equations. This is only a technique to speed up the calculation and has no meaning in itself. There are several ways to solve these equations, and the Jacobi algorithm (which is mentioned in the original papers) is just one method. You can even get the exact (final) result in one step without any initial guess or any iteration. However, this would be computationally much more expensive.
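To illustrate the point, here is a hypothetical sketch: a made-up 3-page link graph (A->B, A->C, B->C, C->A) with d = 0.85, solved both by the Jacobi-style iteration and by a direct Gaussian-elimination solve of the same linear system. Both give the same PageRank vector; the direct solve just does more work per step.

```python
# Toy 3-page link graph, solved two ways (all numbers are illustrative).
d, n = 0.85, 3
# Column-stochastic link matrix: M[i][j] = share of page j's PR sent to page i.
M = [[0.0, 0.0, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 1.0, 0.0]]

# Iterative scheme: x <- (1-d)/n + d*M*x, repeated until it converges.
x = [1.0 / n] * n
for _ in range(200):
    x = [(1 - d) / n + d * sum(M[i][j] * x[j] for j in range(n))
         for i in range(n)]

# Direct solve of (I - d*M) y = (1-d)/n by Gaussian elimination with
# partial pivoting -- exact answer, no initial guess, no iteration.
A = [[(1.0 if i == j else 0.0) - d * M[i][j] for j in range(n)]
     for i in range(n)]
b = [(1 - d) / n] * n
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
y = [0.0] * n
for i in range(n - 1, -1, -1):
    y[i] = (b[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))) / A[i][i]

print(max(abs(x[i] - y[i]) for i in range(n)))  # ~0: both methods agree
```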
I did not "neglect higher order effects". For the sake of simplicity I excluded them by stating that my formula applied only to "each iteration".
However it doesn't really matter whether my calculation is accurate or not - it was just an example to point out that PR leakage can be significant. The title of this thread is about the need to block page rank leakage, not about how to calculate it.
My question is related to the original although not precisely the same, so perhaps I should start a different thread.
Harry
I don't see how a redirect would be beneficial. Google would still see it as a link.
Well, instead of an external link (a link to another domain/site), it would become an internal link.
Instead of linking to www.yourdomain.com:
<a href="http://www.yourdomain.com">Link</a>
I could use this link to my own domain to get the same result:
<a href="http://www.mydomain.com/redirect.cgi?www.yourdomain.com">Link</a>
This way PR will not leak to www.yourdomain.com.
In Perl, this redirect.cgi script needs only two lines (plus the shebang):
#!/usr/bin/perl
$url = $ENV{'QUERY_STRING'};
print "Location: $url\n\n";
Well, that's true in some cases, but that's not how it always works. I have seen PR leak from such links if one of two conditions is met. I am surprised no one else has noticed it yet. It's been there for a long time.
I'm not sure, but I think that with such a method, PR doesn't leak to yourdomain but it does leak to the file redirect.cgi, which I don't believe has many links.
Maybe I'm wrong, it's only an interpretation that seems the most logical to me.
Greetings,
Herenvardö
As far as I can see, the only way this can be done is if a search engine can be fooled into not recognizing the link as a link, and therefore not including it in its PR calculations. But that means you can't use <a href= or 'onclick', etc.
Not only would that be immoral in the Great God Google's eyes, no doubt resulting in a blast of divine lightning, but I also suspect it's impossible. The only thing that can be done is to minimise the number of unhelpful links or minimise their effects by putting them on deep low-PR pages.
>I'm not sure, but I think that with such a method, PR doesn't leak to yourdomain but it does to the file redirect.cgi, which I don't believe has many links. Maybe I'm wrong, it's only an interpretation that seems the most logical to me.
Of course I check the backlinks of the site that is launched by the redirect script to confirm.
>But that means you can't use <a href= or 'onclick', etc.
>Not only would that be immoral in the Great God Google's eyes, no doubt resulting in a blast of divine lightning, but I also suspect it's impossible.
Impossible? Nothing is impossible! Maybe we are not able to find a way, but it's not impossible.
What about using a Java applet to put the link in? Is G able to spider bytecode?
I hope some time will pass until G can do that... time enough to search for a new solution ;) That's SEO!
Greetings,
Herenvardö
Am I right in concluding that they probably pass PR to a page of their own (www.theirURL.com/links.htm), but will "redirect" on click (OK, it's not an official redirect) to www.MySuckerSite.com?
This is very bizarre and also very misleading for bots. I personally would drop them.
But technically this answers the original question as well as HarryM's.
I have been considering whether it is possible to recover some of the PR wasted on non-indexed pages. As an experiment I have set up a new page with its meta set to 'noindex', from which hangs a new indexable page. If that page obtains PR, it can only be coming via the 'noindex' page. Just have to wait and see.
Harry
The question is whether PR is nonetheless leaked from the page containing the link, because if it is, where is it being leaked to?
Not everyone will see it. I have my Norton security set to 'high' which automatically bans Java applets. Even with 'medium' where the user is given the option, the recommendation is to ban Java applets which probably 90% of users obey.
Hoping to be useful,
Herenvardö
Could this help with PR drain?
<html>
<head>
<meta http-equiv="refresh" content="0; url=http://www.abc.com/affiliate123">
</head>
<body>
</body>
</html>
It doesn't matter where they are kept; in the end they are communicated (e.g., via a 302 redirect), so every bot can see them.
One exception is of course JavaScript.
<?php if (!$bot) { ?>
<p><a href="linkURL">link text</a> surrounding text</p>
<?php } ?>
If you have a variable called $bot that is true when a robot is detected, you will be giving that link only to users, not to bots.
Greetings,
Herenvardö
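For illustration, a minimal sketch of the kind of user-agent check that might set such a $bot flag (in Python, with a made-up token list; real bot detection is more involved, and user agents can be spoofed):

```python
# Naive bot detection by user-agent substring (hypothetical token list,
# shown only to illustrate how a $bot-style flag could be set).
BOT_TOKENS = ("googlebot", "slurp", "bingbot")

def looks_like_bot(user_agent):
    """Return True if the user-agent string contains a known bot token."""
    ua = user_agent.lower()
    return any(token in ua for token in BOT_TOKENS)

print(looks_like_bot("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True
print(looks_like_bot("Mozilla/5.0 (Windows NT 10.0)"))            # False
```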
>If it's not an issue, then why does my homepage drop more than 20 positions every time I link to an external page from my homepage?
First ask yourself what PR has to do with SERP position. My answer is: not a whole heck of a lot. It hasn't had much to do with SERP position for quite some time.
There are reasons to do the things that get PR, to a point. But linking out causing so-called PR leak, causing a page to drop in the SERPs, is IMHO absurd.
Linking out to topically unrelated material causing a page to have its theme watered down, causing a site to drop in the SERPs for a particular keyword combo, makes more sense to me.
dirkz: Doesn't matter where they are kept, in the end they are communicated (e.g., via 302), so every bot can see it.
Please explain. Where is the connection between the link on the web page (e.g., "view.php?string", a link to an external PHP redirect script that contains no URLs) and a specific URL in a list of URLs in a separate text file (a file referred to in the PHP script, but not linked to)?