---- Why AOL and MSN execs need to be mathematicians
doc_z - 9:31 pm on Apr 1, 2004 (gmt 0)
Herein lies Google's problem. With the number of initial authorities like Yahoo, Looksmart and DMOZ diminishing for them, the likelihood of results being artificially boosted simply by the connectedness of the community to which they belong would be very high. Consider, for example, the large network of teenage gamers out there on the net who have websites and link to each other. Without the appropriate source values, our results could be dominated by the collective opinions of warez kiddies, free porn junkies and Britney Spears fans.
I'm not talking about the initial PageRank vector as used in the power method for the conditioned transition matrix. I am talking about the conditioning vector or what Google refers to lately as the "personalization vector" which is used in every iteration.
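To make the distinction concrete, here is a minimal sketch of the power method in which the personalization vector v enters every single iteration, not just as a starting point. The graph, function names and damping value are illustrative assumptions, not Google's actual parameters.

```python
import numpy as np

def pagerank(M, v, d=0.85, tol=1e-10, max_iter=1000):
    """Iterate x <- d*M@x + (1-d)*v.

    v is the personalization (conditioning) vector: it appears in every
    iteration and shapes the fixed point. The initial x only affects how
    we start, not where we converge.
    """
    x = v.copy()
    for _ in range(max_iter):
        x_new = d * M @ x + (1 - d) * v
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Column-stochastic transition matrix for a toy 3-page graph:
# page 0 links to pages 1 and 2; page 1 links to 2; page 2 links to 0.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
v = np.array([1/3, 1/3, 1/3])  # uniform personalization vector
pr = pagerank(M, v)
```

Replacing the uniform v with a vector concentrated on chosen pages (or topics) biases the whole ranking toward them, which is the sense in which v conditions the result.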
Google's personalized PageRank vectors are based on topic-related PR vectors. This model isn't based on authorities, i.e. the weight (the transition probability in the random surfer model) is topic dependent, but there is no higher weight for qualitatively good pages. Therefore, there is no need for authorities and no dependence on directories. You just have to determine the topic of each web page.
Of course, even in this case there are no recursive definitions, just recursive iteration schemes (the same ones that can be used in the non-personalized case). And even in this case you can calculate the final PR in one step, without iterations.
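The one-step calculation mentioned above is just a linear solve: the fixed point of x = d*M@x + (1-d)*v is the solution of (I - d*M) x = (1-d) v. A small sketch, reusing the same toy graph (an assumption for illustration):

```python
import numpy as np

d = 0.85
# Same illustrative 3-page column-stochastic transition matrix as before.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
v = np.array([1/3, 1/3, 1/3])  # personalization vector

# Solve (I - d*M) x = (1-d) v directly, with no iterations.
x = np.linalg.solve(np.eye(3) - d * M, (1 - d) * v)
```

For web-scale matrices a direct solve is impractical, which is why iteration schemes are used in practice, but mathematically the result is the same vector.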
Even in the TrustRank model with artificial sources, initial conditions don't play a role.
Sorry, this is blatantly wrong, but I'm only prepared to debate it offline, as this is not the research forum.
I was talking about the initial vector used in iteration schemes to calculate TrustRank/PR (!), such as Jacobi iteration. In this case (which I was referring to) the statement given above is obviously valid.
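The point about the initial vector can be checked directly: for this iteration scheme the map x -> d*M@x + (1-d)*v is a contraction (d < 1), so every starting vector converges to the same fixed point. A sketch with the same illustrative toy graph, starting from two very different initial vectors:

```python
import numpy as np

def iterate(x0, M, v, d=0.85, n=200):
    """Run n steps of x <- d*M@x + (1-d)*v from initial vector x0."""
    x = x0.copy()
    for _ in range(n):
        x = d * M @ x + (1 - d) * v
    return x

M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
v = np.array([1/3, 1/3, 1/3])

# All mass on page 0 vs. all mass on page 2: the results agree.
a = iterate(np.array([1.0, 0.0, 0.0]), M, v)
b = iterate(np.array([0.0, 0.0, 1.0]), M, v)
```

The error from the initial vector shrinks by a factor of d per step, so after 200 steps it is far below machine precision; only v (and M) determines the limit.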