
Exactly what is TrustRank?

And is it a part of the Jagger update?

     
1:07 am on Nov 1, 2005 (gmt 0)

WebmasterWorld Senior Member annej is a WebmasterWorld Top Contributor of All Time 10+ Year Member



People are asking for more information on TrustRank in the long Jagger thread, and the questions are getting lost in all the posts. I thought maybe if the topic had its own thread it would help get people to discuss this.
7:36 am on Nov 2, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I disagree, actually. Look at the SERPs only 6 months ago, dominated by unthemed links and reciprocals. If TrustRank had been engaged at that time, those sites dominating the charts with thousands of unthemed links would have bombed.

Theming has nothing to do with TrustRank.

Also, it is senseless to decide whether or not TrustRank is applied by looking at the SERPs. PageRank, TrustRank or whatever is just one off-page factor, while there are about 100 factors which influence the SERPs. (There are better methods.)

8:07 am on Nov 2, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



"Trustrank is only one off page factor."

Yes, but it includes possibly hundreds of variables, many of which are the off-page factors you are talking about.

"Theming has nothing to do with TrustRank."

Not in establishing which sites are designated as seeds, but it sure does in terms of the value of a prescribed link from an authority site to a non-authority site, and this is precisely what we are talking about.

9:42 am on Nov 2, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Yes, but it includes possibly hundreds of variables, many of which are the off-page factors you are talking about.

TrustRank doesn't include hundreds of variables. There are only a few parameters within the algorithm.

Not in establishing which sites are designated as seeds, but it sure does in terms of the value of a prescribed link from an authority site to a non-authority site, and this is precisely what we are talking about.

The value of links within the TrustRank algorithm is purely based on the linking structure and has nothing to do with theming (which is an on-page property).

BTW, "authority" within the TrustRank concept means "seeds".

9:58 am on Nov 2, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



the idea of "about 100 factors affect ranking" is pretty antique now. The algo is pretty removed from hardwired factors IMHO.
12:23 pm on Nov 2, 2005 (gmt 0)

WebmasterWorld Senior Member kaled is a WebmasterWorld Top Contributor of All Time 10+ Year Member



I have seen compelling examples that appear to show that the text surrounding a link is used, not just the link text. It is therefore likely that overall page content also plays some part when evaluating links. Clearly, Google have the technology to implement this - it would simply be a reuse of AdSense technology.

With technology of this sort, whilst it might be possible to trade PR for money, the effect of paid links on actual SERPs should be quite small if everything is working properly (since each paid link would have to be on a relevant page - very tricky).

Essentially, search engines attempt to rank pages according to relevance. This requires contextual analysis. A TrustRank system requires human editors. This contradicts Google philosophy.

Having said that, a Distrust system could be used to devalue links from bad sites.
NOTE: I did not say anything about applying penalties - I am vehemently opposed to penalties in general.

Kaled.

7:21 pm on Nov 2, 2005 (gmt 0)

10+ Year Member



But isn't the same applied to PageRank also, in terms of "seeds"? There must be a seed dataset of all PR10 sites in order for the PR formula to work.

I think that TrustRank is exactly the same as PageRank, with the difference that there won't be a TR toolbar where SEOers would buy/sell links according to its value.

7:51 pm on Nov 2, 2005 (gmt 0)



It's the same as PageRank, but it counts only when the site giving a link has "trust." Big Spammer may have PR7 and link to Big Spammer 2: he still gives them PageRank, but since he has no "trust", no trust is given.
9:03 pm on Nov 2, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



But isn't the same applied to PageRank also, in terms of "seeds"? There must be a seed dataset of all PR10 sites in order for the PR formula to work.

No, in the classic PR algorithm the "seeds" are uniformly distributed, i.e. there is no special seed set of PR10 pages; PR10 pages are just a consequence of the linking structure.
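
To make that concrete: structurally the two algorithms differ only in the jump (teleportation) vector. A sketch with invented page names:

# The one structural difference between classic PR and TrustRank: where the
# "random jump" lands. Pages and seed choice invented for illustration.
pages = ["a", "b", "c", "d"]
seeds = {"a"}

d_pagerank = {p: 1.0 / len(pages) for p in pages}  # uniform over all pages
d_trustrank = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}

print(d_pagerank)   # {'a': 0.25, 'b': 0.25, 'c': 0.25, 'd': 0.25}
print(d_trustrank)  # {'a': 1.0, 'b': 0.0, 'c': 0.0, 'd': 0.0}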

9:52 pm on Nov 2, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



What happens when a site you "trust" goes off the deep end and abuses the trust through corruption?
9:56 pm on Nov 2, 2005 (gmt 0)



>> What happens when a site you "trust" goes off the deep end and abuses the trust through corruption?

I think the algo can flag them by measuring the % of outgoing "bad" links or something. Then a reviewer could check them again. I know I read the paper; I just don't remember exactly.

I think those college students at eval.google play that role...
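
A sketch of that flagging idea, as best as it can be reconstructed: score a page by the fraction of its outlinks that point into a known-bad set, and queue it for a human re-check past some threshold. The sets and the threshold here are invented for illustration.

def flag_for_review(outlinks, bad_sites, threshold=0.5):
    # flag when at least half the outgoing links point at known-bad sites
    if not outlinks:
        return False
    bad = sum(1 for url in outlinks if url in bad_sites)
    return bad / len(outlinks) >= threshold

bad_sites = {"spam1.example", "spam2.example"}
print(flag_for_review(["spam1.example", "spam2.example", "ok.example"], bad_sites))
# True -> hand this one to a reviewer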

5:26 am on Nov 3, 2005 (gmt 0)

10+ Year Member



Hi All,

The topic sounds really good and interesting. But imagine how many keywords there are; let's leave aside the unwanted ones and just discuss the competitive keywords. It would require a huge, huge staff, and reviews done once are not sufficient, as I see at least one new spam site every day for any highly competitive keyword. I don't think it is possible.

3:02 pm on Nov 3, 2005 (gmt 0)

WebmasterWorld Senior Member annej is a WebmasterWorld Top Contributor of All Time 10+ Year Member



Just to clearly define what we are talking about, here is the abstract for the original study, completed in 2004. To see it, go to
[dbpubs.stanford.edu:8090...] and scroll down for the abstract.

Basically, they show that by manually identifying fewer than 200 reputable seed pages, they can use the link structure of the web to discover other good pages. They claim they could filter out most of the spam on the web this way. So it seems there would be no hand-checked sites beyond the seed pages.
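
For the curious, the paper also explains how the seed candidates are chosen before the human review: roughly an "inverse PageRank", i.e. ordinary PageRank run on the graph with every link reversed, so pages whose trust would reach many others score highest. A toy sketch (page names invented):

def reverse(links):
    rev = {p: [] for p in links}
    for p, outs in links.items():
        for q in outs:
            rev[q].append(p)  # flip every edge
    return rev

def pagerank(links, alpha=0.85, iterations=50):
    n = len(links)
    r = {p: 1.0 / n for p in links}
    for _ in range(iterations):
        new_r = {p: (1 - alpha) / n for p in links}
        for p, outs in links.items():
            if not outs:
                continue
            share = alpha * r[p] / len(outs)
            for q in outs:
                new_r[q] += share
        r = new_r
    return r

graph = {"hub.example": ["a.example", "b.example"],
         "a.example": [], "b.example": []}
inv = pagerank(reverse(graph))
# the highest scorers reach the most pages -- those go to the human reviewers
print(sorted(inv, key=inv.get, reverse=True)[:1])  # ['hub.example']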

Now if Google is really starting to use TrustRank in their search results, they may have changed the method somewhat. And we don't even know for sure if they have implemented it. The reason we suspect they might have is that Google has registered a trademark for "TrustRank" and has applied for a patent for it.

[edited by: annej at 3:04 pm (utc) on Nov. 3, 2005]

3:04 pm on Nov 3, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Google marketing the word "Trust" seems a little off-base unless it's a product geared towards the financial industry or something like web certificates.

Imagine how bad the concept of Google "trusting" a website with a big green TrustRank of 9 could be abused!

3:09 pm on Nov 3, 2005 (gmt 0)

WebmasterWorld Senior Member annej is a WebmasterWorld Top Contributor of All Time 10+ Year Member



I would think that the 'trusted' pages would have to be a very well-kept secret, so I doubt we would be seeing a TrustRank result on our Google toolbar. It seems to me that making it easy to see the PageRank of individual pages caused a lot of problems for Google.

But then what do I know about such things.

9:40 am on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Where does this TrustRank theory come from? Is it just something people made up, or has G said something?

They made it up.

My basic understanding is that TrustRank is different to PageRank because it takes the age of links into consideration.

New links have little value.

Old links have much value.

New links can be bought in seconds, whereas old links can't, as you can't reverse time!
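
If that guess is right, the simplest form it could take is an age discount on each link's value. The function and numbers below are purely speculative, nothing from Google:

def age_weight(age_in_days, ramp_days=365):
    # brand-new links count for almost nothing; value ramps up over a year
    return min(age_in_days / ramp_days, 1.0)

print(age_weight(7))    # ~0.02 -> a week-old (possibly bought) link barely counts
print(age_weight(730))  # 1.0 -> a two-year-old link counts in full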

9:46 am on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Would you "trust" a recommendation from a company that's been in place for 1 month or would you prefer to "trust" a recommendation from a company that had been in place 10 years?

If someone has been recommending a fine widget store for 5 years, you can assume it's a decent resource (or the webmaster never updates his links!).

10:22 am on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Remember, the original definition of PageRank is:
PageRank can be thought of as a model of user behavior. We assume there is a "random surfer" who is given a web page at random and keeps clicking on links, never hitting "back" but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank
[www-db.stanford.edu...]
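
That definition translates directly into a simulation, which may help readers see what is being claimed. The three-page graph is invented; the visit frequencies approximate PageRank:

# The quoted random-surfer model in code: with probability d the surfer
# follows a random outlink; otherwise he gets bored and restarts on a
# random page. Visit frequency approximates PageRank.
import random

def random_surfer(links, steps=100000, d=0.85):
    visits = {p: 0 for p in links}
    page = random.choice(list(links))
    for _ in range(steps):
        visits[page] += 1
        out = links[page]
        if out and random.random() < d:
            page = random.choice(out)          # keep clicking links
        else:
            page = random.choice(list(links))  # bored: jump to a random page
    return {p: v / steps for p, v in visits.items()}

print(random_surfer({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))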

Now that is just plain crazy on the web as it exists today... Imagine how helpful a city travel guide would be if it started "we assume there is a random tourist who visits streets and back alleys at random".

That random tourist would pretty soon be toes up in the morgue.

On the modern web, people only visit where it is safe to visit; a search engine that invites people to run into dark alleys is a dangerous search engine.

Anything Google does to provide a safe map of the web is of advantage to its users.

2:04 pm on Nov 4, 2005 (gmt 0)



>> New links can be bought in seconds, whereas old links can't, as you can't reverse time!

oh great. So this week alone I "bought" a PC World Mag, Wash Post (sort of a reprint of the PCWorld) and a nice .edu link. Not to mention that Yahoo & MSN, among others, republish them.

2:44 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member kaled is a WebmasterWorld Top Contributor of All Time 10+ Year Member



I don't want to burst bubbles but I have doubts about Google's ability to differentiate old links from new.

Google cannot distinguish duplicate pages from originals and often removes the wrong page. If Google does not keep (reliable) age information for whole pages, I doubt they do so for individual links.

Kaled.

2:45 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



oh great. So this week alone I "bought" a PC World Mag, Wash Post (sort of a reprint of the PCWorld) and a nice .edu link. Not to mention that Yahoo & MSN, among others, republish them.

I used the word bought because that's how people were manipulating PageRank.

Buying in link popularity rather than it occurring naturally.

Perhaps it's unlikely someone would pay ££££ for links if they didn't see any benefit from them for at least 6-12 months... most people would just presume they didn't help rankings at all.

2:49 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I don't want to burst bubbles but I have doubts about Google's ability to differentiate old links from new.
Google cannot distinguish duplicate pages from originals and often removes the wrong page. If Google does not keep (reliable) age information for whole pages, I doubt they do so for individual links.

So if you gave me a brand new domain name and we pointed all the links from an existing site to it, then it would rank pretty much immediately?

Because if age is not a factor then that would be the case.

2:59 pm on Nov 4, 2005 (gmt 0)



>> I don't want to burst bubbles but I have doubts about Google's ability to differentiate old links from new.

That's not hard to do at all. They scan page A and notice a link to page B. They just note that.
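
A minimal sketch of "they just note that": record the first crawl date on which each link was observed. The storage scheme is invented; nobody outside Google knows how they actually do it.

from datetime import date

first_seen = {}  # (source_url, target_url) -> date the link was first observed

def record_links(source, targets, crawl_date):
    for target in targets:
        first_seen.setdefault((source, target), crawl_date)  # keep earliest date

record_links("a.example", ["b.example"], date(2004, 6, 1))
record_links("a.example", ["b.example"], date(2005, 11, 4))  # re-crawl: no change
print(first_seen[("a.example", "b.example")])  # 2004-06-01 -> an "old" link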

3:34 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Who is saying that TrustRank takes the age of links into account?!
TrustRank uses the standard transition matrix.
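
For reference, "standard transition matrix" means the same matrix PageRank walks on: entry T[p][q] is 1/outdegree(q) when q links to p, and 0 otherwise. TrustRank then iterates t = alpha * T * t + (1 - alpha) * d, with d concentrated on the seeds. A toy sketch:

def transition_matrix(links):
    pages = list(links)
    T = {p: {q: 0.0 for q in pages} for p in pages}
    for q, outs in links.items():
        for p in outs:
            T[p][q] = 1.0 / len(outs)  # q splits its weight over its outlinks
    return T

T = transition_matrix({"a": ["b", "c"], "b": ["c"], "c": []})
print(T["c"]["a"])  # 0.5 -- half of a's weight flows to c

Note there is no timestamp anywhere in that matrix, which is the poster's point.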
4:37 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Who is saying that TrustRank takes the age of links into account?!

I am, based on the patent information below.



Information retrieval based on historical data.
A system identifies a document and obtains one or more types of history data associated with the document. The system may generate a score for the document based, at least in part, on the one or more types of history data.
United States Patent Application 20050071741 [appft1.uspto.gov]

5:27 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I thought we were talking about the TrustRank algorithm...
5:33 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



...whereas I thought people were trying to say there was no algorithm and that the system relied on human intervention.
6:04 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member annej is a WebmasterWorld Top Contributor of All Time 10+ Year Member



It begins with human intervention selecting the seed sites; then the algorithm comes into play.

[dbpubs.stanford.edu:8090...] and scroll down for the abstract

But Google may not be implementing it this way; in fact, we aren't certain they are using it yet.

6:13 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



"I have seen compelling examples that appear to show that the text surrounding a link is used not just the link text. It is therefore likely that overall page content also plays some part when evaluating links. Clearly, Google have the technology to implement this - it would simply be a reuse of adsense technology."

This seems like a good way to evaluate the relevancy of a link. And it reinforces the notion of getting links from relevant sub-sections of a directory and getting links via articles submissions.(i.e., even if this is a way to improve relevancy, it's already been gamed.)

6:56 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I have seen compelling examples that appear to show that the text surrounding a link is used, not just the link text. It is therefore likely that overall page content also plays some part when evaluating links. Clearly, Google have the technology to implement this - it would simply be a reuse of AdSense technology.

This technology also exists in AdWords. You can now enter a URL and it returns a list of relevant keywords based on the content of the page. It works very well, and sometimes it is even clever enough to know which terms are related.

I used to scoff at the various theories tossed around that discussed ranking algorithms. They seemed WAY too processor-intensive to be practical.

However, it does appear there is something going on that is able to determine the value of a link and assign it a weighting factor. Perhaps they can establish the theme of the site a link is coming from, or they do look at the surrounding text.

7:50 pm on Nov 4, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



It begins with human intervention selecting the seed sites; then the algorithm comes into play.
[dbpubs.stanford.edu:8090...] and scroll down for the abstract

Thanks for the link! :)

Very interesting. Now instead of using DMOZ why don't Google create a directory of their own which we can pay to be in and get a little bit of magical trust seed from...

I am seriously shocked they haven't thought of generating some revenue with a paid inclusion directory as yet!
