
Google SEO News and Discussion Forum

This 66 message thread spans 3 pages.
exactly what is Trust Rank?
and is it a part of the Jagger update?
annej




msg:704648
 1:07 am on Nov 1, 2005 (gmt 0)

People are asking for more information on TrustRank in the long Jagger thread and the questions are getting lost in all the posts. I thought maybe if the topic had its own thread it would help get people to discuss this.

 

doc_z




msg:704678
 7:36 am on Nov 2, 2005 (gmt 0)

I disagree, actually. Look at the SERPs only 6 months ago, dominated by unthemed links and reciprocals. If TrustRank had been engaged at that time, those sites dominating the charts with thousands of unthemed links would have bombed.

Theming has nothing to do with TrustRank.

Also, it makes no sense to decide whether TrustRank is applied by looking at the SERPs. PageRank, TrustRank or whatever is just one off-page factor, while there are about 100 factors which influence the SERPs. (There are better methods.)

CainIV




msg:704679
 8:07 am on Nov 2, 2005 (gmt 0)

"TrustRank is only one off-page factor."

Yes, but it includes possibly hundreds of variables, many of which are the off-page factors you are talking about.

"Theming has nothing to do with TrustRank."

Not in establishing which sites are designated as seeds, but it sure does in terms of the value of a prescribed link from an authority site to a non-authority site, and this is precisely what we are talking about.

doc_z




msg:704680
 9:42 am on Nov 2, 2005 (gmt 0)

Yes, but it includes possibly hundreds of variables, many of which are the off-page factors you are talking about.

TrustRank doesn't include hundreds of variables. There are only a few parameters within the algorithm.

Not in establishing which sites are designated as seeds, but it sure does in terms of the value of a prescribed link from an authority site to a non-authority site, and this is precisely what we are talking about.

The value of links within the TrustRank algorithm is purely based on the linking structure and has nothing to do with theming (which is an on-page property).

BTW, "authority" within the TrustRank concept means "seeds".

soapystar




msg:704681
 9:58 am on Nov 2, 2005 (gmt 0)

The idea of "about 100 factors affect ranking" is pretty antique now. The algo is pretty far removed from hardwired factors IMHO.

kaled




msg:704682
 12:23 pm on Nov 2, 2005 (gmt 0)

I have seen compelling examples that appear to show that the text surrounding a link is used, not just the link text. It is therefore likely that overall page content also plays some part when evaluating links. Clearly, Google have the technology to implement this - it would simply be a reuse of AdSense technology.

With technology of this sort, whilst it might be possible to trade PR for money, the effect of paid links on actual serps should be quite small if everything is working properly (since each paid link would have to be on a relevant page - very tricky).

Essentially, search engines attempt to rank pages according to relevance. This requires contextual analysis. A TrustRank system requires human editors, which contradicts Google's philosophy.

Having said that, a Distrust system could be used to devalue links from bad sites.
NOTE: I did not say anything about applying penalties - I am vehemently opposed to penalties in general.

Kaled.

moftary




msg:704683
 7:21 pm on Nov 2, 2005 (gmt 0)

But isn't the same applied to PageRank also, in terms of "seeds"? There must be a seed dataset of all PR10 sites in order for the PR formula to work.

I think that TrustRank is exactly the same as PageRank, with the difference that there won't be a TR toolbar where SEOers would buy/sell links according to its value.

walkman




msg:704684
 7:51 pm on Nov 2, 2005 (gmt 0)

It's the same as PageRank, but it counts only when the site giving a link has "trust." Big Spammer has PR7 and links to Big Spammer2; he still gives them PageRank, but since he has no "trust", no trust is given.

doc_z




msg:704685
 9:03 pm on Nov 2, 2005 (gmt 0)

But isn't the same applied to PageRank also, in terms of "seeds"? There must be a seed dataset of all PR10 sites in order for the PR formula to work.

No, in the classic PR algorithm the "seeds" are uniformly distributed, i.e. there is no predefined set of PR10 pages; PR10 pages are just a consequence of the linking structure.

JuniorOptimizer




msg:704686
 9:52 pm on Nov 2, 2005 (gmt 0)

What happens when a site you "trust" goes off the deep end and abuses the trust through corruption?

walkman




msg:704687
 9:56 pm on Nov 2, 2005 (gmt 0)

>> What happens when a site you "trust" goes off the deep end and abuses the trust through corruption?

I think the algo can flag them by measuring the % of outgoing "bad" links or something. Then a reviewer could check them again. I know I read the paper, I just don't remember exactly.

Those college students at eval.google I think play that role...
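The kind of automated flag walkman is recalling could be sketched roughly like this. This is only a guess at the idea; the domain names and the 30% threshold are invented for illustration:

```python
# Hypothetical spot-check: flag a site for human review when too large a
# fraction of its outgoing links point at known-bad domains.
# The domain names and the 30% threshold are invented for illustration.
KNOWN_BAD = {"spam-widgets.example", "link-farm.example"}

def flag_for_review(outgoing_links, threshold=0.3):
    """Return True when the share of links into known-bad domains
    exceeds the threshold, suggesting a human should re-check the site."""
    if not outgoing_links:
        return False
    bad = sum(1 for domain in outgoing_links if domain in KNOWN_BAD)
    return bad / len(outgoing_links) > threshold

print(flag_for_review(["good.example",
                       "spam-widgets.example",
                       "link-farm.example"]))   # 2/3 bad -> True
```

In the framing walkman describes, a check like this would only prioritize sites for manual review, not penalize them automatically.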

Pradyumna




msg:704688
 5:26 am on Nov 3, 2005 (gmt 0)

Hi All,

The topic sounds really good and interesting. But imagine how many keywords there are; let's leave aside the unwanted ones and discuss just the competitive keywords. It would require a huge staff, and reviews done once are not sufficient, as I see at least one new spam site every day for a highly competitive keyword. I don't think it is possible.

annej




msg:704689
 3:02 pm on Nov 3, 2005 (gmt 0)

Just to clearly define what we are talking about, here is the abstract for the original study completed in 2004. To see this go to [dbpubs.stanford.edu:8090...] and scroll down for the abstract.

Basically they show that by manually identifying fewer than 200 reputable seed pages, they could use the link structure of the web to discover other good pages. They claim they could filter out most of the spam on the web this way. So it seems there would be no hand-checked sites beyond the seed pages.

Now if Google is really starting to use TrustRank in their search results they may have changed the method somewhat. And we don't even know for sure if they have implemented it. The reason we suspect they might have is that Google has registered a trademark for 'TrustRank' and has applied for a patent for it.

[edited by: annej at 3:04 pm (utc) on Nov. 3, 2005]

JuniorOptimizer




msg:704690
 3:04 pm on Nov 3, 2005 (gmt 0)

Google marketing the word "Trust" seems a little off-base unless it's a product geared towards the financial industry or something like web certificates.

Imagine how bad the concept of Google "trusting" a website with a big green TrustRank of 9 could be abused!

annej




msg:704691
 3:09 pm on Nov 3, 2005 (gmt 0)

I would think that the 'trusted' pages would have to be a very well kept secret, so I doubt we would be seeing a TrustRank result on our Google toolbar. It seems to me that making it easy to see the PageRank of individual pages caused a lot of problems for Google.

But then what do I know about such things.

petehall




msg:704692
 9:40 am on Nov 4, 2005 (gmt 0)

Where does this trust rank theory come from? Is it just something people made up or has G said something?

They made it up.

My basic understanding is that TrustRank is different to PageRank because it takes the age of links into consideration.

New links have little value.

Old links have much value.

New links can be bought in seconds, whereas old links can't, as you can't reverse time!

petehall




msg:704693
 9:46 am on Nov 4, 2005 (gmt 0)

Would you "trust" a recommendation from a company that's been in place for 1 month or would you prefer to "trust" a recommendation from a company that had been in place 10 years?

If someone had been recommending a fine widget store for 5 years you can assume it's a decent resource (or the webmaster never updates his links!).

victor




msg:704694
 10:22 am on Nov 4, 2005 (gmt 0)

Remember the original definition of pagerank is:
PageRank can be thought of as a model of user behavior. We assume there is a "random surfer" who is given a web page at random and keeps clicking on links, never hitting "back" but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank
[www-db.stanford.edu...]

Now that is just plain crazy on the web as it exists today.....Imagine how helpful a city travel guide would be if it started "we assume there is a random tourist who visits streets and back alleys at random"

That random tourist would pretty soon be toes up in the morgue.

On the modern web, people only visit where it is safe to visit; a search engine that invites people to run into dark alleys is a dangerous search engine.

Anything Google does to provide a safe map of the web is of advantage to its users.
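The "random surfer" definition quoted above translates almost directly into a power iteration. This is a toy sketch over a made-up four-page link graph, not anything resembling Google's production system:

```python
# Toy PageRank power iteration for the "random surfer" model.
# Each key is a page; its list holds the pages it links to (made up).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],   # "d" links out but nothing links to it
}

damping = 0.85                      # chance the surfer follows a link
pages = list(links)
n = len(pages)
pr = {p: 1.0 / n for p in pages}    # start from a uniform distribution

for _ in range(50):                 # iterate to (rough) convergence
    # The (1 - damping) share models the surfer getting bored and
    # jumping to a random page chosen uniformly.
    new = {p: (1 - damping) / n for p in pages}
    for p, outs in links.items():
        share = damping * pr[p] / len(outs)
        for q in outs:
            new[q] += share         # pass rank along each outgoing link
    pr = new

print(max(pr, key=pr.get))   # "c" -- the most-linked page ranks highest
```

Note that the random jump here is uniform over all pages; as doc_z points out earlier in the thread, that uniformity is exactly what separates classic PageRank from a seeded scheme.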

walkman




msg:704695
 2:04 pm on Nov 4, 2005 (gmt 0)

>> New links can be bought in seconds where as old links can't as you can't reverse time!

oh great. So this week alone I "bought" a PC World Mag, Wash Post (sort of a reprint of the PCWorld) and a nice .edu link. Not to mention that Yahoo & MSN, among others, republish them.

kaled




msg:704696
 2:44 pm on Nov 4, 2005 (gmt 0)

I don't want to burst bubbles but I have doubts about Google's ability to differentiate old links from new.

Google cannot distinguish duplicate pages from originals and often removes the wrong page. If Google does not keep (reliable) age information for whole pages, I doubt they do so for individual links.

Kaled.

petehall




msg:704697
 2:45 pm on Nov 4, 2005 (gmt 0)

oh great. So this week alone I "bought" a PC World Mag, Wash Post (sort of a reprint of the PCWorld) and a nice .edu link. Not to mention that Yahoo & MSN, among others, republish them.

I used the word bought because that's how people were manipulating PageRank.

Buying in link popularity rather than it occurring naturally.

Perhaps it's unlikely someone would pay ££££ for links if they didn't see any benefit from them for at least 6-12 months... most people would just presume they didn't help rankings at all.

petehall




msg:704698
 2:49 pm on Nov 4, 2005 (gmt 0)

I don't want to burst bubbles but I have doubts about Google's ability to differentiate old links from new.
Google cannot distinguish duplicate pages from originals and often removes the wrong page. If Google does not keep (reliable) age information for whole pages, I doubt they do so for individual links.

So if you gave me a brand new domain name and we pointed all the links from an existing site to it, then it would rank pretty much immediately?

Because if age is not a factor then that would be the case.

walkman




msg:704699
 2:59 pm on Nov 4, 2005 (gmt 0)

>> I don't want to burst bubbles but I have doubts about Google's ability to differentiate old links from new.

That's not hard to do at all. They scan page a and notice a link to page b. They just note that.

doc_z




msg:704700
 3:34 pm on Nov 4, 2005 (gmt 0)

Who is saying that TrustRank takes the age of links into account?!
TrustRank uses the standard transition matrix.

petehall




msg:704701
 4:37 pm on Nov 4, 2005 (gmt 0)

Who is saying that TrustRank takes the age of links into account?!

I am, based on the patent information below.



Information retrieval based on historical data.
A system identifies a document and obtains one or more types of history data associated with the document. The system may generate a score for the document based, at least in part, on the one or more types of history data.
United States Patent Application 20050071741 [appft1.uspto.gov]


doc_z




msg:704702
 5:27 pm on Nov 4, 2005 (gmt 0)

I thought we’re talking about the TrustRank algorithm...

petehall




msg:704703
 5:33 pm on Nov 4, 2005 (gmt 0)

...whereas I thought people were trying to say there was no algorithm and that the system relied on human intervention.

annej




msg:704704
 6:04 pm on Nov 4, 2005 (gmt 0)

It begins with human intervention selecting the seed sites, then the algorithm comes into play.

[dbpubs.stanford.edu:8090...] and scroll down for the abstract

But Google may not be implementing it this way, in fact we aren't certain if they are using it yet.

ownerrim




msg:704705
 6:13 pm on Nov 4, 2005 (gmt 0)

"I have seen compelling examples that appear to show that the text surrounding a link is used not just the link text. It is therefore likely that overall page content also plays some part when evaluating links. Clearly, Google have the technology to implement this - it would simply be a reuse of adsense technology."

This seems like a good way to evaluate the relevancy of a link. And it reinforces the notion of getting links from relevant sub-sections of a directory and getting links via article submissions (i.e., even if this is a way to improve relevancy, it's already been gamed).

MrSpeed




msg:704706
 6:56 pm on Nov 4, 2005 (gmt 0)

I have seen compelling examples that appear to show that the text surrounding a link is used not just the link text. It is therefore likely that overall page content also plays some part when evaluating links. Clearly, Google have the technology to implement this - it would simply be a reuse of adsense technology.

This technology also exists in AdWords. You can now enter a URL and it returns a list of relevant keywords based on the content of the page. It works very well and sometimes it is even clever enough to know which terms are related.

I used to scoff at the various theories about ranking algorithms tossed around here. They seemed WAY too processor-intensive to be practical.

However it does appear there is something going on that is able to determine the value of a link and assign it a weighting factor. Perhaps they can establish the theme of a site where the link is coming from or they do look at the surrounding text.

petehall




msg:704707
 7:50 pm on Nov 4, 2005 (gmt 0)

It begins with human intervention selecting the seed sites then the algorithm comes in to play.
[dbpubs.stanford.edu:8090...] and scroll down for the abstract

Thanks for the link! :)

Very interesting. Now instead of using DMOZ why don't Google create a directory of their own which we can pay to be in and get a little bit of magical trust seed from...

I am seriously shocked they haven't thought of generating some revenue with a paid inclusion directory as yet!

freaky




msg:704708
 9:08 pm on Nov 4, 2005 (gmt 0)

TrustRank Algorithm

A buddy of mine pointed me to a white paper by Zoltan Gyongyi, Hector Garcia-Molina, & Jan Pedersen about a concept called TrustRank.

Human editors help search engines combat search engine spam, but reviewing all content is impractical. TrustRank places a core vote of trust on a seed set of reviewed sites to help search engines separate pages that would be considered useful from pages that would be considered spam. This trust is attenuated to other sites through links from the seed sites.

TrustRank can be used to:

automatically boost pages that have a high probability of being good, as well as demote the rankings of pages that have a high probability of being bad

help search engines identify which pages should be good candidates for quality review

Some common ideas that TrustRank is based upon:

Good pages rarely link to bad ones. Bad pages often link to good ones in an attempt to improve hub scores.

The care with which people add links to a page is often inversely proportional to the number of links on the page.

Trust score is attenuated as it passes from site to site.

To select seed sites they looked for sites which link to many other sites. DMOZ clones and other similar sites created many non-useful seed candidates.

Sites which were not listed in any of the major directories were removed from the seed set; of the remaining sites, only those backed by government, educational, or corporate bodies were accepted as seed sites.

When deciding which sites to review, it is most important to identify high-PR spam sites, since they will be more likely to show in the results and because it would be too expensive to closely monitor the tail.

TrustRank can be bolted onto PageRank to significantly improve search relevancy.

Posted by Aaron Wall (of SEO Book.com) at February 7, 2005 04:03 AM

Ref. [seobook.com...]
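The summary above boils down to a biased PageRank: the random jump goes only to the hand-reviewed seed set, so trust can enter the graph only through seeds and attenuates as it propagates outward. A minimal sketch with an invented graph and seed set (not the paper's actual implementation):

```python
# TrustRank sketch: PageRank with the teleport share concentrated on a
# trusted seed set. The link graph and seed choice are invented.
links = {
    "seed":  ["good1", "good2"],
    "good1": ["good2"],
    "good2": ["seed"],
    "spam":  ["good1"],   # spam links out, but no trusted page links to it
}
seeds = {"seed"}

damping = 0.85
pages = list(links)
trust = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}

for _ in range(50):
    # Unlike PageRank's uniform jump, the (1 - damping) share is split
    # only among the seeds, so trust flows outward from reviewed pages.
    new = {p: ((1 - damping) / len(seeds) if p in seeds else 0.0)
           for p in pages}
    for p, outs in links.items():
        share = damping * trust[p] / len(outs)
        for q in outs:
            new[q] += share
    trust = new

print(trust["spam"])   # 0.0 -- unreachable from the seed, so no trust
```

This also illustrates the asymmetry noted above: the spam page links to good pages, yet gains nothing, because trust only flows along links *from* trusted pages.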
