TrustRank is the admission that algorithms can't make reliable choices about quality in a commercial, real-world scenario. ;)
I haven't heard of this trust rank, what is it, how does it apply, and is there a way to find out one's "trust rank"? More info please!
Not sure if you can measure TrustRank beyond common-sense guessing, but how it works is... human editors help search engines combat search engine spam, but reviewing all content is impractical. TrustRank places a core vote of trust on a seed set of reviewed sites to help search engines distinguish pages that would be considered useful from pages that would be considered spam. This trust is attenuated as it passes to other sites through links from the seed sites.
In short: TrustRank is a modified PageRank algorithm, where the seed sources are selected by hand instead of being uniformly distributed over the whole internet. Additionally, there might be negative sources which propagate backwards (distrust).
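The idea described above can be sketched as a small biased-PageRank iteration. This is a toy illustration under invented assumptions (the graph, the damping factor, the iteration count are all made up here), not Google's actual implementation; negative/distrust propagation is omitted.

```python
# Toy sketch of TrustRank as personalized PageRank: trust is injected
# only at hand-reviewed seed pages instead of uniformly over all pages.

def trustrank(graph, seeds, damping=0.85, iterations=50):
    """graph: {page: [pages it links to]}; seeds: hand-reviewed good pages."""
    pages = list(graph)
    # The teleport vector concentrates all mass on the seed set.
    seed_mass = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(seed_mass)
    for _ in range(iterations):
        new = {p: (1 - damping) * seed_mass[p] for p in pages}
        for p in pages:
            links = graph[p]
            if links:
                share = damping * trust[p] / len(links)
                for q in links:
                    new[q] += share  # trust attenuates as it flows down links
            # dangling pages simply leak their trust in this toy version
        trust = new
    return trust

# Toy web: 'a' is a reviewed seed linking to 'b'; nothing links to 'c',
# so 'c' never accumulates any trust.
g = {"a": ["b"], "b": ["a"], "c": ["a"]}
scores = trustrank(g, seeds={"a"})
```

Pages reachable from the seed ('a' and 'b') end up with positive trust, while the unreferenced page 'c' stays at zero — which is the whole point of seeding trust by hand.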
Thanks for starting this thread. I also asked in the update thread about TrustRank. Maybe we can learn something here. Is TrustRank just a theoretical idea being discussed as a possible element in the Google ranking process, or is there evidence that this is something they have included or are in the process of including?
You can apply trust rank purely internally too. Not saying they have done it, I'm just saying it surprises me nobody brings up the topic of internal TrustRank. Of course that would be algo-based and not human-review-based.
Factors such as the pages you link to are given weight. Human-evaluated authority sites are given more linking power.
It would be interesting to have a program that would tell you how far away you are (how many links away) from obvious authority sites in your industry.
For me it is likely to be microsoft dot com
My definition of TrustRank.
A scheme devised by search engine designers to allow results to be biased by editors whilst at the same time claiming that the results are generated algorithmically and free from editorial bias.
No, you are correct, that seems to be G's and my definition also!
|My definition of TrustRank. |
Well, here's something that might be of interest regarding this.
We are a Google News source site and over the last 2 months we have been up and down like a yo-yo.
E.g. on Aug 28 we went from getting a few good headlines a day to being top story on everything we ran, and remaining on sub-headlines for days (traffic went up 150,000 uniques per day). Then overnight we went to never appearing on front pages or even sub-pages — rather, back down the list.
I mailed them and they said auto etc. But the next day it started again. Then Sept 22 it dived and has not recovered since. Now sites that write up stories based on our story end up way above us. Almost like one person switched us up and then someone had a look and switched us back down.
I kinda thought it might relate somehow to serps. But now I think there must be a manual factor.
For instance earlier this year they changed the way results are displayed, from the latest from anyone to sites with more trust...I guess. We did OK out of this as quite a big site.
I also noticed for a few hours last week we went back to top lining.
Also of interest is that the story itself is given the weight. So even after we bombed on Sept 22 the stories from days before still ranked.
I also wonder if they are applying the technique to serps in some way...
Anyway, I would guess that trustrank is based on either an editorial review or a panel of some sort or maybe it is auto from age of site, links etc. My only caveat would be that since Sept 22 we have added new sources, writers, etc. So quality has gone up, yet SERPS and G News positioning at worst position ever.
The PDF linked to from the first page mentioned "a good seed of 200 websites".
Does anyone have a guess as to how many websites might actually be used? Do you feel there are a few in each industry?
200 websites obviously narrows it down to very large sites.
The paper also said something about only being able to apply this to entire websites (not individual pages) due to processing-power constraints.
Might this partly explain why whole sites have been disappearing for all searches when individual pages should still come up?
If Google are as smart as they would like us to think they are, in reality, TrustRank is nothing more than disinformation intended to confuse SEOs and the competition (Yahoo and MS).
Fundamentally, it's a concept that is totally unnecessary if other algos are working properly and could get Google into a public relations nightmare if it ever transpired that money was changing hands based upon it.
Having said that, I don't rate Google brainpower as highly as others seem to do so I guess it's a coin toss as to whether it really exists or not.
Incidentally, since we are expected to believe that "Page Rank" is named after Lawrence Page, is there a man called Jedediah Trust working at the Plex these days?
As long as webmasters are able to buy their "authority" links, the idea of trust rank will never succeed.
Is there actually such a thing as TrustRank, or are we still just guessing?
Your reading of this is spot on. Google's idea of ranking web sites according to link popularity was conceived in a vacuum. An academic vacuum. In the real world of a market economy they've had to do some fancy tap dancing to accommodate the basic assumptions of that model.
Academic and noncommercial web sites do generally support the link popularity theory. If a site A is a good one, B will link to A out of courtesy and "pure" intentions. I'm guessing that only a relatively small percentage of those types of sites trade links. One may occasionally read in this forum how webmasters of noncommercial sites are affronted by our crass commercialism.
But the day those first two commercial site owners discovered that trading links would raise SERPs, the Google model broke down. The very act of swapping links to raise SERPs is anathema to the model. They just haven't figured a way to stop it.
Now we have people buying links-- a complete distortion of the Google link popularity model. But didn't I read in this forum a few months back that Google Guy gave a tacit approval to the notion of buying links? Was it because they haven't found a way to combat it yet and don't want to admit it?
I've been reading posts in other places offering that the new Jagger update is the beginning of the end for reciprocal links — specifically because the reciprocal link is a bastardization of the Google link popularity model — and that the trusted site idea is the way Google will kill reciprocal linking. If that is so, then those webmasters who have managed early to set up their cosy little cross-linking cartels and who are stealthily buying links will stay at the top.
I just don't see how the pure link popularity model can hold in the real world.
|I just don't see how the pure link popularity model can hold in the real world. |
Me neither. But on the subject of "Trust Rank", I'm all for it and it makes great sense. However, I have never seen any evidence of anything like that in the results I watch.
"I read in this forum a few months back that Google Guy gave a tacit approval to the notion of buying links? Was it because they haven't found a way to combat it yet and don't want to admit it?"
Also, it would be very difficult in most instances for google to know if a link was purchased or not. For instance, you can buy your way into a directory or, if you're willing to wait a bit, get a free admission. There's no way for a SE to determine just how it is you came by that "democratic" vote.
Relevance is different from PageRank which is different from TrustRank. Webmasters often confuse these 3 things, but they are 3 very different factors, representing 3 totally different things to a search engine.
"Signals of Quality" is also something that Google engineers have discussed. My guess is that Signals of Quality are part of TrustRank, but I'm only guessing. Signals of quality would probably include things that are mentioned in some of Google's patents, like if a web site is hosted by a "trusted host" or the length of time that a web site's domain name is registered for.
Isn't Google in the business of selling links alongside their SERPs?
Why would they look down at others who do the same? (If they do)
I think that a lot of the ideology of Google gets in the way of running a competitive business. Google had their IPO and are now responsible to shareholders to fight off their considerable competitors.
If you have a potential advertiser that is a global company, it's hard for them to understand why they are not top of the results for their core business. I guess TrustRank helps solve that.
But I would like to see these things done in the algo, as I think human interference ruins the results. I still personally don't see any difference between trusted content and high PageRank. Except PageRank is algo-driven.
Where does this trust rank theory come from? Is it just something people made up, or has G said something?
I think it came from one of the patent applications.
Given that Google is the center of its own universe presumably it therefore has a high TrustRank.
Sites running Adwords are human reviewed sites which google links to.
Sites in Froogle are human reviewed sites which google includes in its Froogle directory.
As I understand it, these inclusions don't confer any rank of any sort, trust or otherwise?
It seems strange Google doesn't in some way use this ready pool of human-reviewed-ness.
"If Google are as smart as they would like us to think they are, in reality, TrustRank is nothing more than disinformation intended to confuse SEOs and the competition (Yahoo and MS)."
Possibly, but I would actually interpret this as already being in use to some extent. I have already seen what I can interpret to be some of its effects, if only to a small degree.
In its entirety, it could also be feasible under the following conditions:
Theming from authority sites to outbound links.
Loss of trust rank for an authority, on a given scale, based on outbound links to unrelated, poor-choice sites (thus putting a damper on 'purchased' links). In this fashion, those who purchased links from sites that were in the business of selling many outbounds would inevitably be purchasing links with less credibility. Also, the number of outbound links per page, the receiving page's theme and site theme, along with the originating link site, would all play factors in the overall TrustRank score.
Manual review of those seeds over time to evaluate whether they are still authority sites. As any site can turn sour, so can authority sites, and so they must also be reviewed. This would include the possibility of de-trusting a site, or including others, upon periodic review.
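The outbound-link damping idea in the conditions above could be modeled very roughly like this. Everything here — the formula, the "related fraction" notion, and the example numbers — is invented for illustration; it is one guess at the scheme being described, not any known Google formula.

```python
# Hypothetical model of the outbound-link damping idea: a page that
# hosts many outbound links, few of them topically related, should
# pass along far less trust per link than an editorial on-topic page.

def damped_trust(base_trust, outbound_links, related_fraction):
    """Trust passed per link, discounted by link count and relatedness."""
    # More outbound links per page -> each individual link carries less trust.
    per_link = base_trust / max(outbound_links, 1)
    # A low fraction of related targets further discounts what is passed.
    return per_link * related_fraction

# A link-seller page with 100 mostly-unrelated outbounds vs. an
# editorial page with 5 on-topic links (numbers are illustrative).
seller = damped_trust(1.0, 100, 0.1)
editorial = damped_trust(1.0, 5, 0.9)
```

Under this toy model, a purchased link from a high-volume link seller passes orders of magnitude less trust than an on-topic editorial link — which is exactly the damping effect the poster speculates about.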
TrustRank has nothing to do with the current changes (update). Google modified its PageRank algorithm into a TrustRank algorithm years ago.
I disagree, actually. Look at the SERPs only 6 months ago, dominated by unthemed links and reciprocals. If TrustRank were engaged at that time, those sites dominating the charts with thousands of unthemed links would have bombed.
Instead, they achieved top ranks.
No, TrustRank is arriving and being implemented as we speak, imho...