I disagree, actually. Look at the SERPs from only six months ago, dominated by unthemed links and reciprocals. If TrustRank had been engaged at that time, those sites dominating the charts with thousands of unthemed links would have bombed.
Theming has nothing to do with TrustRank.
Also, it is senseless to decide whether TrustRank is applied or not just by looking at the SERPs. PageRank, TrustRank or whatever is just one off-page factor, and there are about 100 factors which influence the SERPs. (There are better methods.)
Yes, but it possibly includes hundreds of variables, many of which are the off-page factors you are talking about.
"Theming has nothing to do with TrustRank."
Not in establishing which sites are designated as seeds, but it sure does in terms of the value of a prescribed link from an authority site to a non-authority site, and this is precisely what we are talking about.
"Yes, but it possibly includes hundreds of variables, many of which are the off-page factors you are talking about."
TrustRank doesn't include hundreds of variables. There are only a few parameters within the algorithm.
"Not in establishing which sites are designated as seeds, but it sure does in terms of the value of a prescribed link from an authority site to a non-authority site, and this is precisely what we are talking about."
The value of links within the TrustRank algorithm is purely based on the linking structure and has nothing to do with theming (which is an on-page property).
BTW, "authority" within the TrustRank concept means "seeds".
With technology of this sort, whilst it might be possible to trade PR for money, the effect of paid links on actual SERPs should be quite small if everything is working properly (since each paid link would have to be on a relevant page - very tricky).
Essentially, search engines attempt to rank pages according to relevance. This requires contextual analysis. A TrustRank system requires human editors. This contradicts Google's philosophy.
Having said that, a Distrust system could be used to devalue links from bad sites.
NOTE: I did not say anything about applying penalties - I am vehemently opposed to penalties in general.
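To picture what I mean: distrust could seed from known spam pages and flow backwards along links, with the result used only to discount link value, never to penalise. A purely speculative sketch (the decay factor and the whole scheme are my invention, not anything Google has published):

```python
# Hypothetical "distrust" sketch: propagate badness backwards from known
# spam pages, then use it to devalue outgoing links (no penalties applied).

def distrust(outlinks, bad_seeds, decay=0.5, iterations=10):
    """A page inherits a decayed share of the worst distrust among the
    pages it links TO (the reverse of how trust flows forward)."""
    dt = {p: (1.0 if p in bad_seeds else 0.0) for p in outlinks}
    for _ in range(iterations):
        nxt = {}
        for p, targets in outlinks.items():
            base = 1.0 if p in bad_seeds else 0.0
            worst = max((dt.get(q, 0.0) for q in targets), default=0.0)
            nxt[p] = max(base, decay * worst)
        dt = nxt
    return dt

def link_weight(source, dt):
    # A link from a distrusted page simply counts for less; never negative.
    return max(0.0, 1.0 - dt.get(source, 0.0))
```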
Kaled.
I think that TrustRank is exactly the same as PageRank, with the difference that there won't be a TR toolbar where SEOs would buy/sell links according to its value.
But isn't the same applied to PageRank also, in terms of "seeds"? There must be a seed dataset of all PR10 sites in order for the PR formula to work.
No, in the classic PR algorithm the "seeds" are uniformly distributed, i.e. there is no predefined set of PR10 pages; PR10 pages are just a consequence of the linking structure.
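Put differently, the only structural difference is the teleport vector the "random surfer" jumps with. A toy illustration (N and the seed set are placeholders):

```python
N = 5        # toy web of five pages (placeholder)
seeds = {0}  # page 0 is the only hand-reviewed seed (placeholder)

# Classic PageRank: the surfer teleports to every page with equal
# probability, so no page is privileged up front; PR10 emerges from links.
d_pagerank = [1.0 / N] * N

# TrustRank: the surfer teleports only to the hand-picked seed pages.
d_trustrank = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(N)]
```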
I think the algo can flag them by measuring the % of outgoing "bad" links or something. Then, a reviewer could check them again. I know I read the paper, just don't remember exactly.
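Something along these lines, maybe (the threshold here is made up for illustration):

```python
# Possible flagging rule: if a large fraction of a page's outgoing links
# point at known-bad pages, queue the page for human review.

def flag_for_review(outlinks, bad_pages, threshold=0.5):
    flagged = []
    for page, targets in outlinks.items():
        if not targets:
            continue
        bad_ratio = sum(q in bad_pages for q in targets) / len(targets)
        if bad_ratio >= threshold:
            flagged.append((page, bad_ratio))
    return flagged
```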
I think those college students at eval.google play that role...
The topic sounds really good and interesting. But imagine how many keywords there are; let's leave aside the unwanted ones and discuss just the competitive keywords. It would require a huge, huge staff, and reviews done once are not sufficient, as I see at least one new spam site every day for highly competitive keywords. I don't think it is possible.
Basically they show that, by manually identifying fewer than 200 reputable seed pages, they could use the link structure of the web to discover other good pages. They claim they could filter out most of the spam on the web this way. So it seems there would be no hand-checked sites beyond the seed pages.
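For what it's worth, the paper even picks the seed candidates semi-automatically, by what it calls inverse PageRank: run PageRank on the web graph with every link reversed, so pages whose trust would reach the most other pages float to the top, and only that shortlist goes to the human reviewers. A toy sketch of the reversal step:

```python
# Inverse PageRank seed selection, per the paper: reverse every link, rank
# pages on the reversed graph, and hand the top of the list to reviewers.

def reverse_graph(outlinks):
    rev = {p: [] for p in outlinks}
    for p, targets in outlinks.items():
        for q in targets:
            rev.setdefault(q, []).append(p)
    return rev

graph = {'hub': ['a', 'b'], 'a': ['b'], 'b': []}
print(reverse_graph(graph))  # {'hub': [], 'a': ['hub'], 'b': ['hub', 'a']}
# Ordinary PageRank on this reversed graph ranks 'hub' highly, since its
# trust would reach the most pages -- making it a good seed candidate.
```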
Now if Google is really starting to use TrustRank in their search results they may have changed the method somewhat. And we don't even know for sure if they have implemented it. The reason we suspect they might have is that Google has registered a trademark for "TrustRank" and has applied for a patent for it.
But then what do I know about such things.
Where does this TrustRank theory come from? Is it just something people made up, or has G said something?
They made it up.
My basic understanding is that TrustRank is different to PageRank because it takes the age of links into consideration.
New links have little value.
Old links have much value.
New links can be bought in seconds, whereas old links can't, as you can't reverse time!
If someone had been recommending a fine widget store for 5 years you can assume it's a decent resource (or the webmaster never updates his links!).
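If that theory were true, the weighting might look something like this (pure speculation on my part; the one-year figure is invented for illustration):

```python
# Speculative age weighting: a link's value grows as it survives over time,
# so a freshly bought link starts out nearly worthless.

from datetime import date

def link_age_weight(first_seen, today=None, half_gain_days=365):
    """Returns 0.0 for a brand-new link, approaching 1.0 as it ages.
    Reaches 0.5 after half_gain_days (a made-up figure)."""
    today = today or date.today()
    age_days = max(0, (today - first_seen).days)
    return age_days / (age_days + half_gain_days)

# A link that has pointed at the fine widget store since mid-2000:
print(link_age_weight(date(2000, 6, 1), today=date(2005, 11, 3)))  # ~0.84
```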
Now that is just plain crazy on the web as it exists today... Imagine how helpful a city travel guide would be if it started "we assume there is a random tourist who visits streets and back alleys at random".
That random tourist would pretty soon be toes up in the morgue.
On the modern web, people only visit where it is safe to visit; a search engine that invites people to run into dark alleys is a dangerous search engine.
Anything Google does to provide a safe map of the web is of advantage to its users.
Oh great. So this week alone I "bought" a PC World Mag link, a Wash Post link (sort of a reprint of the PC World piece) and a nice .edu link. Not to mention that Yahoo & MSN, among others, republish them.
"Oh great. So this week alone I 'bought' a PC World Mag link, a Wash Post link (sort of a reprint of the PC World piece) and a nice .edu link. Not to mention that Yahoo & MSN, among others, republish them."
I used the word bought because that's how people were manipulating PageRank.
Buying link popularity rather than it occurring naturally.
Perhaps it's unlikely someone would pay ££££ for links if they didn't see any benefit from them for at least 6-12 months... most people would just presume they didn't help rankings at all.
I don't want to burst bubbles, but I have doubts about Google's ability to differentiate old links from new.
Google cannot distinguish duplicate pages from originals and often removes the wrong page. If Google does not keep (reliable) age information for whole pages, I doubt they do so for individual links.
Kaled.
So if you gave me a brand new domain name and we pointed all the links from an existing site to it, then it would rank pretty much immediately?
Because if age is not a factor then that would be the case.
That's not hard to do at all. They scan page A and notice a link to page B. They just note that.
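"Noting it" could be as simple as keeping the first date each link was ever seen (a sketch; the URLs are made up):

```python
# "They just note that": remember the first date each (source, target) link
# pair was crawled. Later crawls never overwrite the original date.

from datetime import date

first_seen = {}  # (source_url, target_url) -> date first observed

def record_links(source_url, target_urls, crawl_date):
    for target in target_urls:
        first_seen.setdefault((source_url, target), crawl_date)

record_links('http://a.example/', ['http://b.example/'], date(2004, 1, 15))
record_links('http://a.example/', ['http://b.example/'], date(2005, 11, 3))
print(first_seen[('http://a.example/', 'http://b.example/')])  # 2004-01-15
```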
Who is saying that TrustRank takes the age of links into account?!
I am, based on the patent information below.
Information retrieval based on historical data.
A system identifies a document and obtains one or more types of history data associated with the document. The system may generate a score for the document based, at least in part, on the one or more types of history data.
United States Patent Application 20050071741 [appft1.uspto.gov]
[dbpubs.stanford.edu:8090...] and scroll down for the abstract
But Google may not be implementing it this way, in fact we aren't certain if they are using it yet.
This seems like a good way to evaluate the relevancy of a link. And it reinforces the notion of getting links from relevant sub-sections of a directory and getting links via article submissions. (I.e., even if this is a way to improve relevancy, it's already been gamed.)
I have seen compelling examples that appear to show that the text surrounding a link is used, not just the link text. It is therefore likely that overall page content also plays some part when evaluating links. Clearly, Google have the technology to implement this - it would simply be a reuse of AdSense technology.
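To make the idea concrete, "using the surrounding text" could be as simple as grabbing a window of text around the anchor (a sketch; the window size is arbitrary):

```python
# Sketch of capturing the text around a link, not just its anchor text.

def link_context(page_text, anchor_text, window=60):
    """Return up to `window` characters either side of the anchor text."""
    pos = page_text.find(anchor_text)
    if pos < 0:
        return ''
    start = max(0, pos - window)
    return page_text[start:pos + len(anchor_text) + window]

text = ("Our review of fine widgets: visit the Widget Store for the best "
        "hand-made widgets and sprockets in town.")
print(link_context(text, 'Widget Store', window=30))
# prints the anchor with ~30 characters of context on either side
```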
This technology also exists in AdWords. You can now enter a URL and it returns a list of relevant keywords based on the content of the page. It works very well, and sometimes it is even clever enough to know what terms are related.
I used to scoff at the various theories tossed around that discussed ranking algorithms. They seemed like they were WAY too processor-intensive to be practical.
However it does appear there is something going on that is able to determine the value of a link and assign it a weighting factor. Perhaps they can establish the theme of a site where the link is coming from or they do look at the surrounding text.
It begins with human intervention selecting the seed sites; then the algorithm comes into play.
"[dbpubs.stanford.edu:8090...] and scroll down for the abstract"
Thanks for the link! :)
Very interesting. Now, instead of using DMOZ, why don't Google create a directory of their own which we can pay to be in and get a little bit of magical trust seed from...
I am seriously shocked they haven't thought of generating some revenue with a paid inclusion directory as yet!