Moderators: Robert Charlton, aakk9999, brotherhood of lan, goodroi

Google SEO News and Discussion Forum

This 53 message thread spans 2 pages; this is page 2.
"Trustrank" Coming Soon?
Should we get ready for some big changes?
WebFusion

10+ Year Member



 
Msg#: 29209 posted 1:19 pm on Apr 26, 2005 (gmt 0)

"TrustRank" was filed with the USPTO about a month ago. Interestingly, members of the Stanford Database Group have written a paper about the use of "TrustRank" to combat web spam that we blogged about in early March. Makes you wonder if the implementation of TrustRank™ will be something coming soon from the GooglePlex. Stay tuned.

[blog.searchenginewatch.com...]
[webmasterworld.com...]
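For readers curious what the Stanford paper actually proposes, here is a toy sketch of the trust-propagation idea — essentially a biased PageRank where all initial trust sits on a hand-vetted seed set and decays as it flows along outlinks. This is an illustration of the paper's concept, not Google's actual implementation; the link graph, page names, and seed choice below are invented.

```python
# Toy trust propagation in the spirit of "Combating Web Spam with TrustRank":
# trust starts on a hand-picked seed set and flows along outlinks with a decay
# factor, like a PageRank biased toward the seeds.

DAMPING = 0.85      # decay per hop, analogous to PageRank's damping factor
ITERATIONS = 50     # plenty for this tiny graph to converge

def trustrank(links, seeds, damping=DAMPING, iterations=ITERATIONS):
    """links: {page: [pages it links to]}; seeds: hand-vetted trusted pages."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    # All initial trust is concentrated on the seed set.
    static = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(static)
    for _ in range(iterations):
        trust = {
            p: damping * sum(trust[q] / len(outs)
                             for q, outs in links.items() if p in outs)
               + (1 - damping) * static[p]
            for p in pages
        }
    return trust

links = {
    "edu-seed": ["faculty-page", "student-junk"],
    "faculty-page": ["useful-site"],
    "student-junk": ["warez-site"],   # trust leaks to spam via a careless seed
}
scores = trustrank(links, seeds={"edu-seed"})
```

Note that the trust reaching "warez-site" equals the trust reaching "useful-site" — the propagation step can't tell them apart, which is exactly the gaming concern several posters in this thread raise.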

 

flyerguy

10+ Year Member



 
Msg#: 29209 posted 12:36 am on Apr 30, 2005 (gmt 0)

SEO is more sociology than computer science. That's why no one surfing the 'insider' forums of the net is ever going to find the holy grail: code and human nature don't mix.

TrustRank sounds like a laxative brand name. Police ethics cannot be applied to data; just ask the Borg.

BillyS

WebmasterWorld Senior Member billys us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 29209 posted 1:31 am on Apr 30, 2005 (gmt 0)

PFI may be practical for commercial sites, but what about the vast numbers of .edu, .org, .gov, open-source, and labor-of-love sites that wouldn't shell out for (and probably wouldn't be aware of) a fee-based QC program?

Doesn't matter. Do any of these sites focus on PageRank today? Just because the site is .edu does that make it any better than a .com site?

I've seen lots of bad .orgs and many labor of love sites. Do they seek out PageRank? No one is forcing anyone to shell out money in this model. There are no guarantees. If the quality standards are tough enough perhaps no additional points will be gained - perhaps there could be a downside too. Perhaps these sites are the standards against which others are rated... This is a concept.

And, just like today, PageRank is not the be-all and end-all; it is just one factor (albeit a big one). Call it PageRank or TrustRank: without someone actually looking at a site, both approaches are subject to gaming.

Let's walk through a quick example of how TrustRank might work... Perhaps an .edu is assigned as a seed. Does that mean all of the student junk (and a lot of it really is junk) is good? Links to sites where you can steal software or music... Should students be relied upon to vote for other sites? Have you ever tried to get a link from an .edu? I have. I have about 100 articles on my site targeted to college students (helping them find jobs out of school, for example). I've made hundreds of requests and I've gotten three links.

The mistake that these people make is that they conduct their experiment today, then they say - Ahh look the results are better than PageRank. Sure they are, that's because spammers understand how PageRank works and that has already tainted the results. To make a fair comparison, they would have to test against a dataset that existed before PageRank existed. When the game changes to TrustRank, guess what? Smart people will figure out how to game that too.

This is the mistake they are making. Sure it produces better results NOW, but over the long haul we are right back to this PageRank problem.

I am surprised at Google, but I understand why this happens because I work at a large company, just like Google is today. They are afraid to let go of a concept that once worked. The emperor has no clothes and no one wants to say it out loud. The idea was a good concept until the web was monetized. The rules of engagement have changed, but Google holds onto the past. You cannot fight this new war with old weapons; the result will be the same.

Albert Einstein:
What is insanity? Doing the same thing over and over again and expecting different results.

europeforvisitors



 
Msg#: 29209 posted 2:35 am on Apr 30, 2005 (gmt 0)

Doesn't matter. Do any of these sites focus on PageRank today? Just because the site is .edu does that make it any better than a .com site?

I've seen lots of bad .orgs and many labor of love sites. Do they seek out PageRank? No one is forcing anyone to shell out money in this model

The point isn't whether an .edu site is better than a .com site or vice versa, but that an .edu site (to use just one example) isn't going to be shelling out PFI fees the way commercial sites will. Google's stated mission is "to organize the world's information and make it universally accessible," and giving special treatment to sites that pay for reviews would compromise Google's value to users (who, after all, provide the eyeballs that pay the bills).

In any case, if TrustRank is about using a relatively small number of handpicked "seed sites" to jumpstart the QC process, there's no need for (or any point in having) a PFI program.

BeeDeeDubbleU

WebmasterWorld Senior Member beedeedubbleu us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 29209 posted 7:07 am on Apr 30, 2005 (gmt 0)

Perhaps an .edu is assigned as a seed.

And perhaps not. Perhaps they will put a bit of thought into what they use as seed material? ;)

doc_z

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 29209 posted 12:04 pm on Apr 30, 2005 (gmt 0)

Google has used PR seeds for years.

europeforvisitors



 
Msg#: 29209 posted 3:38 pm on Apr 30, 2005 (gmt 0)

Please explain to me how this is REALLY any different than PageRank.

From the information we've seen, it would appear to be an extension and improvement on PageRank--i.e., PageRank with a dose of quality control built in.

It is still a system that relies on votes and this can be easily manipulated. You can buy links from a high PR site today, you can buy TrustRank from a seed tomorrow.

Google would probably disagree with your use of the word "easily." The goal, obviously, would be to pick seed sites that don't sell links. Does anyone here seriously believe that FORBES would auction off its "Best of the Web" links or that PC Magazine would sell its "Top 100 Web sites" links to the highest bidder? Or that academic librarians at top universities are going to take money under the table for links to bobs-discount-hotels-and-scraped-adsense-links.com?

I think the concept is interesting; I'm more concerned with whether it would actually work. Putting theory into practice isn't always easy, as anyone can see from looking at Google's SERPs.

europeforvisitors



 
Msg#: 29209 posted 4:28 pm on Apr 30, 2005 (gmt 0)

Perfect response, I love it. Have you ever tried to get a link from Forbes, PC Magazine or a top university? They don't link and they don't care about links.

I've got two inbound links from FORBES and inbound links from a number of university libraries, so your statement that "they don't link and they don't care about links" is contrary to my own experience.

If your website makes PC Magazine's Top 100 - might you be willing to sell links?

I wouldn't be, but some might.

You can try to convince yourself that TrustRank is better - it's not because it is really the same thing.

It isn't "the same thing," it's an extension of the PageRank concept.

TrustRank will fall faster than PageRank because everyone has already figured out how to manipulate PageRank. This experience will accelerate the fall of TrustRank.

As I understand the TrustRank concept, it will be harder to manipulate than PageRank because of the greater weight given to links from trusted sites.

Besides, your suggestion makes no sense at all. Why in the world would Google rely on Forbes or PC Magazine to pick websites? Google could do that themselves (like I said, they should create their own directory...)

Sounds like they don't want to. It's their search engine, so they get to decide. :-)

Boaz

10+ Year Member



 
Msg#: 29209 posted 6:38 pm on Apr 30, 2005 (gmt 0)

Actually, IMO it just may be that TrustRank has been used as a factor for some time now. Just think about this: isn't TrustRank as good an explanation as any for why, lots of times, lower-PR pages rank better than high-PR pages? (I am not talking about those cases where, after a toolbar PR update, one sees that they actually had a higher PR.) Sure, one can shout "PR is dead" as much as one wants, but this explanation makes more sense to me. TrustRank already being in use may actually explain a number of other observed phenomena as well.

fearlessrick

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 29209 posted 4:07 am on May 28, 2005 (gmt 0)

PFI may be practical for commercial sites, but what about the vast numbers of .edu, .org, .gov, open-source, and labor-of-love sites that wouldn't shell out for (and probably wouldn't be aware of) a fee-based QC program?

This is why god created salesmen.

zulufox

10+ Year Member



 
Msg#: 29209 posted 4:56 am on May 28, 2005 (gmt 0)

Bring it on :)

MikeNoLastName

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 29209 posted 9:01 pm on May 30, 2005 (gmt 0)

Perhaps there needs to be two (or more) selectable search engine result modes: one for people looking strictly for educational information (i.e., ban all sites selling something or even linking to sellers) and another strictly for commercial/retail sites, for people LOOKING to buy something right now. That way the retailers compete only with affiliates, and labor-of-love sites compete only with educational ones.
This happens in many other applications. You have network TV vs Educational TV stations and pay-per-view. You have academia (more or less) separated from industry and marketing ploys. Church vs. state. Kiddie vs. Adult movies and restaurants. Smoking vs. non-smoking, etc.
Things just naturally seem to work out better in the long run that way and everyone seems to get along.

jimbeetle

WebmasterWorld Senior Member jimbeetle us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 29209 posted 9:48 pm on May 30, 2005 (gmt 0)

Perhaps there needs to be two (or more) selectable search engine result modes. One for people looking strictly for educational information (i.e. ban all sites selling something or even linking to sellers) and another strictly for commercial/retail sites for people LOOKING to buy something right now

This Yahoo Mindset [webmasterworld.com] thread will be of great interest to you then.

mrMister

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 29209 posted 2:38 pm on Jun 1, 2005 (gmt 0)

Perhaps there needs to be two (or more) selectable search engine result modes.

You'd always have to have a combined search though, and that would be the default, which is what the vast majority of users would use.

Take regional searching, for example.

europeforvisitors



 
Msg#: 29209 posted 4:16 pm on Jun 1, 2005 (gmt 0)

You'd always have to have a combined search though

Why? There could be a single index with the results being weighted according to the user's indicated preference, e.g.:

"I want information about:"

"I want to buy:"

Same index, same pages on the SERPs, but a subtle reordering of the results so that (for example) someone entering "I want information about [Widgetco WX-1 camera]" would see manufacturer pages and reviews on the first few pages and someone entering "I want to buy [Widgetco WX-1 camera]" would get dealer and affiliate pages at the top of the list.

This wouldn't keep "commercial" sites from having "information" pages or vice versa; the goal would be to deliver pages that were relevant to the user's needs, regardless of their source.
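The reordering europeforvisitors describes can be sketched in a few lines: one index, one result set, with the same pages weighted differently depending on the user's declared intent. The URLs, relevance numbers, and per-page "informational"/"commercial" scores below are all invented for illustration.

```python
# Hypothetical single-index reranking: the same pages, reordered by intent.

results = [
    # (url, relevance, informational_score, commercial_score)
    ("widgetco.example/wx1-specs",  0.9, 0.9, 0.1),
    ("camerareviews.example/wx1",   0.8, 0.8, 0.2),
    ("dealer.example/buy-wx1",      0.8, 0.2, 0.9),
    ("affiliate.example/wx1-deals", 0.7, 0.1, 0.9),
]

def rerank(results, intent):
    """intent: 'information' or 'buy' -- same index, different ordering."""
    col = 2 if intent == "information" else 3
    return sorted(results, key=lambda r: r[1] * r[col], reverse=True)

info_top = rerank(results, "information")[0][0]  # manufacturer specs page first
buy_top = rerank(results, "buy")[0][0]           # dealer page first
```

No page is excluded in either mode; the weighting just surfaces manufacturer and review pages for "I want information about" and dealer/affiliate pages for "I want to buy", which is the subtle reordering described above.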

longcall911

5+ Year Member



 
Msg#: 29209 posted 5:07 pm on Jun 1, 2005 (gmt 0)

My preference would be to have 2 different search buttons. One is labeled "Search for information" the other is "Search for Products and Services".

It would not be necessary to exclude one, but rather, provide the appropriate results first.

vabtz



 
Msg#: 29209 posted 5:12 pm on Jun 1, 2005 (gmt 0)

My preference would be to have 2 different search buttons. One is labeled "Search for information" the other is "Search for Products and Services".

Google is an information search engine.
Google released Froogle a shopping search engine.

voilà! All your problems solved!
$1,349.00 please... and please try to remember, I only accept American dollars.

BeeDeeDubbleU

WebmasterWorld Senior Member beedeedubbleu us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 29209 posted 8:53 am on Jun 2, 2005 (gmt 0)

Joe Surfer doesn't want two search engines. Choosing the type of results that you want from a single search engine is the real solution.

It's a no brainer. All they have to do is put a couple or more selectors or radio buttons at the top of the page that let you screen out the stuff you don't want to see.

mickeymart

5+ Year Member



 
Msg#: 29209 posted 9:00 am on Jun 2, 2005 (gmt 0)

google itself is turning into junk... i doubt any new initiatives will help rather than hurt their situation.

If trustrank is what I think it is, it can still be gamed by humans, and thus is no solution at all.

mrMister

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 29209 posted 9:16 am on Jun 2, 2005 (gmt 0)

Why? There could be a single index with the results being weighted according to the user's indicated preference, e.g.:

"I want information about:"

"I want to buy:"

Most of the time I want both.

I don't subscribe to the idea that an online store doesn't provide information about their products. I use Amazon.com for information as much as I do for buying.

If I were forced to limit my search I would go elsewhere for my search engine needs.

BeeDeeDubbleU

WebmasterWorld Senior Member beedeedubbleu us a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 29209 posted 9:19 am on Jun 2, 2005 (gmt 0)

You wouldn't have to limit your search. This would be an option.

helleborine

10+ Year Member



 
Msg#: 29209 posted 12:04 pm on Jun 2, 2005 (gmt 0)

I think TrustRank is too difficult to implement. How much do you reward each little hint of trustworthiness? It gets touchy-feely and gobbles up too much computing power.

MIStrust Rank, on the other hand, is simple. The algo can impose a -150 rank penalty to anything it doesn't like.

Bourbon!

europeforvisitors



 
Msg#: 29209 posted 1:57 pm on Jun 2, 2005 (gmt 0)

I don't subscribe to the idea that an online store doesn't provide information about their products.

It's all a matter of emphasis and the user's mindset. The solution I suggested involved weighting of the index, not splitting it down the middle. So there's no reason why, for example, an REI page on types of canoes wouldn't fall on the information side while a canoe sales page with a shopping-cart button fell on the commercial side.

It's possible that the "how" of the solution isn't as important as having a solution. The Web is far bigger today than it was when Google and the other major search engines were created, so it makes sense to help searchers prequalify or presort the results for a given search. The current approach is simply too unwieldy for searches that yield many thousands or even millions of results.

aleksl



 
Msg#: 29209 posted 7:56 pm on Jun 2, 2005 (gmt 0)

I second BillyS; Google$ is just not innovating.

I just went through the example of TrustRank calculation, and even there a "bad" page that is linked to by a "good" page is ranked higher than that "good" page. Here you go - a number of "good" sites get destroyed in the process of weeding out spam. And a good few spammers become Trusted. Sounds like someone had too many Allegra+Bourbon shots.

All spammers have to do is what they do now to beat PageRank - spam for "good" links.
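aleksl's objection can be shown with a few lines of arithmetic (the trust values and outdegrees below are invented): under split-and-dampen trust propagation, a spam page that solicits links from several trusted pages accumulates more incoming trust than a legitimate page that receives only one such link.

```python
# Minimal numeric illustration of gaming trust propagation with solicited links.

DAMPING = 0.85  # per-hop decay, as in PageRank-style propagation

def incoming_trust(parents, damping=DAMPING):
    """Trust reaching a page: damped sum over (parent_trust, parent_outdegree)."""
    return damping * sum(trust / outdeg for trust, outdeg in parents)

# Spam page with links begged/bought from three trusted pages
# (each with trust 0.2 and 5 outlinks)
spam = incoming_trust([(0.2, 5), (0.2, 5), (0.2, 5)])

# Legitimate page with a single trusted parent
legit = incoming_trust([(0.2, 5)])
```

Each extra solicited link from a trusted page adds the same increment, so the spam page ends up with three times the trust of the honest one — the same arms race the thread predicts for PageRank.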

All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved