"TrustRank" was filed with the USPTO about a month ago. Interestingly, members of the Stanford Database Group have written a paper about the use of "TrustRank" to combat web spam that we blogged about in early March. Makes you wonder if the implementation of TrustRank™ will be something coming soon from the GooglePlex. Stay tuned.
[blog.searchenginewatch.com...]
[webmasterworld.com...]
PFI may be practical for commercial sites, but what about the vast numbers of .edu, .org, .gov, open-source, and labor-of-love sites that wouldn't shell out for (and probably wouldn't be aware of) a fee-based QC program?
Doesn't matter. Do any of these sites focus on PageRank today? Just because the site is .edu does that make it any better than a .com site?
I've seen lots of bad .orgs and many labor of love sites. Do they seek out PageRank? No one is forcing anyone to shell out money in this model. There are no guarantees. If the quality standards are tough enough perhaps no additional points will be gained - perhaps there could be a downside too. Perhaps these sites are the standards against which others are rated... This is a concept.
And, just like today, PageRank is not the end-all; it is just one factor (albeit a big one). Call it PageRank or call it TrustRank: without someone actually looking at a site, both approaches are subject to gaming.
Let's walk through a quick example of how TrustRank might work... Perhaps an .edu is assigned as a seed. Does that mean all of the student junk (and a lot of it really is junk) is good? Links to sites where you can steal software or music... Should students be relied upon to vote for other sites? Have you ever tried to get a link from an .edu? I have. I have about 100 articles on my site targeted at college students (helping them find jobs out of school, for example). I've made hundreds of requests and I've gotten three links.
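For anyone who hasn't read the Stanford paper, here's roughly the mechanism it describes, in a simplified sketch of my own (biased PageRank over a hand-picked seed set - not anything Google has published as an implementation):

```python
# Rough sketch of TrustRank-style propagation, simplified from the
# Stanford paper. Trust starts at hand-reviewed seed pages and is
# split across outlinks, decaying each hop via the damping factor.

def trustrank(pages, links, seeds, beta=0.85, iterations=20):
    """pages: list of page ids; links: dict mapping a page to the pages
    it links to; seeds: set of hand-reviewed "good" pages."""
    # All initial/teleport trust is concentrated on the seed set.
    d = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(d)
    for _ in range(iterations):
        new = {p: (1.0 - beta) * d[p] for p in pages}  # teleport back to seeds
        for p in pages:
            out = links.get(p, [])
            for q in out:
                new[q] += beta * trust[p] / len(out)   # trust splits over outlinks
        trust = new  # pages with no outlinks simply leak trust in this sketch
    return trust
```

The only structural difference from plain PageRank is that the teleport vector is concentrated on the seed set instead of spread uniformly across the whole web - which is exactly why everything hinges on the seeds. Pick an .edu full of student junk as a seed, and all of that junk inherits trust.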
The mistake that these people make is that they conduct their experiment today, then they say - ahh, look, the results are better than PageRank's. Sure they are, because spammers understand how PageRank works, and that has already tainted the results. To make a fair comparison, they would have to test against a dataset from before PageRank existed. When the game changes to TrustRank, guess what? Smart people will figure out how to game that too.
This is the mistake they are making. Sure it produces better results NOW, but over the long haul we are right back to this PageRank problem.
I am surprised at Google, but I understand why this happens because I work at a large company, just like Google is today. They are afraid to let go of a concept that once worked. The emperor has no clothes and no one wants to say it out loud. The idea was a good concept until the web was monetized. The rules of engagement have changed, but Google holds onto the past. You cannot fight this new war with old weapons; the result will be the same.
Albert Einstein: "What is insanity? Doing the same thing over and over again and expecting different results."
Doesn't matter. Do any of these sites focus on PageRank today? Just because the site is .edu does that make it any better than a .com site?
I've seen lots of bad .orgs and many labor of love sites. Do they seek out PageRank? No one is forcing anyone to shell out money in this model.
The point isn't whether an .edu site is better than a .com site or vice versa, but that an .edu site (to use just one example) isn't going to be shelling out PFI fees the way commercial sites will. Google's stated mission is "to organize the world's information and make it universally accessible and useful," and giving special treatment to sites that pay for reviews would compromise Google's value to users (who, after all, provide the eyeballs that pay the bills).
In any case, if TrustRank is about using a relatively small number of handpicked "seed sites" to jumpstart the QC process, there's no need for (or any point in having) a PFI program.
Please explain to me how this is REALLY any different than PageRank.
From the information we've seen, it would appear to be an extension and improvement on PageRank--i.e., PageRank with a dose of quality control built in.
It is still a system that relies on votes and this can be easily manipulated. You can buy links from a high PR site today, you can buy TrustRank from a seed tomorrow.
Google would probably disagree with your use of the word "easily." The goal, obviously, would be to pick seed sites that don't sell links. Does anyone here seriously believe that FORBES would auction off its "Best of the Web" links or that PC Magazine would sell its "Top 100 Web sites" links to the highest bidder? Or that academic librarians at top universities are going to take money under the table for links to bobs-discount-hotels-and-scraped-adsense-links.com?
I think the concept is interesting; I'm more concerned with whether it would actually work. Putting theory into practice isn't always easy, as anyone can see from looking at Google's SERPs.
Perfect response, I love it. Have you ever tried to get a link from Forbes, PC Magazine or a top university? They don't link and they don't care about links.
I've got two inbound links from FORBES and inbound links from a number of university libraries, so your statement that "they don't link and they don't care about links" is contrary to my own experience.
If your website makes PC Magazine's Top 100 - might you be willing to sell links?
I wouldn't be, but some might.
You can try to convince yourself that TrustRank is better - it's not because it is really the same thing.
It isn't "the same thing," it's an extension of the PageRank concept.
TrustRank will fall faster than PageRank because everyone has already figured out how to manipulate PageRank. This experience will accelerate the fall of TrustRank.
As I understand the TrustRank concept, it will be harder to manipulate than PageRank because of the greater weight given to links from trusted sites.
Besides, your suggestion makes no sense at all. Why in the world would Google rely on Forbes or PC Magazine to pick websites? Google could do that themselves (like I said, they should create their own directory...)
Sounds like they don't want to. It's their search engine, so they get to decide. :-)
Perhaps there need to be two (or more) selectable search engine result modes: one for people looking strictly for educational information (i.e., ban all sites selling something or even linking to sellers) and another strictly for commercial/retail sites, for people LOOKING to buy something right now.
This Yahoo Mindset [webmasterworld.com] thread will be of great interest to you then.
You'd always have to have a combined search, though.
Why? There could be a single index with the results being weighted according to the user's indicated preference, e.g.:
"I want information about:"
"I want to buy:"
Same index, same pages on the SERPs, but a subtle reordering of the results so that (for example) someone entering "I want information about [Widgetco WX-1 camera]" would see manufacturer pages and reviews on the first few pages and someone entering "I want to buy [Widgetco WX-1 camera]" would get dealer and affiliate pages at the top of the list.
This wouldn't keep "commercial" sites from having "information" pages or vice versa; the goal would be to deliver pages that were relevant to the user's needs, regardless of their source.
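For what it's worth, a minimal sketch of how that weighting could work - the field names and scoring here are entirely made up for illustration, not how any engine actually does it:

```python
# Hypothetical intent-weighted reordering over a single index. Assumes
# each result already carries a base relevance score plus separate
# informational/commercial scores (invented fields for this sketch).

def rerank(results, intent):
    """results: list of dicts with "relevance", "info_score", and
    "commerce_score" keys; intent: "information" or "buy"."""
    def weighted(result):
        bias = (result["info_score"] if intent == "information"
                else result["commerce_score"])
        # Same index, same pages - only the ordering shifts with the bias.
        return result["relevance"] * (1.0 + bias)
    return sorted(results, key=weighted, reverse=True)
```

Same pages come back either way; "I want to buy" just lets the dealer and affiliate pages float toward the top.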
My preference would be to have 2 different search buttons. One is labeled "Search for information" the other is "Search for Products and Services".
Google is an information search engine.
Google released Froogle, a shopping search engine.
Voilà! All your problems solved!
$1349.00, please... and please try to remember, I only accept American dollars.
It's a no brainer. All they have to do is put a couple or more selectors or radio buttons at the top of the page that let you screen out the stuff you don't want to see.
Why? There could be a single index with the results being weighted according to the user's indicated preference, e.g.:
"I want information about:"
"I want to buy:"
Most of the time I want both.
I don't subscribe to the idea that an online store doesn't provide information about their products. I use Amazon.com for information as much as I do for buying.
If I were forced to limit my search I would go elsewhere for my search engine needs.
I don't subscribe to the idea that an online store doesn't provide information about their products.
It's all a matter of emphasis and the user's mindset. The solution I suggested involved weighting of the index, not splitting it down the middle. So there's no reason why, for example, an REI page on types of canoes couldn't fall on the information side while a canoe sales page with a shopping-cart button falls on the commercial side.
It's possible that the "how" of the solution isn't as important as having a solution. The Web is far bigger today than it was when Google and the other major search engines were created, so it makes sense to help searchers prequalify or presort the results for a given search. The current approach is simply too unwieldy for searches that yield many thousands or even millions of results.
I just went through the example of TrustRank calculation, and even there a "bad" page that is linked to by a "good" page is ranked higher than that "good" page. Here you go - a number of "good" sites get destroyed in the process of weeding out spam, and a good few spammers become Trusted. Sounds like someone had too many Allegra+Bourbon shots.
All spammers have to do is what they do now to game PageRank - spam for "good" links.
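For the skeptical, here's a toy graph where exactly that inversion happens. The graph, the beta value, and the iteration count are all made up for illustration, using the same propagation idea as the sketch earlier in the thread:

```python
# Toy graph: seed S vouches for three decent pages G1..G3, and each of
# those links (say, via guestbook or comment spam) to the same bad page B.
links = {"S": ["G1", "G2", "G3"], "G1": ["B"], "G2": ["B"], "G3": ["B"]}
pages = ["S", "G1", "G2", "G3", "B"]
beta = 0.85

trust = {p: 0.0 for p in pages}
trust["S"] = 1.0  # S is the hand-picked seed
for _ in range(20):
    new = {p: 0.0 for p in pages}
    new["S"] = 1.0 - beta  # teleport returns trust to the seed
    for p, outs in links.items():
        for q in outs:
            new[q] += beta * trust[p] / len(outs)
    trust = new

print(trust)
# roughly {"S": 0.15, "G1": 0.043, "G2": 0.043, "G3": 0.043, "B": 0.108}
```

The bad page pools the trust of everyone vouching for it and ends up scoring higher than any single "good" page that links to it - exactly the inversion described above.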