


Google's Knowledge Graph: Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources

     
3:25 pm on Mar 3, 2015 (gmt 0)

Administrator from GB 


joined:May 9, 2000
posts:26469
votes: 1079


When Google's Knowledge Graph first appeared in the SERPs, many webmasters and marketers cried foul. In some respects they were justified: loss of SERP real estate, and greater emphasis on what Google assesses as "the answer." It wasn't to everyone's liking, but it's the future of knowledge search, so we'd better get used to it.

We're constantly having discussions, both positive and negative, about link development, and how Google's Knowledge Graph uses links for credibility.

While many marketers stuck to the premise of building links by the thousand, it's become even clearer that link quality far outweighs link volume, and savvy marketers have been on this route for many years. Webmasters are also helping Google towards its goal of quality through the disavow tool: if a webmaster agrees to disavow a link, that confirms Google's assessment. In rare instances, I've disavowed links that Google hadn't highlighted but which, on assessment, were questionable.

The possibility of the SERPs shifting from being significantly link-based to placing greater emphasis on trustworthiness might be closer than some think.

Here's a paper on "Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources"
The quality of web sources has been traditionally evaluated using exogenous signals such as the hyperlink structure of the graph. We propose a new approach that relies on endogenous signals, namely, the correctness of factual information provided by the source. A source that has few false facts is considered to be trustworthy. The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases. We propose a way to distinguish errors made in the extraction process from factual errors in the web source per se, by using joint inference in a novel multi-layer probabilistic model. We call the trustworthiness score we computed Knowledge-Based Trust (KBT). On synthetic data, we show that our method can reliably compute the true trustworthiness levels of the sources. We then apply it to a database of 2.8B facts extracted from the web, and thereby estimate the trustworthiness of 119M webpages. Manual evaluation of a subset of the results confirms the effectiveness of the method.
Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources [arxiv.org]


Here's the full PDF: Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources [arxiv.org]. It really is worth reading and understanding, and I'm happy to discuss this topic.
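To make the core idea of the paper a bit more concrete, here's a minimal, hypothetical Python sketch of the intuition only: alternate between inferring the most likely value of each fact from trust-weighted votes, and re-scoring each source by how many of its claims match that consensus. The site names and data are made up for illustration, and this single-layer loop is nothing like the paper's actual multi-layer probabilistic model, which also separates extraction errors from genuine factual errors.

# Toy illustration only: a single-layer iterative estimate, not the paper's
# multi-layer model, and it ignores extraction errors entirely.

from collections import defaultdict

# Hypothetical sample data: each source asserts (subject, attribute) -> value.
claims = {
    "siteA.example": {("obama", "nationality"): "USA",
                      ("eiffel tower", "city"): "Paris"},
    "siteB.example": {("obama", "nationality"): "USA",
                      ("eiffel tower", "city"): "London"},
    "siteC.example": {("eiffel tower", "city"): "Paris"},
}

def estimate_trust(claims, iterations=10):
    """Alternate between (1) inferring the most likely value of each fact
    from trust-weighted votes and (2) re-scoring each source by the fraction
    of its claims that agree with the inferred truth."""
    trust = {source: 0.5 for source in claims}  # start every source neutral
    for _ in range(iterations):
        # Step 1: trust-weighted vote for the value of each (subject, attribute).
        votes = defaultdict(lambda: defaultdict(float))
        for source, facts in claims.items():
            for key, value in facts.items():
                votes[key][value] += trust[source]
        truth = {key: max(vals, key=vals.get) for key, vals in votes.items()}

        # Step 2: a source's trust is the share of its claims matching the
        # current consensus (the "few false facts" intuition behind KBT).
        for source, facts in claims.items():
            correct = sum(1 for k, v in facts.items() if truth[k] == v)
            trust[source] = correct / len(facts)
    return trust, truth

if __name__ == "__main__":
    trust, truth = estimate_trust(claims)
    print(trust)
    print(truth)

Run it and siteB.example ends up with the lowest trust score, because it's the only source making a claim that disagrees with the consensus. That's the flavour of it: trust comes from the correctness of what a source says, not from who links to it.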

I doubt links will ever disappear entirely from Google's algorithm, but I'm pretty certain they'll carry less and less weight as the Knowledge Graph improves and moves into other sectors of the SERPs.

There's a great discussion on "How would you know if links were no longer important to Google? [webmasterworld.com]"
By establishing the efficacy and value of links, and their importance to Google, we can start to identify the impact of any Knowledge Graph-based trust.

Many have talked about TrustRank and Knowledge-Based Trust for some time, but surely this, in whatever form you wish to describe it, is a way to measure and ultimately assess a site, and then a page.

We've seen Google's Panda filter hit "thin sites." How about taking the results of that filter, improving it, honing it, and creating a database of the top authorities, then using that to assess whether a link is credible or trusted? It's not unknown for search engines to have a trusted source of some sort: Looksmart, Inktomi, and even DMOZ played a part in adding some form of trust to the various search engine databases. The idea being that if you got into that trusted database, you weren't a crash-and-burn site. Those systems failed, for all kinds of different reasons, and not entirely as a result of the Internet and web going through their growing pains. The web is still going through those growing pains, and I'm sure there will be continued and ongoing experimentation to provide better quality search and to deliver what the user needs. Remember, Google wants what the user needs, not the webmaster ranking number 1 in the SERPs.

We're about to see Google initiate its "Smartphone" update, which rolls out on April 21. [webmasterworld.com] It's being tested right now, with labels being applied to smartphone SERPs. If your site appears in the smartphone SERPs with the wrong kind of label, you're going to feel a greater impact after that date. Now is the time to be working on fixing that.

We're just reaching a new phase in search, imho: new ranking signals such as knowledge-based trust, the expansion of Google's Knowledge Graph into other sectors, and the diversification of desktop and smartphone SERPs, which up until now have been very similar. And don't forget local: local is going to continue to become more important.
10:09 am on Mar 10, 2015 (gmt 0)

Senior Member


joined:Aug 30, 2002
posts: 2663
votes: 112


Perhaps the real problem for Google, after the Snowden revelations and the series of screwups that killed all those Mom and Pop webstores, is that fewer and fewer people trust them. The sad thing about the spam issue is that it is a lot easier to solve than it appears, but the kind of insular thinking in Google doesn't allow them to think outside the box.

Regards...jmcc