The result will be that webmasters who currently produce fascinating websites will see their sites go unvisited, and they will just give up.
Google wants to keep the user on Google. Google is looking at niches, aggregating the information, and then calling it Google knowledge. Google has no knowledge beyond scraping websites and building massive data centers. Google knowledge is actually "YOUR KNOWLEDGE".
How much of a second brain does it take to mash up content and display it on your site?
Today, when you enter a search term into Google, the company kicks off two separate but parallel searches. One runs against the traditional keyword-based Web index, bringing back matches that are ranked by statistical relevance—the familiar “ten blue links.” The other search runs against a much newer database of named entities and relationships.
Type in the query “Philadelphia,” and this second search will produce a new “knowledge panel” in the right-hand margin of the results page, complete with a map and other basic facts about the city William Penn founded. (Hedging its bets, however, Google will also include a thumbnail of the movie poster from the 1993 Tom Hanks film Philadelphia.) To use Google’s own description, the new database helps the search engine understand “things, not strings.”
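The two-track design described above can be sketched in a few lines. This is a toy illustration with made-up data, not Google's implementation: one lookup runs against a keyword index to produce ranked links, a second runs against an entity database to produce the facts behind a knowledge panel, and the two run in parallel.

```python
# Toy sketch of the "two parallel searches" described above.
# All data and function names here are illustrative, not Google's.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical keyword index: query term -> ranked list of page URLs
# (the "ten blue links" side).
KEYWORD_INDEX = {
    "philadelphia": [
        "https://en.wikipedia.org/wiki/Philadelphia",
        "https://www.visitphilly.com/",
    ],
}

# Hypothetical entity database: a name -> structured facts about the
# thing itself (the "knowledge panel" side -- things, not strings).
ENTITY_DB = {
    "philadelphia": {
        "type": "City",
        "founder": "William Penn",
        "country": "United States",
    },
}

def keyword_search(query):
    return KEYWORD_INDEX.get(query.lower(), [])

def entity_lookup(query):
    return ENTITY_DB.get(query.lower())

def search(query):
    # Kick off both lookups in parallel and merge the results
    # into one results page.
    with ThreadPoolExecutor(max_workers=2) as pool:
        links = pool.submit(keyword_search, query)
        panel = pool.submit(entity_lookup, query)
        return {
            "blue_links": links.result(),
            "knowledge_panel": panel.result(),
        }

result = search("Philadelphia")
```

Here both lookups answer the same string, but only the entity side "knows" that Philadelphia is a city founded by William Penn, which is what feeds the panel in the right-hand margin.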
This second brain is called the Knowledge Graph.
In essence, Google’s engineers are building toward a future when the company’s famous “I’m Feeling Lucky” option is all you need, and the search engine returns the right result the first time, every time.
The knowledge base Metaweb built is called Freebase, and it’s still in operation today. It’s a collaborative database—technically, a semantic graph—that grows through the contributions of volunteers, who carefully specify the properties of each new entity and how it fits into existing knowledge categories.
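A semantic graph of the kind described here is commonly stored as (subject, property, object) triples, with each new entity tied into existing ones through its properties. The sketch below shows that general idea with invented names; it is not Freebase's actual schema or API.

```python
# Minimal sketch of a semantic graph: facts as (subject, property, object)
# triples. Names and schema are illustrative, not Freebase's.
from collections import defaultdict

class SemanticGraph:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, prop, obj):
        # A contributor asserts one fact,
        # e.g. ("Philadelphia", "founded_by", "William Penn").
        self.triples.add((subject, prop, obj))
        self.by_subject[subject].add((prop, obj))

    def properties(self, subject):
        # Everything the graph knows about one entity.
        return set(self.by_subject[subject])

    def find(self, prop, obj):
        # Reverse lookup: which entities have this property value?
        return {s for (s, p, o) in self.triples if p == prop and o == obj}

g = SemanticGraph()
g.add("Philadelphia", "type", "City")
g.add("Philadelphia", "founded_by", "William Penn")
g.add("Philadelphia", "located_in", "Pennsylvania")
g.add("Pennsylvania", "type", "State")
```

Because entities link to each other (Philadelphia is located in Pennsylvania, which is itself an entity), queries can traverse relationships rather than just match strings, which is the whole point of a graph over a keyword index.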
While Freebase is now hosted by Google, it’s still open to submissions from anyone, and the information in it can be freely reused under a Creative Commons license.
But Giannandrea is careful to point out that Metaweb wasn’t trying to build an AI system. “We explicitly avoided hard problems about reasoning or complicated logic structures,” he says. “We just wanted to build a big enough data set that it could be useful.”
something a computer will never be able to do