|Google's Second Brain - The Knowledge Graph and the Evolution of Search|
“We all want the Star Trek computer, and what we are building is exactly in that direction, where you can ask it anything and it will proactively tell you things,” says [Amit] Singhal. “That is where this is headed. The Knowledge Graph is one of those key components that are necessary but not sufficient to build it. Likewise with speech recognition. When you put the whole package together, that is how you arrive at the future—and we at Google are far ahead in all of those spaces.”
Google Gets A Second Brain, Changing Everything About Search [xconomy.com]
An in-depth look at what's behind the Knowledge Graph and the direction Google is moving in. *Don't* put this on your TL;DR list; it's important stuff.
Very interesting link.
I just wonder if the one bit of intelligence Google is overlooking is the user. My experience is that ten years ago, if a user was looking for a job, they would just type in "jobs". Now they do far more sophisticated searches, because they understand a bit more about how search works, and type in "part time shop assistant job in Plymouth".
Using Amit Singhal's own example, "Taj Mahal": if a user wants the restaurant around the corner, surely they now understand that they need to type "Taj Mahal restaurant in Ascot", or else they may get the mausoleum in India.
It raises the question: why put all this effort into guessing what people may want or need from minimal information, when you could just give them a better understanding of how the current search systems work and encourage them to define their searches a bit better?
I suspect that this is all about providing the actual information you are looking for, in considerable depth, on the Google search page... disguised as 'improving the search experience'. If they succeed, many sites will never need to be visited at all. If you do the "Taj Mahal" search, the knowledge box tells you the height, phone number, architect, etc., which is probably what many people are initially looking for. The result will be that webmasters who currently produce fascinating websites will never get visited, and they will just give up.
The danger for google is that they are straying into turning their search results pages into topic information pages. Is that what the users really want?
Lately I keep remembering that, from the beginning, Google said their mission was to "organize the world's information." They never said their mission was to run a great search engine. Apparently that was just their first step.
|“We are building our dream search engine,” says Amit Singhal. “We are guiding the company toward the best experience for our users, and we are not really paying attention to whether people will click more on ads or less on ads.” |
Is he on planet Earth? Has he looked at the SERPs lately?
How much of a second brain does it take to mashup content and display it on your site? Does this mean scrapers who do this very thing already are as smart as Google? Sorry, but I still see aggregation from wikipedia et al.
To be fair I looked up Philadelphia and what I learned is that I should make sure to write about things Google won't likely aggregate any time soon, otherwise I will get little traffic from Google.
Amit Singhal is indeed dreaming if he thinks he can essentially steal content and claim the users of that content as his own. When the owners of that content receive no benefit from Google for having been scraped, I imagine they will see Google as they do any other scraper site. Why doesn't Google understand that it can't have all of the world's content without creating any of it itself?
|And, just as Dr. Hfuhruhurr did, you’re probably going to like the new version a lot better. |
Didn't Dr. Hfuhruhurr's wife with the new brain turn out to get incredibly fat? Fitting.
|The result will be that webmasters who currently produce fascinating websites will never get visited, and they will just give up. |
That's a bit chicken-and-egg, because without those sites - or at least the people with the knowledge behind them - Google won't be able to get the new content it needs in order to evolve.
Plus, if it wants to be sure it is an accurate knowledge engine, it needs to be able to verify the information it presents is accurate.
I dunno - to me, common sense dictates they won't be dispensing with our websites any time soon, but hey, I've been wrong before. I don't see the harm in them trying to push boundaries to aid evolution; however, in my opinion, with current technology, their goal is many, many years and possibly several "internal restructures" away.
|brotherhood of LAN|
Google has employed a futurist who perhaps ties in nicely with this topic:
Ray Kurzweil (the "Singularity" guy) joins Google [arstechnica.com]
I'm with Tedster on his idea of Google's mission. Maybe they will need external commercial sources for info, or the net at large, in order to continue providing a useful front end with ads; maybe not. Freebase/knowledge is good for facts, but to be fair, the explosion of sites over the past few years has been more social, more a matter of opinion.
In the end, there will always be a need to search for information online, and there will always be a need to connect with someone/something outwith Google.
This is how I see it.....
Google wants to keep the user on Google. Google is looking at niches and aggregating the information and then calling it Google knowledge. Think about that. "Google Knowledge". Google has no knowledge beyond scraping web sites and building massive data centers. Google knowledge is actually "YOUR KNOWLEDGE".
Right now there is a symbiotic relationship. We produce the content and allow Google to scrape it, so that Google sends the visitor on to us and we might make some coin from the visitor in one way or another.
If Google keeps those visitors to itself, where then is the symbiotic relationship that has built Google to what it is today?
Will webmasters start to block Google, as some big news organisations have?
I think that is coming pretty soon. Maybe not next year or the year after, but it is coming. Webmasters are falling out of love with Google, as Google is biting the hand that feeds it.
|Google wants to keep the user on Google. Google is looking at niches and aggregating the information and then calling it Google knowledge. Google has no knowledge beyond scraping web sites and building massive data centers. Google knowledge is actually "YOUR KNOWLEDGE". |
|How much of a second brain does it take to mashup content and display it on your site? |
Half of us think like this - that Google is just a big advertiser/affiliate mashing up other people's content for its own profit - and the other half thinks like Google's PR people: that they are creating an amazing computer that learns as it goes along and somehow knows what the user wants before they do.
|brotherhood of LAN|
I think the discussions tend to circle around the WIIFM principle to be fair.
It could be considered that the 'organic SERPs' are OT to this anyway.
i don't see anything exciting in that article. take this little bit as an example...
|Today, when you enter a search term into Google, the company kicks off two separate but parallel searches. One runs against the traditional keyword-based Web index, bringing back matches that are ranked by statistical relevance—the familiar “ten blue links.” The other search runs against a much newer database of named entities and relationships. |
Type in the query “Philadelphia,” and this second search will produce a new “knowledge panel” in the right-hand margin of the results page, complete with a map and other basic facts about the city William Penn founded. (Hedging its bets, however, Google will also include a thumbnail of the movie poster from the 1993 Tom Hanks film Philadelphia.) To use Google’s own description, the new database helps the search engine understand “things, not strings.”
This second brain is called the Knowledge Graph.
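To make the article's "two parallel searches" concrete, here is a toy sketch - not Google's actual implementation, and with entirely made-up data - of a keyword index that returns ranked links alongside an entity graph that returns typed facts ("things, not strings"):

```python
# Hypothetical illustration of the article's two parallel searches.
# All site names, entities, and properties below are invented for the example.

# 1) Traditional keyword index: query string -> ranked "blue links".
KEYWORD_INDEX = {
    "philadelphia": ["city-guide.example", "phillies.example", "movie-fansite.example"],
}

# 2) Entity graph: named things with typed properties, not just strings.
ENTITY_GRAPH = {
    "Philadelphia (city)": {"type": "City", "founder": "William Penn", "state": "Pennsylvania"},
    "Philadelphia (film)": {"type": "Film", "year": 1993, "star": "Tom Hanks"},
}

def search(query):
    """Run both lookups in parallel (conceptually) and return both result sets."""
    links = KEYWORD_INDEX.get(query.lower(), [])
    panels = {name: props for name, props in ENTITY_GRAPH.items()
              if query.lower() in name.lower()}
    return links, panels

links, panels = search("Philadelphia")
```

The point of the sketch is only the separation: the first lookup ranks documents by string match, while the second knows that "Philadelphia" names two distinct entities with distinct properties - which is why the results page can show both a city panel and a movie thumbnail.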
to a cynic like me, that just says this: when people search for "philadelphia", most of them end up clicking on links about the city rather than the movie. google knows this, and that is why it puts extra info about the city next to the serps.
that is all that is happening. there's nothing special about it. if a word has more than one meaning, google just measures which one gets the most clicks and sticks some extra info about it at the side.
all this talk about "second brains" is just a load of PR nonsense to try and get people talking about it.
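The cynical reading above amounts to a one-liner: tally historical clicks per interpretation and surface the winner. Here is a minimal sketch of that theory (the click counts are invented, and this is the poster's conjecture, not a description of Google's actual ranking):

```python
# Toy version of "measure which meaning gets the most clicks":
# a majority vote over a (fabricated) click log for an ambiguous query.
from collections import Counter

click_log = ["Philadelphia (city)"] * 870 + ["Philadelphia (film)"] * 130

def dominant_sense(clicks):
    """Return the interpretation users click most often."""
    return Counter(clicks).most_common(1)[0][0]

panel_topic = dominant_sense(click_log)
```

If that really were all that is happening, `panel_topic` would decide which entity gets the knowledge panel - no "second brain" required.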
|In essence, Google’s engineers are building toward a future when the company’s famous “I’m Feeling Lucky” option is all you need, and the search engine returns the right result the first time, every time. |
All that needs to be said in almost every one of these articles I read. Thanks for posting, I love these.
Google is an advertising company. Period. They need places to display ads. Period. They need people to click ads. Period. They are in the business of distraction. Period. The more distracting links that can be populated around a searcher's original query, the more opportunity to click on something else that may lead to a page with more ads on it. It's not complicated to figure out.
|The knowledge base Metaweb built is called Freebase, and it’s still in operation today. It’s a collaborative database—technically, a semantic graph—that grows through the contributions of volunteers, who carefully specify the properties of each new entity and how it fits into existing knowledge categories. |
Another prime example of Google exploiting the open source and volunteer communities. Beyond that obvious point, it also suggests they are not having much success building this knowledge graph algorithmically, so in desperation they go out, find yet another pool of talented individuals, buy them, and claim the innovation under the Google banner. They closed their innovation department, Google Labs, didn't they?
Clearly they recognized the value of Freebase but how did Freebase come to be what it was? Through human input.
|While Freebase is now hosted by Google, it’s still open to submissions from anyone, and the information in it can be freely reused under a Creative Commons license. |
Indeed. One of the wealthiest companies on the planet has no moral objection to profiting from unpaid volunteer labour.
|But Giannandrea is careful to point out that Metaweb wasn’t trying to build an AI system. “We explicitly avoided hard problems about reasoning or complicated logic structures,” he says. “We just wanted to build a big enough data set that it could be useful. |
I don't know much about the individual quoted, or about the Metaweb company, but they at least seem to have enough common sense to know that organizing human knowledge requires a degree of reasoning - something a computer will never be able to do.
For the sake of debate, let's assume they do succeed in putting together whatever they are trying to accomplish here. What then? What is it really?
A knowledge graph is based on HISTORY, and at the speed of the Internet, history is old news. It is only useful for comparison, not for bringing important new insights to light quickly, as they are born in people's minds. For that, I'm realizing, there are #hashtags on Twitter. People push ideas - not on me, just out there - and I can search for unfolding ideas long before any search engine becomes "aware" of them.
Just ask any newspaper company these days how much they have suffered since the birth of the Internet. So really, what is a knowledge graph other than the glorified front-page headlines of a newspaper?
So many people are seriously losing their ability to think for themselves. Too many poor fish are gobbling this stuff up hook, line and sinker.
A second brain? Yeah - they are eating ours.
Isn't this actually the de-evolution of search? Ignoring the web ecosystem in order to put forth a google version of "knowledge"?
I am seeing a decline in traffic. It could have several causes, of course, but one thing I noticed lately is that they pushed some knowledge into the SERPs for my keywords. Looking at Google Analytics' Search Engine Optimization graph, I saw that "average position in SERP" stayed the same and "number of impressions in SERP" stayed the same... but the CTR dropped from 25% to 12%.
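The arithmetic behind that observation is worth spelling out: with position and impressions unchanged, a CTR drop from 25% to 12% means roughly half the clicks. A quick sketch, using an illustrative (hypothetical) impression count:

```python
# CTR = clicks / impressions. Position and impressions held constant,
# so the CTR drop translates directly into lost visits.
# The 10,000-impression figure is illustrative, not from the post.

impressions = 10_000
clicks_before = round(impressions * 0.25)  # clicks at 25% CTR
clicks_after = round(impressions * 0.12)   # clicks at 12% CTR
lost_per_10k = clicks_before - clicks_after
```

So for every 10,000 impressions, the site goes from 2,500 visits to 1,200 - over half the search traffic gone without the rankings moving at all, which is exactly the "answer on the SERP" effect the thread has been worrying about.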
|The panel may include factoids |
<fe> If they do say so themselves. </fe>
:: remaining 90% of post snipped due to utter futility and pointlessness ::
|something a computer will never be able to do |
I seem to recall people saying computers would never play chess well. Are we now better off with arbitrage programs running in major financial centers?
We are living in times that have never seen a parallel in the history of mankind, I find it fascinating. There's good, there's bad, but it's really interesting to watch it unfold.
Guess Facebook must have done something right, as the old Star Trek PR spoofery is being used again, this time by Google. Microsoft also demoed a speech interface/real-time translator a while ago that could have some implications. I think people still want the option to search, rather than simply being given what Google thinks they want, and this is going to cost Google market share.