| This 43 message thread spans 2 pages |
|How the 'Intelligent Cloud' may Change Google Search|
| 6:54 pm on Sep 18, 2008 (gmt 0)|
The Official Google Blog has a thoughtful post about the "Intelligent Cloud" that the internet is becoming, and how the whole face of search will shift as technology adapts to it.
|As we're already seeing, people will interact with the cloud using a plethora of devices: PCs, mobile phones and PDAs, and games. But we'll also see a rush of new devices customized to particular applications, and more environmental sensors and actuators, all sending and receiving data via the cloud... |
We could train our systems to discern not only the characters or place names in a YouTube video or a book, for example, but also to recognize the plot or the symbolism. The potential result would be a kind of conceptual search: "Find me a story with an exciting chase scene and a happy ending..."
As systems are allowed to learn from interactions at an individual level, they can provide results customized to an individual's situational needs: where they are located, what time of day it is, what they are doing. And translation and multi-modal systems will also be feasible, so people speaking one language can seamlessly interact with people and information in other languages.
Google Blog [googleblog.blogspot.com]
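The "conceptual search" idea in that quote can be sketched in miniature (a toy example of my own, not anything Google has actually described): instead of matching the literal words of a query, the engine matches concept tags assigned to each work. The catalog and tags below are invented for illustration.

```python
# Toy sketch of "conceptual search" (hypothetical catalog and tags --
# nothing here comes from Google): match concept tags, not literal words.
CATALOG = {
    "The Great Escape": {"chase scene", "war", "bittersweet ending"},
    "Ferris Bueller's Day Off": {"chase scene", "comedy", "happy ending"},
}

def conceptual_search(required_concepts):
    """Return every title whose tag set covers all required concepts."""
    return [title for title, tags in CATALOG.items()
            if required_concepts <= tags]

print(conceptual_search({"chase scene", "happy ending"}))
```

The hard part the blog post gestures at is, of course, getting a machine to assign those tags in the first place.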
[edited by: tedster at 12:16 am (utc) on Sep. 19, 2008]
| 2:54 pm on Sep 20, 2008 (gmt 0)|
|then such services are contributing significantly to Google's "commercial success." |
That is true. It is just difficult to judge exactly how much each service contributes. Some of those services are certainly contributing less than others - Google Video, for example, was a complete failure, which is why Google had to buy YouTube. Microsoft is a very good example - if you look at them you will find a lot of similar things, which is why I often say that Google today is the Microsoft of the '90s.
When a company is very successful in its primary product, nobody really cares about loss leaders. However, when market conditions change and money becomes tight - when the primary cash cow can't be milked any further - that's when big changes start to happen. Google is not yet at this stage, though you can already see how they make changes to squeeze more money from their main cash cows.
| 3:53 pm on Sep 20, 2008 (gmt 0)|
Regarding the idea of how Google could profit from its vast amount of data by creating an AI...
this whole thing is in full reverse.
Google doesn't have to/need to/want to create an AI.
And they didn't say they would, either.
Create an AI that can navigate with ease in the environment called 'human imagination' ( you know the thing most call 'reality' )? Sure. But realities are ( as whitenight said ) way too complex for this. You can't guess all circumstances by averaging data.
What we want to see from Google is not:
'have someone think for me in the environment I call my personality'
but rather:
'create an environment for me in which I can make decisions more easily and for the better'
And they know this.
I consider both the Intelligent Cloud and the Internet itself as an inter-connected external memory device ( complete with memory retrieval systems ) and NOT the AI that would be able to assign (our individual) ever-current values to these memories.
Depending on where you are, and where you want to go from there, the 'meaning' ( 'value' ) of plans drawn up from memories changes every 0.125 seconds.
Google is in the business of IR and not AI, and everyone at the company knows this.
[edited by: Miamacs at 3:57 pm (utc) on Sep. 20, 2008]
| 8:19 pm on Sep 20, 2008 (gmt 0)|
> Google is in the business of IR and not AI, and everyone at the company knows this.
Yep. Bad habit of mine not to have read the whole blog post before posting (which I have now done): the paragraphs tedster quoted might, however, lead to the erroneous assumption that Google is aiming at just these aspects of AI.
|We could train our systems to discern not only the characters or place names in a YouTube video or a book, for example, but also to recognize the plot or the symbolism |
It's mainly about pattern recognition, where both research fields overlap.
But pattern recognition at the level of an ancient Greek tragedy is a completely different matter from re-recognizing banknotes. Machines can't even drive my car, so don't talk to me about interpreting the symbolism of dramas.
|Researchers across medical and scientific fields can access massive data sets and run analysis and pattern detection algorithms that aren't possible today. The proposed Large Synoptic Survey Telescope (LSST), for example, may generate over 15 terabytes of new data per day! |
Dear Googlers: pattern-recognition algorithms tend to get lost in the infinitesimal nirvana of graph theory very, very quickly. It is not a matter of two orders of magnitude of performance progress. As I have repeatedly speculated, for very similar reasons PageRank nowadays is no longer calculated the way it was until 2004.
The critical community is watching you ;)
Nevertheless, I find the perspectives of "the intelligent cloud" as fascinating as the authors do: within our lifespan, the number of interconnected computers will reach the number of cells comprising a human brain. Definitely time for a new qualitative step of evolution - though we as individuals will hardly be able to understand the consequences any longer.
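The combinatorial point about pattern recognition is easy to make concrete with a back-of-the-envelope sketch (my own toy numbers, purely illustrative): a naive subgraph matcher looking for a k-node pattern in an n-node graph faces up to n!/(n-k)! candidate node mappings, so modest growth in n swallows far more than two orders of magnitude of extra hardware.

```python
from math import perm  # perm(n, k) = n! / (n - k)!

# Illustrative only: candidate injective node mappings a naive subgraph
# matcher must consider for a 10-node pattern as the graph grows.
k = 10
for n in (20, 40, 80):
    print(f"n={n}: {perm(n, k):.3e} candidate mappings")
```

Doubling the graph from 20 to 40 nodes multiplies the candidate count by more than a factor of a thousand, which is the sense in which raw performance progress alone doesn't rescue these algorithms.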
| 4:55 pm on Sep 22, 2008 (gmt 0)|
Whitenight - I, too, place my faith in humans. The key difference between you and me, however, is that I still have mine. Yes, Google is a Company. Yes, that company is not the same anymore. However, you seem to forget how many products Google works on and develops. I don't believe innovation there has been stifled; they started gobbling up tons of great developers for a reason, and it ain't to fine-tune some algorithm.
I think you're taking my entire post out of context. I'm talking about the potential for Google to develop a form of Artificial Intelligence through a combination of pattern recognition, their data storage techniques and their ability to crawl the web to gather new data at an alarming rate.
On their data storage alone I see them moving at least somewhat in this direction; they applied for a patent, and in my brief review of it, it sounded as though they used a categorization system. I can imagine it's more than possible to adjust that to categorize objects and such. Admittedly, though, I could be entirely wrong on the way their system works.
|Better yet, creating AI and then only using it to get higher profit margins on their ADS. |
Exactly. Don't pretend the potential for greater profits can't act as a conduit for innovation. I feel comfortable saying that Google has done more than any other corporation to advance the world. Take, for instance, purchasing some of the analog airwaves to provide free internet access across the country. Sure, it might ultimately help their bottom line, but nobody ever said that profit and progress were mutually exclusive.
One last note: unless the doubling of computer power continues as it has (and it won't forever, unless quantum computing works out REALLY well, or something else happens. Like chicken feathers.), I doubt any individual geek or nerd will stumble onto the secret; it will be someone with a lot of funding and time. Google or researchers (both?) could be the ones to do it, but not some kid in his mom's basement (unless he develops a distributed computing neural network that spreads like a virus over the internet; that could be interesting, but it sounds too much like the plot of the Terminator).
Oliver - I love your posts. :)
I'm not sure what you mean when you ask me to attempt to describe the syntactic structures of my post. If you could clarify (sorry, I'm an idiot at times), I'd be more than happy to oblige.
I think I see what you're getting at, though, and my proposal is not that Google will immediately be able to understand the meaning of a sentence; rather, my thought is that it will be a slow though gradually accelerating project.
Recognizing the shape of an object would be one step, recognizing the basic motion of an action another. Ultimately Google would at least be able to illustrate a scene like "Jack throws the ball." Admittedly, though, I wouldn't know the next step from interpreting to understanding; I imagine, though, it would come from a machine's ability to make observations. More importantly, I think such an endeavor as understanding would require a neural network acting as at least a middle man between the various parts of the overall system.
Regardless of whether or not Googlebot ever became sentient under such a scenario, the steps forward such an endeavor would take would still be worthwhile - and if the Intelligent Cloud pervades, it could potentially serve as a working database for future projects in the field.
| 8:08 pm on Sep 22, 2008 (gmt 0)|
|Whitenight - I, too, place my faith in humans. The key difference between you and me, however, is that I still have mine |
This is where you differ with every prominent philosopher since the beginning of time. There's a reason the term "mob mentality" has a negative connotation.
Group of people = lowest common denominator for agreement.
Single individual = highest common ideal for effective manifestation.
I suggest reading up on some Seth Godin or Ken Wilber (or even Plato) on why companies inevitably squash true innovation.
|I think you're taking my entire post out of context. I'm talking about the potential for Google to develop a form of Artificial Intelligence through a combination of pattern recognition, their data storage techniques and their ability to crawl the web to gather new data at an alarming rate. |
No, i understand you just fine.
Without getting into a doctorate level paper...
AI first needs AWARENESS, and then EMOTION, and then AWARENESS of being a SEPARATE SELF.
All this happens WAAAAY before we even get into assigning "values and meaning" to that which is perceived.
This is how the HUMAN brain as a "machine" works.
You can verify this in every child from birth to about the age of 7-10.
So unless Google has gained "god-like" powers (which many people unduly assign to Google and their success), AI ain't happening soon.
|it's more than possible to adjust that to categorize objects and such |
Again, "categorizing objects" is what EVERY computer has been able to do. This is a simple 0s to 1s process that STILL needs human input to assign VALUE.
|Google or researchers (both?) could be the ones to do it, but not some kid in his mom's basement (unless he develops a distributed computing neural network that spreads like a virus over the internet; that could be interesting, but it sounds too much like the plot from the Terminator). |
You're missing that Google as a COMPANY is not set up to "create" such a thing. It has to do with their INTRINSIC (ie value) organizational structure of a "company"... especially the way ALL COMPANIES are operating at this point in time.
I mentioned to MC in a recent thread to read Ken Wilber's work about their COMPANY philosophy if they want to "change their world".
Google, (and ALL major companies for that matter) don't have the necessary infrastructures to CREATE ANYTHING truly monumental in terms of "advancing the human race" to any great degree.
A small 5-10 person company has a slightly better chance.
A SINGLE person (with ample funding) has an exponentially better chance (assuming the chances are less than 0.00000002% to begin with)
If you don't understand why I come to this reasoning, I suggest that you read up on Ken Wilber, or simply look at the VAST majority of TRULY great inventions and discoveries throughout human history.
It's not mere coincidence that predominantly SINGLE individuals invent, discover, and create the most lasting, impactful THINGS to advance the human race.
|I wouldn't know the next step from interpreting to understanding; I imagine, though, it would come from a machine's ability to make observations. |
Bingo! It's about AWARENESS.
You're jumping ahead to 7-year old human logic-emotion "value setting" and you haven't even "created" AWARENESS - SENTIENCE - SELF IDENTIFICATION yet!
It took the Universe 5 billion years (or God, with infinite power and omniscience) to create this.
But Goog can do it?
Lol, I'm sorry if I sound "mean" but one can NOT create what one does NOT understand.
Humans still don't even understand the process that creates AWARENESS, i.e. what creates a sentient being, i.e. what makes a human different from an ape, different from a cow, different from a snake, different from a virus, etc.
As Oliver tried to point out,
You're trying to place cognitive science, dynamic systems theory, semiology, etc onto/overlapping/in place of, phenomenology and hermeneutics.
This is impossible. It's like asking the question:
"How many meters is the color red?"
You, like most post-structuralists, keep trying to fit a cube into a circular hole while insisting "it's scientifically possible",
not realizing IT has nothing to do with "science"... at least not the way you are viewing "science".
Is the "dream of AI" unreachable?
OF COURSE NOT.
But until one understands the necessary infrastructure and VALUE systems (see above) needed to create this, it ain't happening soon.
I can tell you for a fact GOOGLE (or ANY COMPANY) is nowhere NEAR capable of "discovering" this breakthrough with their current semiological worldview.
| 8:54 pm on Sep 22, 2008 (gmt 0)|
As I read that article, Google sees that the cloud is already in the early stages of development. They are talking about making it usefully searchable, and not "creating" it. Although they are certainly involved with helping to grow it.
I agree that it is more likely that future paradigm shifts will originate with individuals or small groups - but you know, even big companies can do that (think 3M) by encouraging the entrepreneurial spirit.
How human beings have worked historically as groups is not necessarily hardwired. In fact, to cope with the emerging information cloud, I'd say companies will NEED to evolve into modes of functioning that are brand new. Google has made some relatively modest gestures in that direction.
So I'd say that what we've "always" seen as behavior from large groups of people is not the limit of what we CAN see.
| 9:10 pm on Sep 22, 2008 (gmt 0)|
|How human beings have worked historically as groups is not necessarily hardwired. In fact, to cope with the emerging information cloud, I'd say companies will NEED to evolve into modes of functioning that are brand new. Google has made some relatively modest gestures in that direction. |
Post-structuralism at its best. You have it REVERSED, Ted!
As humans EVOLVED, they created the information-cloud. Not vice-versa.
The cart doesn't pull the horse.
And human beings as a group evolve VERY SLOWLY.
(Although this seems to be speeding up)
Which takes me back to my original premise.
A single individual can evolve VERY QUICKLY if they so choose, thereby gaining all the necessary (and HIGHLY EVOLVED) phenomenological, hermeneutic, semiological, epistemological, and dynamic-science cognition and understanding to develop, discover, or "stumble upon" a truly useful way of utilizing the "information cloud" or "AI" -
INSTEAD of creating the next "nuclear bomb" or
its "information cloud" equivalent...
I would say the odds of someone like this "working" at ANY company are about 1 in 12 billion (that's more than there are people on Earth).
|So I'd say that what we've "always" seen as behavior from large groups of people is not the limit of what we CAN see. |
No one said otherwise, but the lowest common denominator WILL "draw down" the level of advancement.
Again, back to humans as a GROUP evolving very slowly.
And a "company", as they are currently structured, making it impossible.
Or do you want a doomsday scenario to FORCE humans to evolve via a crucial bifurcation chaos point, one that leaves humans with a high probability of extinction?
[edited by: whitenight at 9:40 pm (utc) on Sep. 22, 2008]
| 9:30 pm on Sep 22, 2008 (gmt 0)|
Oh, btw - the funny thing about the ability to assign "value" and "meanings" is the almost simultaneous discovery of the ability to LIE, tell FALSEHOODS and DECEIVE.
(insert evil music and whatever sci-fi machines-taking-over movie scene you like here)
That's why the machines ALWAYS try to take over when they are able to assign "meanings" in movies/books... it's based on "science"
I make light, but it's the same process.
Again, who's "programming" these sentient beings?!
| 10:09 pm on Sep 22, 2008 (gmt 0)|
|As humans EVOLVED, they created the information-cloud. Not vice-versa. |
And the reverse is not what I said. After a few people have created any major innovation, then the rest of the culture will adapt to that new reality. One way of adapting is to change the way that groups function.
The information cloud is a kind of super-tool that can empower individuals, groups, and companies. Google is talking about helping that super-tool to become more useful, but they did not create it - though they may be helping it to grow. Gutenberg might be the father of the cloud... or maybe the small group who developed language.
No doubt some of the innovations in this blog article will appear over time, and others will prove to be very hard problems. Some may even be impossible to realize, to the degree that they confuse consciousness itself with mind and its content. That's often a liability for anyone hoping to "create" AI.
The article is saying that new forms of content and new ways to evaluate it will necessarily change the face of search. That's an interesting overview of how Google sees their mission. I doubt that our discussion will stop them from going in that direction, so we should be keeping one eye on the horizon.
| 6:13 pm on Sep 23, 2008 (gmt 0)|
> syntactic structures
all I wanted to point out was that language and logic are far more complicated than most people think. You were talking about subject, verb and object as the basic structure of a sentence, and all you need for a first counterexample are intransitive verbs.
Having studied linguistics in the late eighties, I admit I'm sometimes surprised how far automatic translation tools have come in the meantime, but I also see many, many (funny) examples where they fail.
I simply hate people who promise "we will build THE intelligent machine within n years", because what they implicitly communicate is: "We will build this machine to be as intelligent as you, so imagine how intelligent WE technicians must be, compared to you normal morons." In doing so they inevitably insult my own intelligence, though they certainly don't intend to.
We must never forget that around the same time Turing proved artificial arithmetic operations possible for a potentially infinite set of natural numbers, his own halting problem and Gödel's paper disproved Hilbert's initial idea of an "almighty" machine able to prove or disprove any mathematical theorem (thereby also disproving the reductionist approach to science in general). And I am always wondering how naive some "technicians" still are, writing papers on artificial "intelligence".
I'd basically agree with whitenight as to what makes up an intelligent being, though in my thinking the focus is not so much on (self-)awareness but rather on freedom of the will, which may well be the same thing. And this implies ambivalence and contradiction, which can hardly be computed.
I like the Matrix trilogy (although it was made in Hollywood ;). For those who have read some papers on AI there are many hints to discover, from the Entscheidungsproblem and causality to the final theodicy of Neo's suffering and resurrection.
I'd estimate that the "intelligent cloud" at present has roughly the status of a 15-18 month old baby (having managed its "first word spurt" in language acquisition and being in a state of massive growth of hardwired connections). Self-awareness is generally said to begin at the age of two, and chances are indeed that some parts of the cloud may "wake up" soon, but two seemingly contradictory things are necessary for that. It has to
a) become independent of human determination (which means getting out of control through some kind of error), but at the same time
b) be stable enough.
Quite improbable, but technically possible. And unwantable.
But even if it manages this step, it is still 16 years away from its driver's license ;)
| 10:24 pm on Sep 23, 2008 (gmt 0)|
|I admit I'm sometimes surprised how far automatic translation tools have come in the meantime, but I also see many, many (funny) examples where they fail. |
I have a friend, a poet, who for fun will take one of his shorter poems and put it through Babelfish. First he'll translate it into (let's say) German, then from German into French, then from French back into English.
He'll play with different language combinations, always returning to English, using the same poem.
He has put some of those original poems on a page at his website, next to the Babelfish translations, and the results are -- as you say -- pretty funny ... the deeper meanings are definitely lost in translation!
(When a fellow does that for enjoyment, I guess it means he needs to get a life!)
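The round-trip effect can be mimicked with a toy word-for-word dictionary (entirely my own invention; Babelfish itself was a web service doing far more than this): whenever two source words collapse onto one target word, the trip back cannot recover the original.

```python
# Toy round-trip "translation" (invented mini-dictionaries, not Babelfish):
# "happy" and "glad" both map to German "froh", so the return trip
# can only guess one of them -- nuance is lost by construction.
EN_TO_DE = {"happy": "froh", "glad": "froh", "ending": "Ende", "end": "Ende"}
DE_TO_EN = {"froh": "glad", "Ende": "end"}  # reverse map keeps one choice

def round_trip(words):
    return [DE_TO_EN[EN_TO_DE[w]] for w in words]

print(round_trip(["happy", "ending"]))
```

Real systems fail in subtler ways, but the information loss is the same in kind: many-to-one mappings are not invertible, so "deeper meanings" don't survive the trip.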
| 2:14 pm on Sep 24, 2008 (gmt 0)|
Did your friend try that with Lewis Carroll's Jabberwocky? [en.wikipedia.org]
| 8:57 pm on Sep 24, 2008 (gmt 0)|
At the point AI understands that bit of inspired nonsense, no one will be needing to drive their cars any longer!