| 10:37 pm on Feb 17, 2006 (gmt 0)|
So, in five years, we won't have GoogleGuy talking in these forums, it'll be GoogleAI.
The idea that an AI needs vast amounts of data is nonsense. Natural intelligence is based on understanding, not knowledge - confusing the two is a mistake often made by people who lack intelligence themselves. And if vast quantities of data are not required to build an AI, then until a working AI exists, the data is worthless (to an AI project).
Of course, all data might be useful to a fully functional AI, but software engineers are nowhere near developing an AI even though modern computer hardware could undoubtedly perform all the required tasks.
How might an AI be developed?
1) Start from the beginning - current mainstream languages are unlikely to suffice.
2) Define intelligence in some meaningful way - I consider it to be the ability to solve new problems.
3) Devise an evolutionary, self-modifying program that can test its own intelligence based on ever more difficult and diverse problems.
4) Run the program on at least a thousand computers each with different seeding.
5) Try to create some sort of mating system by which two (or more) programs merge into one, initially smaller program (i.e. have children).
6) Be prepared for failure.
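Steps 3-5 can be sketched as a tiny genetic algorithm. Everything here is invented for illustration - the candidate "programs" are just bit strings, the "problem" is matching a target pattern, and mating is one-point crossover - so this is a toy model of the recipe, not a serious AI proposal.

```python
import random

GENOME_LEN = 32
TARGET = [1] * GENOME_LEN  # stand-in for an ever-harder problem set

def fitness(genome):
    # how well does this candidate solve the current problem?
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # step 3: self-modification, here as random bit flips
    return [1 - g if random.random() < rate else g for g in genome]

def mate(a, b):
    # step 5: two parents merge into one child via one-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(seed, generations=200, pop_size=50):
    random.seed(seed)  # step 4: each machine would use a different seed
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # keep the fittest half
        children = [mutate(mate(random.choice(survivors),
                                random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(seed=42)
print(fitness(best))
```

Running `evolve` with a thousand different seeds would give step 4's thousand independent populations; step 6 is covered by the fact that nothing here scales beyond toy problems.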
| 7:57 am on Feb 18, 2006 (gmt 0)|
It'll never pass the Turing Test.
Who's going to be fooled into thinking that a superfast superintelligence that has read every book in existence is a human being? :-)
| 12:27 pm on Feb 18, 2006 (gmt 0)|
Before it was achieved, the ability to play chess was considered a test for AI. Now we have Deep Blue, which can beat the world champion (perhaps by bending the agreed rules on tweaking between games), but it isn't considered intelligent.
| 2:53 pm on Feb 18, 2006 (gmt 0)|
It would make perfect sense for Google to get into AI. The problem with searching on the web is that there's such a vast amount of information out there that just entering a bunch of keywords often doesn't bring up the information you want. In my experience this was different when Google started -- I was often amazed then how Google seemed to "read my mind" and take me straight to what I was after.
Now the web's bigger, and spammier, and it's getting to the point where we need an intelligence that we can turn to that understands what our interests are and understands natural language queries. Without that we're just going to drown in a sea of data.
Perhaps a valuable function of a Google AI would be to talk to you in order to find out exactly what you're looking for so that you can refine your search. This could be an optional feature, so that if you're trawling through page after page of results you can click on a link saying "Having trouble finding what you're looking for?" and get some natural language help. Of course it could be that some of the arcane data I'm looking for just isn't out there!
| 3:00 pm on Feb 18, 2006 (gmt 0)|
It's not about the Turing Test - it might have more to do with understanding context, translations, etc.
You might want to look up the references about google and UN translations.
| 3:40 pm on Feb 18, 2006 (gmt 0)|
But take a look at <the Whois records for ownership>
[edited by: tedster at 8:31 pm (utc) on Feb. 18, 2006]
| 6:18 pm on Feb 18, 2006 (gmt 0)|
Even more interesting.
I have a domain that includes Google in the name. And I couldn't believe it was available when I bought it. There are a lot of these domains out there but if Google ever needs or wants them I'm sure it will be able to get them reassigned without much trouble.
| 5:36 pm on Feb 19, 2006 (gmt 0)|
I was thinking along the lines of Google's AI developing out of their search engine.
They are in the business of analysing and organising content. Perhaps refinements to the techniques of analysing content - using the interrelations between words - could lead to a system capable of making a sophisticated response to a natural language query. However, whether this kind of system could be called intelligent is another matter. Obviously we're still way off.
| 6:55 pm on Feb 19, 2006 (gmt 0)|
It comes down to learning from data, and to having enough of the right data to learn from.
Google has already developed a machine learning model which has been running unsupervised for a number of CPU years, essentially finding correlations between words and segmenting them into clusters of related words. Interestingly enough, it wasn't seeded with any data.
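Google's actual clustering system isn't public, but the idea described above - finding correlations between words from raw text and grouping correlated words together - can be sketched with co-occurrence counts and a simple similarity cutoff. The tiny corpus, the Jaccard-style similarity, and the 0.4 threshold are all made up for illustration.

```python
from collections import defaultdict
from itertools import combinations

corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "stocks and bonds fell",
    "bonds and stocks rallied",
]

# count how often each word appears, and how often pairs co-occur
cooc = defaultdict(int)
freq = defaultdict(int)
for sentence in corpus:
    words = set(sentence.split())
    for w in words:
        freq[w] += 1
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1

def similarity(a, b):
    # Jaccard-style overlap: co-occurrences / occurrences of either word
    pair = tuple(sorted((a, b)))
    return cooc[pair] / (freq[a] + freq[b] - cooc[pair])

# greedy single-link clustering: a word joins the first cluster
# containing any sufficiently correlated member
clusters = []
for w in freq:
    for c in clusters:
        if any(similarity(w, m) >= 0.4 for m in c):
            c.add(w)
            break
    else:
        clusters.append({w})

print(clusters)
```

On this corpus the words split cleanly into an animal cluster and a finance cluster - "related words" emerging from nothing but correlation statistics, with no seed data, which is the property the post describes.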
"They are in the business of analysing and organising content. Perhaps refinements to the techniques of analysing content - using the interrelations between words - could lead to a system capable of making a sophisticated response to a natural language query."
I think Google has a fair amount of disdain for the quality of information and data found on the web, and the information garnered from the book scanning would ultimately be used to refine the SERPs. Whether Google Print is a success or not, IMO, the information they will get is a goldmine and will distinguish them from all the other SEs. That is why they have pushed ahead with this despite all the concerns of publishers.
"Perhaps a valuable function of a Google AI would be to talk to you in order to find out exactly what you're looking for so that you can refine your search."
It's already happening: broad queries now return three sets of refined queries for the broader term ("See results for: refined query").
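One plausible way such refinements could be generated - purely a guess, since Google hasn't said how the feature works - is to mine a query log for the most frequent longer queries containing the broad term. The log and the cutoff of three suggestions below are invented.

```python
from collections import Counter

# hypothetical query log; in practice this would be millions of entries
query_log = [
    "java", "java tutorial", "java tutorial", "java download",
    "java download", "java download", "java coffee", "python tutorial",
]

def refine(broad, log, k=3):
    # count longer queries that contain the broad term as a word,
    # and return the k most popular as refinement suggestions
    counts = Counter(q for q in log if broad in q.split() and q != broad)
    return [q for q, _ in counts.most_common(k)]

print(refine("java", query_log))
# → ['java download', 'java tutorial', 'java coffee']
```

For "java" this surfaces the three most popular refinements from the log, which matches the "See results for:" behaviour described above.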