I'd love to see this implemented. It's an elegant approach, and computationally less demanding than other image stitching approaches that I've seen presented, which involve the meshing of a huge number of individual images.
The most impressive of those presentations, not coincidentally, were made by someone also connected with Microsoft: Blaise Agüera y Arcas, "the architect of Bing Maps at Microsoft," in two TED Talks. The concepts behind the Photosynth video in particular, I feel, have profound implications for all of search, not simply local imagery, and suggest why personalization and user data are of such interest to the various search engines. Note the use of the phrase "augmented reality" in the second video.
Blaise Aguera y Arcas demos Photosynth
http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html
Blaise Aguera y Arcas demos augmented-reality maps
http://www.ted.com/talks/blaise_aguera.html
Google has been working on Google Goggles and Google Similar Images... in their current forms, neither is nearly so impressive as what I've seen from Microsoft. I made some comments about these in the Google forum that, together with the above, hint at how local image search and street views, together with GPS data from a camera or phone, might ultimately tie in....
Google Image Recognition
http://www.webmasterworld.com/google/4153195.htm
Explanations of image recognition software (and image stitching software) that I've read suggest that an image can be broken down into polygons and then matched against other images using the "corners" of shapes the software identifies.
It works OK for specific things: major landmarks, anything with a barcode, etc.
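To make the "corners" idea concrete, here's a rough sketch of one classic way corners are scored: the Harris corner response, computed in pure Python on a tiny synthetic image. This is my own illustrative toy, not the algorithm any particular product uses; real stitching software uses optimized detectors and descriptors. The core notion is the same, though: corners are points where intensity changes strongly in two directions, so they survive viewpoint changes and can anchor a match between overlapping photos.

```python
def harris_corners(img, k=0.05, window=1):
    """Return a corner-response map for a 2-D list of grayscale values."""
    h, w = len(img), len(img[0])
    # Image gradients via central differences.
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    # Harris response R = det(M) - k * trace(M)^2, where M is the
    # structure tensor summed over a small window around each pixel.
    resp = [[0.0] * w for _ in range(h)]
    for y in range(window, h - window):
        for x in range(window, w - window):
            sxx = sxy = syy = 0.0
            for dy in range(-window, window + 1):
                for dx in range(-window, window + 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    sxy += gx * gy
                    syy += gy * gy
            det = sxx * syy - sxy * sxy
            resp[y][x] = det - k * (sxx + syy) ** 2
    return resp

# A 10x10 image: bright square on a dark background.
img = [[255.0 if 3 <= y <= 6 and 3 <= x <= 6 else 0.0
        for x in range(10)] for y in range(10)]
resp = harris_corners(img)
corner_r = resp[3][3]     # at a corner of the square
edge_r = resp[3][5]       # along the square's top edge
flat_r = resp[1][1]       # flat background
print(corner_r > edge_r)  # prints True: corners score above edges
```

Flat regions score near zero and straight edges score lower than corners, which is exactly why corners make good anchor points for matching two photos of the same scene.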
Clearly, you need a starting point for matching; otherwise you'd have to crunch an impossible amount of data to find a match. And the more data you have, the more refined and accurate the entire model becomes.
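The "starting point" idea could be sketched like this: before any pixel-level matching, GPS metadata can throw away candidate images that couldn't possibly overlap the query photo. Everything here is my own toy illustration; the photo names, index structure, and 500 m radius are assumptions, not anything from an actual search engine.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby_candidates(query_gps, indexed_photos, radius_m=500):
    """Keep only photos whose GPS tag is within radius_m of the query."""
    qlat, qlon = query_gps
    return [p for p in indexed_photos
            if haversine_m(qlat, qlon, p["lat"], p["lon"]) <= radius_m]

# Hypothetical index: a few photos tagged with GPS coordinates.
photos = [
    {"id": "eiffel_1", "lat": 48.8584, "lon": 2.2945},
    {"id": "eiffel_2", "lat": 48.8587, "lon": 2.2950},
    {"id": "louvre_1", "lat": 48.8606, "lon": 2.3376},  # ~3 km away
]
query = (48.8583, 2.2944)  # a shot taken at the Eiffel Tower
print([p["id"] for p in nearby_candidates(query, photos)])
# → ['eiffel_1', 'eiffel_2']
```

Only the survivors of this cheap geographic filter would ever reach the expensive corner-matching stage, which is how GPS data from a camera or phone could make the whole thing tractable.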