|Google Goggles - search by submitting a photo|
Google Goggles [news.cnet.com]
|Google announced the ability to perform an Internet search by submitting a photograph. |
The experimental search-by-sight feature, called Google Goggles, has a database of billions of images that informs its analysis of what's been uploaded, said Vic Gundotra, Google's vice president of engineering. It can recognize books, album covers, artwork, landmarks, places, logos, and more.
"It is our goal to be able to identify any image," he said. "It represents our earliest efforts in the field of computer vision. You can take a picture of an item, use that picture of whatever you take as the query."
However, the feature is still in Google Labs to deal with the "nascent nature of computer vision" and with the service's present shortcomings. "Google Goggles works well on certain types of objects in certain categories," he said.
I watched a video about this just last week - where the rest of the Google team was testing it and decided it was too rough to release. Guess they changed their collective mind.
It is remarkably adventurous technology - going even beyond speech recognition or the "name that tune" search. Just think about the constant data/image collection effort that needs to go into it.
I saw this on the MSNBC "Inside Google" show last week.
After reading this [nytimes.com...] I began wondering - how would I optimize for a photo-based product search?
|But the service can also delight and amaze. It had no trouble recognizing an Ansel Adams photograph of Bridalveil Fall in Yosemite, returning search results for both the image and a book that used that image on its cover. It also correctly identified a BlackBerry handset, a Panasonic cordless phone and a Holmes air purifier. It stumbled with an Apple mouse, perhaps because there was a bright reflection on its plastic surface. |
I guess I would have to have lots of different angled product shots, each image with proper alt text, and make sure that the data is in Froogle (or whatever they are calling it this week).
Anyone else have any thoughts on optimizing ecommerce search capabilities?
< moved from another location >
This feature has gotten me wondering whether, and how, it might create duplicate image issues.
Will photo owners now have to worry that scraped copies of their photos will outrank them?
[edited by: Robert_Charlton at 10:50 pm (utc) on Jan. 18, 2010]
I guess that would be better expressed as "duplicate content issues".
Apparently, the feature can discriminate between subtly different images of the same subject.
The Google documentation on this feature, though, is terrible, and it feels sidetracked by a push to sell Android/Nexus phones...
Google Goggles - Labs
Use pictures to search the web
Mobile Help - Google Goggles [google.com]
What's not immediately clear to me from reading the above is whether submitting an image as a search query implies that it will be "published" in Google Images.
I was thinking about what happens if Google applies this technology to the image index in general - sort of like the way it tries to sort out duplicate content in the regular index.
We've seen plenty of comments about scraped content outranking the original.
Are they using Street View images to power this to start with? Seems like that would be likely for "location" type images.
But sorting subtle differences in images can be difficult even for the human eye at times.
The more I think about this the more questions I have.
See the discussion in the thread below (and original discussion I reference) about Google's Similar Images feature in image search. The feature uses pattern analysis and color analysis to search for similar images, based on light and dark composition, color patterns, etc....
Similar Images Moves Out Of Labs To Google Image Search
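For anyone curious what analysis "based on light and dark composition" might look like under the hood, here's a toy sketch in Python. This is my own illustration, not Google's actual algorithm: it compares two grayscale images by luminance-histogram intersection, so images with similar tonal distributions score close to 1.0 and tonally different images score near 0.

```python
# Toy illustration of similarity-by-tonal-composition (NOT Google's
# actual algorithm): compare grayscale "images" (lists of pixel rows,
# values 0-255) via normalized histogram intersection.

def luminance_histogram(pixels, bins=8):
    """Bucket 0-255 grayscale values into `bins` bins, normalized to sum to 1."""
    hist = [0] * bins
    flat = [p for row in pixels for p in row]
    for p in flat:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(flat)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """1.0 means identical tonal distributions; 0.0 means fully disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two "photos" of the same dark-ish scene, plus one bright outlier.
dark_a = [[30, 40, 50], [35, 45, 55], [20, 60, 70]]
dark_b = [[32, 41, 52], [33, 47, 58], [22, 61, 72]]
bright = [[200, 210, 220], [230, 240, 250], [190, 205, 245]]

sim_same = histogram_intersection(luminance_histogram(dark_a),
                                  luminance_histogram(dark_b))
sim_diff = histogram_intersection(luminance_histogram(dark_a),
                                  luminance_histogram(bright))
print(sim_same, sim_diff)  # similar scenes score higher than dissimilar ones
```

Of course, a real system would combine many signals (edges, shapes, local features), but even a crude tonal fingerprint like this is enough to cluster "similar" images.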
Note that there's also a current discussion in Supporters about a search engine that makes some amazingly fine distinctions between images, probably using technology analogous to Google's. I've posted the Similar Images reference in that thread too....
Find who's scraping your images
|Are they using Street View images to power this to start with? Seems like that would be likely for "location" type images. |
I hadn't thought about Street View as an image source. Google also has a large image database, probably continuously stitched together as much as possible with accompanying geo data.
And, for landmark and place queries, Google might use GPS data from your phone or your images, if there's any to be had.
Ultimately, anything that correlates is probably going to be used.
Doesn't seem to be much interest in this feature and what it might mean for image owners.
Getting a question buried by having it merged into a dead thread probably doesn't help.
Perhaps interest will pick up if the duplicate image/content issue becomes a problem as the feature rolls out.
ken_b, I'm missing the connection between this new Goggles feature and image scraping/ranking. Care to expand on your thinking here?
My concern is this: if G applies this technology to the general image index for image search, what impact might that have on image gallery publishers, for instance - or on any image owner, for that matter?
G returns image serps ranked by what? The same standards they use to rank text results?
Let's say an image scraper comes along and scrapes my image of a blue widget.
Now with this new "search via image" feature some surfer snaps a photo of a blue widget and sends it off to get more info about it.
G then apparently sends the image data off to find a similar image somehow and comes back with the results.
Even if both copies of the image are in the serps, which one shows up first in the image results - that of the actual image owner, or that of the scraper?
Does the actual amount of related info on the destination page matter?
And when it comes to "subtle differences" it gets even more dicey. I've been at plenty of events standing in a row of people taking photos of the same widget at the same time from almost identical angles. Even I have a hard time telling my photo from the one taken by the guy standing on either side of me. I'm not convinced G can do any better at this time.
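For what it's worth, one common approach to near-duplicate detection (a generic technique, not necessarily what Goggles does) is a perceptual "difference hash": shrink the image to a tiny grid and record only whether each pixel is brighter than its left neighbor. Two shots of the same scene hash identically even if one is slightly brighter overall, while a structurally different image lands far away. A minimal sketch, with made-up pixel data:

```python
# Minimal difference-hash (dHash) sketch - a generic near-duplicate
# technique, not Google's disclosed method. Works on tiny grayscale
# grids (lists of pixel rows).

def dhash(pixels):
    """For each row, record whether each pixel is brighter than the one
    to its left. Captures gradient structure, not absolute brightness."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if right > left else 0)
    return tuple(bits)

def hamming(h1, h2):
    """Number of differing bits; 0 means near-duplicates."""
    return sum(a != b for a, b in zip(h1, h2))

# Same scene shot twice (second exposure slightly brighter overall)...
shot_a = [[10, 50, 90], [20, 60, 100], [30, 70, 110]]
shot_b = [[15, 55, 95], [25, 65, 105], [35, 75, 115]]
# ...versus an unrelated scene whose gradients run the other way.
other  = [[90, 50, 10], [100, 60, 20], [110, 70, 30]]

print(hamming(dhash(shot_a), dhash(shot_b)))  # 0: same structure
print(hamming(dhash(shot_a), dhash(other)))   # 6: every gradient flipped
```

Note the flip side of this for the scraper question: an exact or lightly re-compressed copy of your photo would hash identically to the original, so the hash alone can't tell the owner's copy from the scraper's - ranking signals would still have to decide which one surfaces.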
I'm just thinking that this is going to complicate life for image owners if, or when, G applies this technology to image search in general, and I can't imagine that they won't.
Think about the implications for all those images where on-page captions/descriptions are missing or lacking in detail, let alone those where the alt text is empty. I don't see how G can resist applying this technology in light of those cases. They can make a pretty strong argument that it will make for a better image search.
At least it seems that way to me.