Computer vision is a hard problem and Google Goggles is still a Labs product. It works well for things such as landmarks, logos and the covers of books. However, it doesn’t yet work for some things you might want to try like animals, plants or food.
This is understandable. Most object recognition software needs some context to start from, such as GPS data and/or a landmark or object name. It then compares data points... things like the corners of polygons... against existing data on file for that landmark or object. Over time, I assume Google adds to its collection of data about a place or object and builds a more detailed model.
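Just to make the keypoint-matching idea concrete, here's a rough Python sketch using OpenCV's ORB features. This is purely illustrative... I obviously have no idea what Goggles actually does under the hood, and the function name and threshold numbers here are made up for the example.

```python
# Rough illustration of local-feature matching -- not Google's actual pipeline.
# Requires: pip install opencv-python
import cv2

def looks_like(query_path, reference_path, min_good_matches=25):
    """Return True if the query photo shares enough local features
    with a stored reference image (e.g. a known landmark photo)."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    # Detect keypoints (corners, blobs) and compute a descriptor for each.
    orb = cv2.ORB_create()
    _, query_desc = orb.detectAndCompute(query, None)
    _, ref_desc = orb.detectAndCompute(reference, None)

    # Match descriptors between the two images and keep only the close ones.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query_desc, ref_desc)
    good = [m for m in matches if m.distance < 50]  # arbitrary cutoff

    return len(good) >= min_good_matches
```

A real system would of course match against millions of reference images and models, not one file on disk, but the principle of comparing feature points is the same.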
Google would also have metadata about the object or place itself on file... as well as data about the surrounding location. Think 'Place Pages back end'.
With books and DVDs, I'm assuming there's some sort of OCR capability to identify a title, and that Google then jumps to metadata and previously seen images, most likely before trying computationally intensive image matching. I'm sure Google has identified optimal strategies, and/or that those strategies are evolving over time.
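Something along those lines might look like this in very rough Python form. pytesseract is a real OCR library, but the title index and fallback here are just made-up placeholders standing in for whatever Google actually queries:

```python
# Sketch of an "OCR first, image-match later" strategy. Purely hypothetical.
# Requires: pip install pytesseract pillow (plus the Tesseract binary)
import pytesseract
from PIL import Image

# Made-up stand-in for a real metadata index of known titles.
TITLE_INDEX = {
    "the art of computer programming": {"type": "book", "author": "Knuth"},
}

def lookup_title(title):
    return TITLE_INDEX.get(title.lower().strip())

def identify_cover(photo_path):
    """Try the cheap step first: read the title text off the cover."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    title_guess = " ".join(text.split())  # collapse whitespace and newlines

    record = lookup_title(title_guess)
    if record is not None:
        return record  # cheap OCR path succeeded, skip image matching

    # Only fall back to computationally expensive image matching if OCR
    # gives us nothing usable (not implemented in this sketch).
    return None
```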
Organisms that move present a much more difficult identification and modeling problem, of course: they vary in orientation and physique, have limbs that move, may have fur or hair (quite a challenge in itself), and they aren't rectilinear.
I would think, btw, that Google Goggles will eventually be a major part of Google's mobile search strategy, for both places and products. A phone-based camera interface is a natural.
See previous discussion on Google Goggles here: Google Goggles - search by submitting a photo http://www.webmasterworld.com/google/4039028.htm