A press release at UCSD mentions this:
[jacobsschool.ucsd.edu...]
The process allows the software to be "trained" by humans: people tag a sample set of photos with words identifying the content, and that tagged sample set can then serve as the basis for automatically identifying objects within huge databases of unlabeled images.
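To make that train-then-label idea concrete, here's a rough Python sketch. The color-histogram features, the nearest-neighbor classifier, and the file names are all my own illustrative assumptions, not the actual method from the UCSD research:

    # Humans tag a small sample set; the model then propagates those
    # keywords to unlabeled images. The features and classifier here
    # are deliberately naive stand-ins.
    import numpy as np
    from PIL import Image
    from sklearn.neighbors import KNeighborsClassifier

    def histogram_features(path, bins=8):
        """Flattened per-channel color histogram as a crude image descriptor."""
        img = np.asarray(Image.open(path).convert("RGB"))
        hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
                for c in range(3)]
        feats = np.concatenate(hist).astype(float)
        return feats / feats.sum()  # normalize so image size doesn't matter

    # The human-tagged sample set: (file, keyword) pairs (hypothetical files).
    tagged = [("beach1.jpg", "beach"), ("beach2.jpg", "beach"),
              ("forest1.jpg", "forest"), ("forest2.jpg", "forest")]

    X = np.stack([histogram_features(path) for path, _ in tagged])
    y = [keyword for _, keyword in tagged]
    model = KNeighborsClassifier(n_neighbors=1).fit(X, y)

    # Automatically assign keywords to images that have no labels at all.
    for path in ["mystery1.jpg", "mystery2.jpg"]:
        print(path, "->", model.predict([histogram_features(path)])[0])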
Heretofore, I believe Google's image search service has relied primarily on metadata associated with images to perform keyword relevancy searches and rankings. For instance, when Google collects images from webpages, it looks at signals like the image's ALT text, caption descriptions below the image, the image file name, and other data like words found in the text near the image.
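For what it's worth, here's a rough sketch of pulling those kinds of signals out of a page with Python and BeautifulSoup. Which signals actually get used and how they'd be weighted are my guesses, and the sample HTML is made up; this is not Google's actual pipeline:

    # Extract the metadata signals mentioned above for each image on a page.
    from bs4 import BeautifulSoup

    html = """
    <p>Our trip to the Grand Canyon was amazing.</p>
    <img src="grand-canyon-sunset.jpg" alt="Sunset over the Grand Canyon">
    <p class="caption">The canyon at dusk, viewed from the South Rim.</p>
    """

    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        signals = {
            "alt_text": img.get("alt", ""),
            # Words in the file name itself (minus the extension).
            "file_name": img.get("src", "").rsplit(".", 1)[0].replace("-", " "),
            # Text in the elements immediately before and after the image.
            "nearby_text": " ".join(
                sib.get_text(" ", strip=True)
                for sib in (img.find_previous_sibling(), img.find_next_sibling())
                if sib is not None
            ),
        }
        print(signals)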
Their involvement with this research suggests they may intend to evolve their search service by setting up these automated methods for figuring out which keywords should be associated with images, particularly when those other metadata indicators are missing.