"We're starting to see things [in search] that appear intelligent but actually aren't semantically intelligent. So, for example, if you type GM into Google, you'll probably get General Motors. But if you type GM foods, we actually give you pages about genetically modified foods and General Mills [the US food company that was a key player in the GM debate]."
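The behaviour Mayer describes can be sketched as a toy lookup keyed on co-occurring terms. Everything here (the sense table, the function name) is an invented illustration of the idea, not Google's actual query logic, which would learn these associations from data at scale:

```python
# Toy context-sensitive disambiguation: the words that accompany an
# ambiguous term shift which sense of it we return. All data invented.
CONTEXT_SENSES = {
    "gm": {
        None: "General Motors",            # default sense, no context
        "foods": "genetically modified",   # co-occurring term shifts the sense
        "crops": "genetically modified",
    },
}

def disambiguate(query: str) -> str:
    """Pick a sense for an ambiguous leading term from the rest of the query."""
    tokens = query.lower().split()
    head, rest = tokens[0], tokens[1:]
    senses = CONTEXT_SENSES.get(head, {})
    for word in rest:
        if word in senses:
            return senses[word]
    return senses.get(None, head)

print(disambiguate("GM"))        # -> General Motors
print(disambiguate("GM foods"))  # -> genetically modified
```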
But there's a potential third form of search, she explains, which uses the sensors built into devices around us. "I think that some of the smartphones are doing a lot of the work for us: by having cameras they already have eyes; by having GPS they know where they are; by having things like accelerometers they know how you're holding them."
I picked up on this.
Which leads us to real-time search – a space where Twitter, in particular, has pulled ahead of the bigger company. Although it's emphatically unsaid, it's clear from studying the reactions of Mayer – and other senior people at Google – that the little company has unsettled its bigger, broader rival.
Expanding on the "eyes" part – Marissa's comments about the challenges of image recognition are worth dwelling on:
I do feel for the image-recognition people, because their problem has become significantly harder in the internet age. We're not getting closer to a solution; the solution just moves further away.
I'm personally not comfortable with the idea that some smartphone catches an image of me somewhere and then real-time search lets the world know my current latitude and longitude. I may have to learn to live with it some day, but I hope that day is far, far away.