Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines.
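As a rough sketch, assuming your internal search pages live under a /search path (adjust to your own URL structure), the robots.txt rule would look like this:

User-agent: *
Disallow: /search

Keep in mind robots.txt only blocks crawling; a URL Google has discovered through links can still show up in results, which is where a noindex directive on the page itself comes in.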
Google is showing my own site's internal search pages in the SERPs as well, even though I've had them blocked for years.
With big sites like Amazon and eBay, it's easier just to search than to dig through their navigation. I wonder if something changed where Google feels some sites' search pages do indeed return valuable results and allows it to happen.
Googlebot is an automated program that seeks out data. If a search result page returns a 200 status code, I would not be surprised if Googlebot crawls it. Google has a history of penalizing big sites that did not follow the guidelines, so this just might be a glitch.
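A quick way to sanity-check what your internal search pages return, as a minimal sketch (the URL and query string are placeholders, substitute one from your own site):

import urllib.request

# Hypothetical internal search URL; replace with a real one from your site.
url = "https://www.example.com/search?q=widgets"

req = urllib.request.Request(url, headers={"User-Agent": "status-check"})
with urllib.request.urlopen(req) as resp:
    # A 200 here means the page is served normally, so Googlebot can crawl it
    # unless robots.txt (or a noindex directive) tells it otherwise.
    print(resp.status)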