1script - 4:37 pm on Sep 25, 2012 (gmt 0)
This sounds counter-productive, or at the very least redundant. There should really be no canonical version of a search page: it should all be noindexed. Otherwise someone can link to a page with a URL like www.example.com/search.php?q=VERY_BAD_WORD and there you have it: your search page now has an inbound link with VERY_BAD_WORD in the anchor.
If you do use it, I would add the rel=canonical tag to the results pages pointing just to the main search page.
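For what it's worth, a minimal sketch of what that would look like in the head of each results page (the /search.php path and domain are just placeholders matching the earlier example, not anyone's actual setup):

```html
<!-- Served on every results URL, e.g. www.example.com/search.php?q=anything -->
<head>
  <!-- All query variations consolidate to the bare search page -->
  <link rel="canonical" href="https://www.example.com/search.php">
</head>
```

Note this only consolidates the duplicate URLs; it doesn't keep the page out of the index on its own.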
Here is another scenario which has already killed one of my sites (a few years ago, but I'm sure it's still going to be detrimental now): You have a search page on the site that accepts GET requests, hence has a different URL for each search. Someone (a competitor or just a curious hacker) can link to a "not found" search page using any combination of anchors and URLs (just changing q=xyz at the end of the URL) and there you go, your site has just picked up plenty of new pages, all duplicates.
In my case the "helpful" search script also added a part saying "We could not find xyz, would you like to check other searches that we think are relevant?" and linked to a couple more search pages. That eventually snowballed into 2M+ nearly identical pages that Google still (5 years later) thinks my site has. The site has had what seems like EVERY penalty Google has ever devised (-950, -800, -50, you name it) and is still lingering mostly on pages 2-3-5 despite having been in positions 1-3 for years before the incident.
Anyway, a search page really has no information worthy of being indexed by itself, so it needs to be noindexed and also disallowed in robots.txt to conserve your crawl budget. Just make sure you implement noindex before disallowing it in robots.txt, so the bots have a chance to read the HTML and see the noindex tag; once a URL is disallowed, crawlers can no longer fetch the page and will never see the tag.
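A rough sketch of the two-step order I mean, again using the hypothetical /search.php path from the example above. Step 1, ship the noindex tag and leave the page crawlable until the URLs drop out of the index:

```html
<!-- Step 1: on every search results page, while it is still crawlable -->
<meta name="robots" content="noindex">
```

Step 2, only after the pages are gone from the index, block crawling to save budget:

```
# Step 2: robots.txt, added later
User-agent: *
Disallow: /search.php
```

If you do it in the opposite order, the disallow rule hides the noindex tag from the bots and the already-indexed URLs can linger in the index as URL-only entries.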