SteveWh - 11:46 am on Jun 22, 2010 (gmt 0)
The following seem to be some kind of exploit:
Unless there was more code in the request than you posted, those don't have anything in them that could be exploits.
Google and Yahoo both expect sites to return 404 for nonexistent pages. Yahoo tests for this explicitly with its SlurpConfirm404 crawler, which requests pages it knows are not on your site. If you return a 200 response, or redirect to another page that returns a 200, it is treated as an attempt to get more pages indexed than you actually have. Handling 404s incorrectly can also get you hit with a duplicate content penalty, if the site sends visitors to the same landing page regardless of which nonexistent page was requested.
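The good and bad responses described above can be summed up in a small sketch. This is a hypothetical classifier, not anything Yahoo publishes: given the final status code for a probe URL that is known not to exist, it labels the site's 404 handling the way a SlurpConfirm404-style test might.

```python
def classify_404_handling(status: int, was_redirected: bool = False) -> str:
    """Classify how a site answered a request for a page that is
    known not to exist (the kind of probe SlurpConfirm404 makes).

    status         -- final HTTP status code after following redirects
    was_redirected -- whether any redirect occurred along the way
    """
    if status in (404, 410):
        return "ok"                     # page correctly reported as gone
    if was_redirected and status == 200:
        return "soft-404 via redirect"  # redirecting missing pages to a live page
    if status == 200:
        return "soft-404"               # serving real content with a 200
    return "other"                      # e.g. the 500 in the log line below
```

Only the first case keeps you safe from the soft-404 and duplicate-content problems mentioned above.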
126.96.36.199 - - [21/Jun/2010:20:59:44 +0100] "GET /%3Csc%3Cscript%20src=http://example.com/x.js%3E%3C/script%3E HTTP/1.0" 500 6029 "-" "Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; http://help.yahoo.com/help/us/ysearch/slurp)" 0 widgetexample.com "-" "-"
Too bad you can't post what example.com really was. That is strange code, but the question is whether example.com was actually a malicious site.
Given the Google Safe Browsing malicious-website database, and Yahoo's equivalent, it wouldn't surprise me if search engines started crawling sites to test for vulnerabilities like XSS. I don't know whether any are currently doing it, though.
...Others redirect if a browser is detected (keep visitors on the site) but return a 404 if it's a bot (to please google, really).
Google calls that cloaking (giving different results to search engines than to human visitors), and their quality guidelines warn against doing it.
The correct way to handle a nonexistent page is to send a 404 status code. On the 404 page, you can put links to wherever else you want, but it is important not to use any method to redirect automatically.