I have trouble believing any reputable/large search engine would ignore robots.txt - if one did, that spider would sooner or later find itself physically blocked, at the server or firewall level, from the majority of managed websites.
It is not unreasonable to suggest that a spider that can't gain access to lots of websites makes for a less useful search engine, and once a search engine is in that situation there are only so many options:
1) Fix your search engine to work with robots.txt
2) Leave the search engine business
3) Carry on and pretend that everything is fine
Obviously it's a lot easier to build a working spider up front (or at least to learn from webmaster comments that describe where your spider is failing) than to fix it only once lots of sites have blocked it and your business is failing as a result.
People are generally very tolerant of most things search engine spiders do, but that tolerance does not extend to ignoring robots.txt. This is a very clear cut issue because the protocol protects both parties: the website keeps crawlers out of areas it wants left alone, and the spider is steered away from traps and endlessly generated URLs.
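To illustrate, here is a minimal robots.txt - the paths are hypothetical, but one Disallow serves the site and the other serves the spider:

# /private/ protects the website; /cgi-bin/calendar protects the spider
# from an endless run of script-generated pages. Paths are examples only.
User-agent: *
Disallow: /private/
Disallow: /cgi-bin/calendar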
There are several spiders that do not heed robots.txt: they either never pull the file at all or pull it and totally ignore it. Most of these are email harvesters, downloading agents, spam bots, and leechware. These aren't legitimate bots/spiders that will help you in the real world, so adding a confirmed abuser to robots.txt is generally a waste of effort - a server-level block is the only thing such a bot will actually notice.
As a rule, most bots with variations of "rip", "siphon", "harvest", "download", etc. in their names are going to disregard robots.txt and do as they please. It's probably more constructive to concentrate on the spiders you do want visiting your site and use robots.txt to tell them which pages and directories to parse than to try to ban rude bots through it.
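As a sketch of that approach, the file below names the spiders you want (Googlebot and Slurp here are only examples, as are the directories) and shuts everything else out by default. A rude bot will ignore the last record anyway, but the file stays focused on the crawlers that honour it:

# Spiders we want: index everything except areas with no search value.
# Bot names and directories are illustrative only.
User-agent: Googlebot
User-agent: Slurp
Disallow: /cgi-bin/
Disallow: /temp/

# Everyone else gets nothing by default.
User-agent: *
Disallow: /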
If you look through the posts at WebmasterWorld you'll see lots of people reporting whether a given bot disregarded robots.txt, and you can form your own conclusions about who/why/what to include or exclude in your own file. Also, many of the current legitimate spiders are listed at searchengineworld.