Some spiders read robots.txt, some read the HTML meta robots tags in the pages, and some read both. There are also "bad" spiders which read neither, and some which read one or both but ignore whatever they find. A lot of energy is therefore spent banning these bad spiders from sites, since their usual purpose is to harvest e-mail addresses for unsolicited commercial e-mail, or to steal site content.
For the good spiders, as long as your on-page HTML meta robots tags agree with what you have specified in robots.txt, you should be OK. That is, a page you have Disallowed in robots.txt should carry a NOINDEX,NOFOLLOW meta robots tag, and a page that robots.txt allows should carry either INDEX,FOLLOW or INDEX,NOFOLLOW (depending on whether the pages it links to should be spidered).
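As a sketch of that agreement, assuming a hypothetical /private/ directory you want kept out of the engines, the robots.txt entry might look like:

```
# robots.txt -- tell compliant spiders to stay out of /private/
User-agent: *
Disallow: /private/
```

and each page under /private/ would carry the matching tag in its head section:

```html
<meta name="robots" content="noindex,nofollow">
```

The two then tell the same story whether a spider reads the file, the tag, or both.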
Originally, the on-page meta robots tag was intended for people who wrote web pages but did not have access to the administrative functions on the web server, i.e., robots.txt. The meta tag gave them some control over spiders indexing their content.
If you do have access to robots.txt, the on-page meta robots tags are kind of redundant...
However, according to something I read somewhere (maybe here), Inktomi's default behaviour is INDEX,NOFOLLOW. As a result, I implement both robots.txt and the on-page meta robots tags, so I can feed Inktomi an explicit INDEX,FOLLOW directive.
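Concretely, on pages that robots.txt allows and whose links I do want followed, that means stating the defaults outright rather than relying on a spider to assume them:

```html
<meta name="robots" content="index,follow">
```

It costs nothing for spiders that already default to INDEX,FOLLOW, and covers any that, like Inktomi reportedly does, default to something else.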