Lately I've taken to using robots.txt in very specialised cases: to stop SOME bots from crawling legitimate duplicate content on deprecated URLs/mirrors, mainly to save my bandwidth and the search engines'.
And that is exactly what robots.txt is for -- to save bandwidth and control cooperative robots' crawling of your site.
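For example, here's a minimal sketch of that kind of selective blocking (the bot name and mirror path are made up for illustration):

# Keep one particular bot out of a deprecated mirror directory
User-agent: ExampleBot
Disallow: /old-mirror/

# Everyone else may crawl everything
User-agent: *
Disallow:

A cooperative robot uses the most specific User-agent group that matches it, so "ExampleBot" obeys only its own group while all other bots fall through to the wide-open one.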
Along with that comes an improvement in the usability and validity of your log files and stats, since they won't be full of 404 Not Found errors from robots trying to fetch the customary robots.txt file.
You don't *have* to have a robots.txt file, but even if you don't need the robots-control facility it provides, adding one that's either blank or that contains
User-agent: *
Disallow:
is a very good idea, if only to keep your access log and error log clean and to avoid skewing your stats with all those errors from attempted robots.txt fetches.
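Once it's in place, one quick sanity check (the hostname here is a placeholder) is to confirm your server actually returns the file:

curl -I http://www.example.com/robots.txt

If the status line comes back 200 OK, cooperative robots will get the file instead of padding your error log with 404s.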