If you don't have a robots.txt file, then all crawlers will deem themselves allowed to fetch all pages and resources of your site.
However, once you move into the finer points of tuning your site to attract more (and more appropriate) visitors, you will find it a nuisance that your server access logs and "Website statistics" reports are filled with 404-Not Found errors caused by robots requesting robots.txt.
You can avoid this by uploading a blank file named robots.txt; an empty file is treated the same as no restrictions at all.
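If you prefer to be explicit rather than leave the file blank, the standard "allow everything" form looks like this (an empty Disallow value means nothing is disallowed):

```
User-agent: *
Disallow:
```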
In addition, most Webmasters Disallow robots from fetching any page that triggers an action, such as "voting" or sending an e-mail. Otherwise, you might find that between Googlebot and all the others, your "vote count" gets seriously skewed and your in-box fills up. It could be worse -- they might get into your shopping cart and deplete your inventory in a matter of hours. ;)
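As a sketch of how a well-behaved crawler interprets such a rule (the "/vote/" path and example URLs here are purely illustrative), Python's standard urllib.robotparser can be used to check what a compliant robot would and would not fetch:

```python
from urllib.robotparser import RobotFileParser

# Rules equivalent to a robots.txt that blocks an action-triggering path.
# "/vote/" is a hypothetical path used for illustration only.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /vote/",
])

# A compliant crawler checks before fetching:
print(rp.can_fetch("Googlebot", "http://www.example.com/vote/cast"))        # False
print(rp.can_fetch("Googlebot", "http://www.example.com/articles/1.html"))  # True
```

This is the same check major search-engine crawlers perform before each request -- which is exactly why a Disallow keeps your vote counts honest only for robots that bother to look.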
Be aware that a Disallow in robots.txt is only a "request" -- malicious and incompetent robots can ignore it at will. For those, a bit of server-side code to actually block access is needed.
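One common sketch of that server-side blocking on Apache -- the user-agent string "NastyBot" is hypothetical, so adapt the pattern to the robots you actually see in your logs -- is a few lines in .htaccess:

```
SetEnvIfNoCase User-Agent "NastyBot" bad_bot
Order Allow,Deny
Allow from all
Deny from env=bad_bot
```

This denies the request outright with a 403, rather than politely asking the robot to stay away. Keep in mind that user-agent strings are trivially forged, so determined bad actors may need IP-based blocking as well.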
[edited by: jdMorgan at 3:34 am (utc) on Sep. 20, 2008]