From what you say, you don't need a robots.txt at all, and I would advise against having one, just in case you make a mistake in it. That said, if you really want one, you could use a blank robots.txt file or one containing just this:
User-agent: *
Disallow:
While you say you don't want to exclude anything, it is a good idea to exclude the load of bots that will come along and cause pain. You can use this site's robots.txt as a starting point: delete any bots you do want to crawl, and delete the end section, which is site-specific.
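As a rough sketch of what that kind of file looks like (BadBot and OtherBot here are just placeholders for whatever crawlers you want to shut out; the real file names dozens):

User-agent: BadBot
User-agent: OtherBot
Disallow: /

User-agent: *
Disallow:

Each named bot gets a blanket Disallow: /, while the catch-all record at the bottom leaves every other crawler free to spider the whole site.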
You are right, there are no positive values such as "Allow". There are two ways to get round it.
To exclude all files except one
The easy way is to put all the files to be disallowed into a separate directory, say "docs", and leave the one file at the level above this directory:

User-agent: *
Disallow: /~joe/docs/
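This works because Disallow matches by URL prefix, so one line covers everything under that directory. A quick illustration, using made-up paths:

User-agent: *
Disallow: /~joe/docs/
# blocks /~joe/docs/a.html and /~joe/docs/old/b.html
# leaves /~joe/index.html crawlable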
Alternatively, you can explicitly disallow each unwanted page:

User-agent: *
Disallow: /~joe/private.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html
"I simply wish to return a robots.txt to tell all bots especially googebot that they are welcome to spider my entire site, in preference to returning my custom 404 page." I added the minimal robots.txt file to my site just to avoid all the 404 messages in my error log. It serves no useful purpose.
There was a time when search engines that couldn't find a robots.txt wouldn't spider. Last year ATW had a problem with this that lasted about six weeks. I had one site up without a robots.txt (not deliberately, I just forgot to write the *^$)* thing, and had a sort of blindness every time I looked that meant I didn't notice it was missing). ATW came at least twice a day, requested it (yes, eventually I looked at my logs!), couldn't find it, and went away empty-handed, so to speak.
In case such things happen elsewhere... better to have than have not.