My website is example.com, and I have been developing it under beta.example.com
I had hoped search engines wouldn't find that, but they have, so now I need to hide it with robots.txt. But I'm really worried that putting a robots.txt in the root of beta.example.com will screw up my main site.
As long as client requests for beta.example.com/robots.txt and example.com/robots.txt return the two different (and correct) robots.txt files, you'll be fine. (You can easily test this in your browser.)
Search engine robots treat <any-subdomain>.example.com and example.com as separate websites, each with its own robots.txt and its own pages, so your approach is the correct one. It's when these requests are *not* handled separately by the server that you can get into trouble -- typically because of duplicate-content issues.
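If you want to confirm the rules do what you intend without waiting on a crawler, Python's standard-library robots.txt parser can evaluate a file's contents directly. A minimal sketch, assuming the beta file disallows everything and the main file allows everything (the file contents below are illustrative, not pulled from your server):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical contents: the beta file blocks all crawlers,
# the main file allows everything.
BETA_ROBOTS = """\
User-agent: *
Disallow: /
"""

MAIN_ROBOTS = """\
User-agent: *
Disallow:
"""

def is_allowed(robots_txt: str, url: str, agent: str = "Googlebot") -> bool:
    """Check whether `agent` may fetch `url` under the given robots.txt rules."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

# The beta rules should block every page; the main rules should allow them.
print(is_allowed(BETA_ROBOTS, "https://beta.example.com/any/page"))  # False
print(is_allowed(MAIN_ROBOTS, "https://example.com/any/page"))       # True
```

You can paste the actual contents served at each /robots.txt URL into the two strings above to verify each host returns the rules you expect.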