The robots.txt file and the robots META tag are used to tell robots what they should and should not access.
Most robots follow the Robots Exclusion Standard, which can be found at [info.webcrawler.com...]
The idea of limiting a robot's crawling is that you might not want all pages indexed (any indexed page can be an entry point to your site). You wouldn't want visitors to start their visit at the feedback page, for example, or perhaps you don't want robots to index the PDF docs you keep in a certain directory. Or there might be a section that's password protected.
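A minimal robots.txt covering those cases might look like this (the paths here are illustrative, not from the original post):

```
# Applies to all robots
User-agent: *

# Keep the feedback page out of the index
Disallow: /feedback.html

# Block the directory of PDF docs
Disallow: /pdf/

# Block the password-protected section
Disallow: /members/
```

The file must live at the root of the site (e.g. example.com/robots.txt), and Disallow rules are prefix matches, so /pdf/ blocks everything under that directory.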
robots.txt is good for blocking off high-profile directories. If it's specific pages you want to block, I'd use a noindex META tag instead - far more effective. Engines (like Google) only re-fetch robots.txt periodically and cache it, whereas the META tag is read on every page fetch.
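The META tag goes in the head of each individual page you want kept out of the index, for example:

```html
<!-- Tells robots not to index this page, but still follow its links -->
<meta name="robots" content="noindex, follow">
```

Use "noindex, nofollow" instead if you also want robots to ignore the links on that page.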