tedster - 11:17 am on Jun 13, 2012 (gmt 0)
If you think about the robots.txt protocol from the point of view of programming a bot, the "Disallow" standard makes sense. You wouldn't usually want a potentially monster list of every URL the bot is allowed to visit - just a few "keep out" notices.
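To make that concrete, here's about the simplest "keep out" notice a robots.txt can carry, plus a quick sketch of how a bot would check URLs against it before crawling. This uses Python's standard urllib.robotparser; the bot name and example.com URLs are made up for illustration.

from urllib.robotparser import RobotFileParser

# the robots.txt a site might serve - one "keep out" notice covers a whole section
robots_lines = [
    "User-agent: *",
    "Disallow: /admin/",
]

rp = RobotFileParser()
rp.parse(robots_lines)

# the bot only needs to test each URL against the short Disallow list
print(rp.can_fetch("ExampleBot", "https://example.com/admin/login"))      # False - blocked
print(rp.can_fetch("ExampleBot", "https://example.com/products/widget"))  # True - anything not disallowed is crawlable

The point is that the bot never needs a list of permitted URLs; anything that doesn't match a Disallow rule is fair game.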
Even though both Bing and Google say they now support a few extensions to the standard syntax, the actual current standard is explained here: [robotstxt.org...]
...and here is Google's Help page: [support.google.com...] If you start blocking some URLs or URL patterns, the details on Google's page become important for getting exactly the results you intended.
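As one made-up illustration of the extended pattern syntax Google documents (the paths here are placeholders, not recommendations):

User-agent: Googlebot
# block any URL carrying a session ID in the query string
Disallow: /*?sessionid=
# block PDFs anywhere on the site - the $ anchors the match to the end of the URL
Disallow: /*.pdf$
# Allow is one of the extensions - per Google's docs the more specific rule wins,
# so this one file stays crawlable inside an otherwise blocked directory
Allow: /private/public-report.html
Disallow: /private/

Note that the * and $ wildcards and the Allow directive are extensions - bots that only follow the original standard may read these lines differently, which is exactly why the details matter.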