I'm currently using a robots.txt file, and it validates. But I think I may be inadvertently keeping Google out of some directories that I really want it to crawl. Now I'm wondering whether I even need a robots.txt at all.
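For context, I mean rules of this general shape (the directory names here are just placeholders, not my actual paths):

    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /private/

My worry is that a Disallow line like one of these might cover more of the site than I intended.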
I think the crux of the matter is: how do robots crawl pages? Do they somehow grab all the pages in a directory automatically, or can they only reach pages that are directly linked from another page?
Also important for my site is how robots handle .cgi pages that require a login.