Forum Moderators: goodroi
I think the crux of the matter is how robots crawl pages. Do they somehow automatically grab all the pages in a directory, or can they only reach pages that are directly linked from another page?
Also important for my site is how robots handle .cgi pages that require a login.
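For context, I know robots.txt can be used to keep well-behaved crawlers away from parts of a site, so I was thinking of something like this for the login-protected scripts (the path is just an example from my setup):

```text
# Ask all compliant robots to skip the login-protected CGI scripts
User-agent: *
Disallow: /cgi-bin/
```

I'm not sure whether that's the right approach, or whether the login requirement alone already stops them.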
Does anyone have any thoughts?
Thanks,
Matthew