Msg#: 4079797 posted 1:29 pm on Mar 16, 2010 (gmt 0)
It depends on what you are looking for. A robots.txt Disallow prevents crawling, but it doesn't prevent indexing. So those directives will stop the content of the pages from being crawled, but the URLs themselves can still end up in the index (for example, if other pages link to them). To keep a URL out of the index you need the opposite setup: a robots.txt that *allows* crawling, plus a robots noindex meta tag in the document or the equivalent X-Robots-Tag HTTP header, so the crawler can actually fetch the page and see the directive.
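To make the distinction concrete, here's a sketch of the two setups (paths and filenames are just examples):

```
# Blocks crawling only -- the URL can still be indexed from external links:
#   robots.txt
User-agent: *
Disallow: /private/

# Blocks indexing -- crawling must be allowed so the directive is seen.
# Either a meta tag in the page's <head>:
<meta name="robots" content="noindex">

# Or, for non-HTML files (PDFs etc.), an HTTP response header:
X-Robots-Tag: noindex
```

Note that combining both is self-defeating: if robots.txt disallows the URL, the crawler never fetches the page and never sees the noindex directive.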