pageoneresults - 5:25 pm on Jun 4, 2010 (gmt 0)
IOW: By allowing the pages to be crawled (using noindex rather than disallow), you let the engines complete their picture of the links, hierarchy and site structure, and you also 'capture' link weight from any inbound links to those pages from other sites, which then gets passed along. With a robots.txt disallow you have a 'link weight black hole'; with noindex you complete the picture of the site structure and pass the weight from links to those pages back to the pages that are in the index.
That's probably one of the best summaries I've seen to date of what I think. Thank you! :)
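For anyone who wants to see the two approaches side by side, the mechanical difference is just this (the /private/ path is only a placeholder):

User-agent: *
Disallow: /private/

in robots.txt stops the engines from fetching those URLs at all, so any link weight pointing at them is stranded, while putting

<meta name="robots" content="noindex, follow">

in the head of each page lets the page be crawled and its outgoing links followed; it only keeps the URL itself out of the index.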
Regarding the bandwidth savings discussion, I am wondering what the effects would be if the noindexed/nofollowed pages were user-agent cloaked, so that the document served to crawlers contained a head with the necessary meta element but essentially no body content.
I think the effects would be exactly what you intended. But this type of implementation requires a bit of regular maintenance, and it's not something the average Webmaster has the ability to do. One of these days I'll get into the cloaking stuff. No I won't. :)
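For anyone who wants to picture what that setup would involve, here is a minimal sketch, assuming a Python/WSGI front end and a hand-maintained list of crawler user-agent substrings (both choices are mine, purely for illustration, not something described above):

from wsgiref.simple_server import make_server

# Substrings to look for in the User-Agent header (assumption: maintained by hand).
CRAWLER_TOKENS = ("googlebot", "bingbot", "slurp")

# Head-only document carrying the meta element; this is what matching crawlers receive.
MINIMAL_DOC = (b"<html><head>"
               b'<meta name="robots" content="noindex, follow">'
               b"<title>Example page</title></head><body></body></html>")

# Full document for ordinary visitors.
FULL_DOC = (b"<html><head><title>Example page</title></head>"
            b"<body>Full content for human visitors.</body></html>")

def app(environ, start_response):
    # Choose the response body with a simple User-Agent substring match.
    ua = environ.get("HTTP_USER_AGENT", "").lower()
    body = MINIMAL_DOC if any(tok in ua for tok in CRAWLER_TOKENS) else FULL_DOC
    start_response("200 OK", [("Content-Type", "text/html"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()

The regular maintenance mentioned above is mostly that CRAWLER_TOKENS list: user-agent strings change and new bots appear, and a stale list means some crawlers quietly start receiving the full pages again.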