Powdork - 5:35 am on Aug 24, 2003 (gmt 0)
I don't think this is necessarily the case. robots.txt says "do not index this file/directory"; to me there's a difference between indexing a file and simply reading it to get a real representation of how the page looks.
Disallowing through robots.txt will keep Googlebot from ever crawling the page. Google can and does index URLs that are disallowed through robots.txt: it will list only the URL in the SERPs, and the only factor applied when ranking these pages is the anchor text pointing at them.
Simply put, if Google GETs a page that is disallowed through robots.txt, you should contact them (after checking your syntax). :)
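To illustrate the crawl/index distinction being discussed, here's a minimal sketch using Python's standard-library robots.txt parser to check whether a crawler may fetch a URL. The `example.com` paths and the `/private/` rule are hypothetical, just to show how a Disallow line is interpreted:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: block Googlebot from /private/
rules = """\
User-agent: Googlebot
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler checks this before issuing a GET.
# Disallowed path: must not be fetched (though the bare URL
# can still appear in results via anchor text).
print(rp.can_fetch("Googlebot", "http://example.com/private/page.html"))  # False

# Path not covered by any Disallow rule: fetching is allowed.
print(rp.can_fetch("Googlebot", "http://example.com/public/page.html"))   # True
```

This matches the distinction above: Disallow governs fetching (crawling), not whether the URL itself can be listed.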