pageoneresults - 4:04 pm on Nov 11, 2011 (gmt 0)
A couple of months ago, we tried noindexing and blocking pages via robots.txt. Unfortunately, Google was still listing these pages in their SERPs but without a snippet below the title.
What you describe with the URI-only listings is the default robots.txt behavior. The META (or X-Robots-Tag) NoIndex is a document-level directive. If you've Disallowed the bot from accessing the documents that contain the NoIndex directive, it will never see it; that's why your pages are still showing in the index as URI-only listings.
Remove the robots.txt directives and let the document-level NoIndex do its thing. It does exactly what it says on the tin. I've been using it for years and I've never, ever, seen any of those documents appear in the index - ever.
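To make that concrete, here's a sketch of the conflicting setup and the fix (the /private/ path is just an illustration, not from the original post):

```text
# robots.txt -- DELETE this rule, because it stops the bot from
# fetching the pages, so it never sees the NoIndex directive:
User-agent: *
Disallow: /private/

# Instead, keep the document-level directive in each page's <head>:
<meta name="robots" content="noindex">

# Or, for non-HTML resources, send it as an HTTP response header:
X-Robots-Tag: noindex
```

Once the Disallow is gone, the bot can crawl the pages, read the NoIndex, and drop them from the index.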