tedster - 4:16 pm on Aug 31, 2012 (gmt 0)
"Disallow" rules in robots.txt do stop googlebot crawling - but as you've discovered Google may still list a URL that they didn't crawl. This happens, for instance, if they know about the through internal or external backlinks and anchor text. Google then constructs a title and snippet for the URL just from references rather than by crawling the page directly.
If you really don't want to see a URL in the Google index... then yes, use a noindex robots meta tag instead of robots.txt rules. And remember to change your robots.txt file so you now ALLOW googlebot to crawl the page. Unless they can crawl it, they'll never read the robots meta tag.
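So the fix looks like this: drop the Disallow rule for that URL from robots.txt, then add the meta tag to the page's <head> section:

    <meta name="robots" content="noindex">

Once googlebot recrawls the page and sees the tag, the URL drops out of the index.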