Some search engines will list a link to a disallowed file if they find such a link elsewhere.
In the former case, you may need to block these bad bots using .htaccess, browscap.ini, a firewall, or whatever is available to you.
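If your server is Apache and mod_rewrite is available, something along these lines in .htaccess can turn away a misbehaving spider that at least identifies itself with a recognizable User-Agent string (the bot names below are made-up placeholders, not specific real bots):

    RewriteEngine On
    # Return 403 Forbidden to requests whose User-Agent starts with
    # either placeholder name (case-insensitive)
    RewriteCond %{HTTP_USER_AGENT} ^BadSpider [NC,OR]
    RewriteCond %{HTTP_USER_AGENT} ^EvilHarvester [NC]
    RewriteRule .* - [F,L]

Of course, a truly bad bot can simply fake its User-Agent, which is why a firewall or IP-based blocking may be the better tool in stubborn cases.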
In the latter case, even though the SE spider does not request the Disallowed file, it may still find the URL in links on your site, in links on other sites, or even in a server log or a collection of user bookmarks unintentionally left on-line somewhere. It will therefore list the URL without a title or description, but sometimes with the link text found on the page that links to it.
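As a concrete illustration (the path and domain here are made up), suppose your robots.txt contains:

    User-agent: *
    Disallow: /private/

A compliant spider will never fetch anything under /private/, but if it finds a link to http://www.example.com/private/report.html on some other page, it may still list that bare URL in its results, using the anchor text of the link in place of a title.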
I have previously argued that this flies in the face of the intent of the robots exclusion standard, if not its literal wording. However, it depends on whether you define the word "index" to mean "fetch a page and parse it" or "include it in our index". Some SEs won't mention a file that's disallowed with robots.txt, but some will. So I've learned: "that's life, get over it, find a work-around, and move on..."
About the only thing I know of to stop these search engines from listing the URL of a private page (without cloaking) is to link to your "private" pages only through another "linking page" that meets the following criteria: