TheMadScientist - 12:49 am on Feb 21, 2010 (gmt 0)
Clearly disallowed in Robots.txt, each page has <meta name="robots" content="noindex, noarchive">
When you disallow a page in robots.txt, it can still be indexed as a URL-only result, because the bot can't access the page to see your noindex,noarchive tag. This is standard procedure for search engines, not only Bing...
It's not Bing's fault: you disallowed the page in robots.txt (which they obviously obeyed), so they never saw your noindex tag. They handled it exactly as they should. They did not spider the page, which is what a Disallow in robots.txt tells them to do. Disallow does not mean 'noindex'. They're two different things, and one cannot be used in conjunction with, or as a replacement for, the other...
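To illustrate the difference (the path here is made up): a Disallow rule only stops crawling, so the engine may still list the bare URL if other sites link to it:

  User-agent: *
  Disallow: /members/

For the noindex tag to do anything, the page must NOT be disallowed, so the bot can actually fetch it and read:

  <meta name="robots" content="noindex, noarchive">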
A Disallow in robots.txt says: do not open the URL, do not access the URL, the URL is 'off-limits'. It does NOT say: make no reference to the URL in the results people see. And the more links there are pointing to the Disallowed URL, the more likely it is to be shown as a URL-only result, because the bot cannot access the page to see whether it's a content-rich page visitors expect to see in the results or garbage, so they have to rely completely on links to the disallowed page to make any type of determination.
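So if the goal is to keep these pages out of the results entirely, remove or narrow the Disallow so the bot can crawl them and see the noindex tag. Using the same made-up path as above:

  User-agent: *
  # Disallow: /members/   <- remove this rule so the pages can be crawled

Once the pages are re-crawled and the noindex is seen, they should drop out of the index. Some engines also honor the equivalent HTTP header (X-Robots-Tag: noindex, noarchive), which is handy for non-HTML files, but support varies by engine, so check their documentation...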