While Google won't crawl or index the content blocked by robots.txt, we might still find and index a disallowed URL from other places on the web. As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results.
Learn about robots.txt files [support.google.com]
Blocking crawling in robots.txt does not prevent a page from being indexed. A distinction should be made here between "page" and "URL", as well as between "in the index" and "displayed in the SERPs".
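To make the crawling side of that distinction concrete, here is a minimal sketch using Python's standard-library robots.txt parser. The rules and URLs are hypothetical; the point is that `can_fetch` only tells you whether a crawler may *fetch* a URL — a disallowed URL can still end up in the index via external links.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks /private/ for all user agents
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Googlebot may not crawl this URL, but the URL itself can still be indexed
# (shown in the SERPs without a description) if other sites link to it.
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public.html"))        # True
```

Note that to keep a page out of the index entirely, the usual approach is a `noindex` robots meta tag or `X-Robots-Tag` header — which only works if the page is *not* blocked in robots.txt, since Google must crawl the page to see the tag.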
"A description for this result is not available because of this site's robots.txt"
Why not explicitly allow Google's AdsBot to crawl a couple of pages and run a test AdWords campaign for a few days, then check whether the pages are in the index (showing something other than "A description for this result is not available because of this site's robots.txt")? Since these pages have been blocked for a long time, you might want to give them more than a few days to see if they get indexed. Please keep us posted on what happens. This is a very intriguing issue.
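A sketch of what that robots.txt might look like — the paths here are placeholders, not the poster's actual URLs. Note that AdsBot-Google must be named explicitly, since it ignores rules addressed to the generic `*` user agent:

```
User-agent: *
Disallow: /landing/

# AdsBot-Google does not obey the "*" group above,
# so it needs its own explicit rules.
User-agent: AdsBot-Google
Allow: /landing/test-page.html
Disallow: /landing/
```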