aakk9999 - 11:58 am on Jul 1, 2013 (gmt 0)
There is one thing I noticed some time ago: if robots.txt includes a pattern that matches only at the very end of a long-ish URL, then sometimes the robots.txt exclusion does not work.
I noticed this when I blocked some URLs based on parameters that sat at the end of long-ish URLs. When I changed the blocking pattern, the robots.txt exclusion worked.
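For anyone wanting to check their own patterns, here is a minimal sketch of Google-style robots.txt pattern matching ("*" as a wildcard, trailing "$" as an end-of-URL anchor, plain prefix match otherwise). The patterns and URLs below are hypothetical examples, not from the thread, and this is a simplified model rather than Googlebot's actual implementation:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Simplified Google-style robots.txt matching:
    '*' matches any run of characters, a trailing '$' anchors the
    pattern to the end of the URL; otherwise it is a prefix match."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if not anchored:
        regex += ".*"  # prefix match: anything may follow the pattern
    return re.fullmatch(regex, path) is not None

# Hypothetical parameter pattern at the end of a long-ish URL:
print(robots_pattern_matches("/*?sort=price", "/shop/cat/sub/item-list?sort=price"))            # True
print(robots_pattern_matches("/*?sort=price$", "/shop/cat/sub/item-list?sort=price&page=2"))    # False: '$' anchors to URL end
```

A trailing "$" on a parameter pattern is one common way such rules silently stop matching once anything is appended to the URL, which could explain exclusions working only after the pattern is changed.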
@Convergence, on another note - even though Googlebot requested these URLs, from what I understood from your post, it still did not show the title/description from the fetched page, i.e. the SERPs showed the URL and Google's generated placeholder, "A description for this result is not available..etc"
So I wonder why it fetched the pages if it did not then peek into them.
If you search for a unique sentence from one of these pages, do they show in SERPs?