I must admit I had previously thought that robots.txt stopped Google crawling the 'disallowed' pages.
It does. robots.txt stops Google from crawling the disallowed pages, but it does not stop Google from indexing their URLs. What you are seeing is the result of Google discovering the URL despite the robots.txt block: it is a URI-only entry, and it usually only surfaces through specific search queries such as site:example.com. This is one of the reasons why I feel robots.txt is not the best option for preventing URIs from getting indexed. I say URIs because in this instance that is all it is, a URI-only entry with no crawled content behind it. I've seen sites with tens of thousands of them when performing site: searches.
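For illustration (a hypothetical path on example.com), a rule like this blocks crawling of the content, but the URL itself can still be indexed once Google finds links pointing at it:

```
# robots.txt (hypothetical example)
User-agent: *
Disallow: /private/
```

Because Googlebot never fetches anything under /private/, Google only knows the URL and whatever anchor text points at it, which is why it shows up as a bare, title-less entry in site:example.com results.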
The document in this example contained the noindex, nofollow directive. Unfortunately, the page was also disallowed via robots.txt. A Disallow in robots.txt effectively overrides anything at the page level, because Googlebot never fetches the page and so never sees the page-level directives. In this case the robots.txt entry needs to be removed so Googlebot can reach the page in question and see the noindex, nofollow directive.
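A minimal sketch of that fix, assuming a hypothetical page at /private/page.html: drop (or narrow) the Disallow so Googlebot can fetch the page again, and keep the page-level directive in place:

```
# robots.txt - crawling allowed again (hypothetical)
User-agent: *
Disallow:
```

```html
<!-- in the <head> of /private/page.html -->
<meta name="robots" content="noindex, nofollow">
```

Once Googlebot can fetch the page and see the noindex, the URI-only entry will drop out of the index over time; for non-HTML documents the equivalent X-Robots-Tag: noindex, nofollow HTTP response header works the same way.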