Just for the record, I have a domain that has been entirely disallowed to robots in its robots.txt since day one (`User-agent: *` / `Disallow: /`), yet G indexes about 90 pages from it (up from just 1 a few months ago) as URL-only results, with "A description for this result is not available because of this site's robots.txt – learn more" as the snippet. Now that's just plain rude!
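For anyone wondering how this happens: a robots.txt Disallow only blocks crawling, not indexing. G can still index a URL it discovers through links elsewhere; it just can't fetch the page, which is exactly why the snippet is missing. A rough sketch of the situation (the header-based alternative is an assumption on my part, based on how noindex is documented to work, and only takes effect if crawling is allowed so the directive can actually be seen):

```
# robots.txt — stops crawling, but NOT indexing of bare URLs
# found via external links:
User-agent: *
Disallow: /

# To keep pages out of the index entirely, crawling has to be
# ALLOWED so the crawler can see a noindex directive, e.g. an
# HTTP response header on every page:
#
#   X-Robots-Tag: noindex, nofollow
```

The counterintuitive part is that the Disallow and the noindex work against each other: as long as robots.txt blocks the fetch, the noindex is never read.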
It is a page-for-page duplicate of part of another domain, intended for a specific audience, with no advertisements included. But oddly enough G obviously knows what it is about, as the pages come up first in the SERPs for as little as a portion of their titles. So my guess is that G is probably also evaluating it as a duplicate of the other domain's pages. I would consider password-protecting it just to keep G out, but that is not really an option in this case.