A few weeks ago I noticed that Google Webmaster Tools (WMT) reports duplicate titles and descriptions for URLs that are blocked by robots.txt. I have verified via the "Test robots.txt" feature in WMT that the robots.txt directive is constructed correctly and that these URLs are disallowed.
However, it seems Google has crawled these pages anyway; otherwise, how could it know the title elements and meta description tags of these URLs in order to report them as duplicates in WMT?
These URLs are product searches based on user-entered dates, so the number of permutations is effectively endless. I am now concerned that crawling all these URLs may affect my crawl budget and, ultimately, the site's ranking.
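For reference, the rule in place is of the following form (the path and parameter names here are illustrative, not my actual URL structure):

```
User-agent: *
# Block parameterized date-search result pages, e.g. /search?date=2014-05-01
Disallow: /search
```

As noted above, WMT's "Test robots.txt" tool confirms that the date-search URLs match this Disallow rule.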