This is rhetorical, of course :)
Google, I now know, crawls all URLs it finds, whether blocked by robots.txt, noindexed, or nofollowed, and it indexes them all regardless. OK, sometimes they provide no cache or details, yet sometimes I've seen the page's description on the URL.
Thing is, the pages, largely product pages, are many (000s), and the content comes directly from the supplier, so I try to keep them out of the index.
Crawling these URLs endlessly means Google doesn't get to crawl the pages I've actually worked on, and I finally realised why sites I thought had a few hundred pages were indicated as having many (000s).
So, if I block just Googlebot from such pages, does anyone know how Google reacts to that?
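For what it's worth, blocking only Googlebot in robots.txt would look something like this (a sketch; `/products/` is just a placeholder for wherever your supplier pages actually live):

```
# robots.txt - block only Googlebot from the supplier product pages
User-agent: Googlebot
Disallow: /products/

# all other crawlers remain unrestricted
User-agent: *
Disallow:
```

One caveat worth knowing: robots.txt stops crawling, not indexing, so blocked URLs can still show up in results as URL-only entries if they're linked from elsewhere (which may be exactly what you've been seeing). To get pages out of the index, Google has to be allowed to crawl them so it can see a `noindex` robots meta tag or `X-Robots-Tag` header, and only later, once they've dropped out, would you block crawling to save your crawl budget.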