It seems pretty straightforward to me. Regular Googlebot respects robots.txt directives. If robots.txt says "don't crawl this URL," then Googlebot never fetches the page, so it will never see any robots meta tag or X-Robots-Tag header associated with that URL.
That's a straightforward technical reality: if you want a robots meta tag or X-Robots-Tag header to be seen, you have to let the URL be crawled. The essence is this:
robots.txt is about crawling
robots meta tag is about indexing
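
To make the distinction concrete, here's a minimal sketch (the `/private/` path is just a hypothetical example):

```
# robots.txt — controls CRAWLING.
# Googlebot will not fetch anything under /private/, so it can
# never see a meta tag or header on those pages.
User-agent: *
Disallow: /private/

# To keep a page out of the INDEX, you must allow it to be crawled
# and then serve one of these on the page itself:
#
#   an HTML meta tag in the <head>:
#     <meta name="robots" content="noindex">
#
#   or an HTTP response header (works for non-HTML files like PDFs):
#     X-Robots-Tag: noindex
```

Note the trap this implies: putting `noindex` on a page that robots.txt also blocks accomplishes nothing, because the directive is never read.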