Sounds good... I'll be curious to see if robots.txt + X-Robots-Tag can defeat sitemap.xml!
To reiterate what Lucy24 said: you *cannot* block a page from being crawled in robots.txt *and* rely on noindex to have it removed, because a blocked page's noindex is never seen. If Google finds links to the page, it usually *will* be indexed based on the information in and surrounding those links, even though GoogleBot cannot crawl the page itself.
If you want a page to be removed from the index you *must* allow GoogleBot to crawl it, and then either have noindex on the page, send noindex in an X-Robots-Tag header for the page, *or* serve GoogleBot an error code such as 403 when the page is requested.
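For what it's worth, here's a minimal sketch of those two server-side options using only Python's standard library; the /old-page and /gone-page paths are just hypothetical examples. Remember that either signal is only seen if robots.txt *allows* GoogleBot to fetch the URL.

```python
# Minimal sketch: two ways to get a URL removed from the index,
# assuming GoogleBot is allowed to crawl it in robots.txt.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RemovalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old-page":
            # Option 1: serve the page normally, but send an
            # X-Robots-Tag: noindex header so crawlers drop it.
            body = b"<html><body>This page should drop out of the index.</body></html>"
            self.send_response(200)
            self.send_header("X-Robots-Tag", "noindex")
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/gone-page":
            # Option 2: serve an error code such as 403 when the
            # page is requested.
            self.send_error(403)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("", 8000), RemovalHandler).serve_forever()
```

The same noindex header can of course be set in your server config instead of application code; the point is just that GoogleBot has to be able to request the URL and see one of these responses.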
Also, I'm really not sure I understand why you're explicitly telling GoogleBot how to find pages you don't want indexed. That seems like a "conflicting signal" to me, unless you're just trying to get the pages crawled so the noindex or 403 error is seen, after which you'll remove them from the XML Sitemap.
The "indexing system" is just that, a system [not a person], so I think it's always best to send the clearest message you can.