robots.txt will stop the bot from crawling the content, but it won't prevent Google from including those URLs in its index.
Depending on the details of your situation, a better solution may be one of the following:
- 301 redirect those requests to the canonical URLs
- Add a meta robots noindex tag to the documents served from non-canonical URLs
- Use a `link rel="canonical"` element pointing at the canonical URL
- Use the ignore-parameters feature in GWT, if appropriate
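For the noindex and canonical options above, the markup looks something like this (the `https://example.com/page` URL is just a placeholder for your canonical URL):

```html
<!-- noindex: served on pages at non-canonical URLs;
     tells Google not to keep this copy in the index -->
<meta name="robots" content="noindex">

<!-- rel=canonical: served on duplicate pages; points
     Google at the version you want indexed -->
<link rel="canonical" href="https://example.com/page">
```

Note that for either tag to be seen, the non-canonical URLs must remain crawlable, so don't combine these with a robots.txt block.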
Then you should track down where Google discovered those non-canonical URLs in the first place, and if that source is under your control, fix the problem there.