A robots.txt Disallow rule stops the pages from being crawled, so their content is never fetched. The URLs can still appear in SERPs as URL-only entries, for example when other sites link to them.
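As a sketch, a minimal robots.txt that blocks crawling of a hypothetical /private/ directory (the path is illustrative) would look like:

```text
# Applies to all crawlers; blocks fetching anything under /private/
User-agent: *
Disallow: /private/
```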
Adding a meta robots noindex tag works the other way: the pages must stay crawlable so Google can see the tag, and once it does, nothing about those pages appears in SERPs, not even the URL.
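The tag goes in the head of each page to be removed from results, along these lines:

```html
<!-- Placed in the <head>; tells crawlers not to index this page -->
<meta name="robots" content="noindex">
```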
A rel="canonical" tag acts as a hint telling Google to index the other URL version instead. Google doesn't have to follow the hint, but usually does.
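For example, a duplicate page pointing at its preferred version (the URL here is a placeholder) would carry:

```html
<!-- Placed in the <head> of the duplicate; hints at the preferred URL -->
<link rel="canonical" href="https://example.com/preferred-page/">
```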
Use just one method. You cannot combine them: if robots.txt blocks a page from being crawled, Google never fetches it and therefore never sees a noindex or canonical tag in its HTML.