Robert_Charlton - 9:20 am on Jul 4, 2013 (gmt 0)
shaunm - "It's a MISTAKE". You definitely do not want the same pages returned as 200 OK under different urls. That is what's called "duplicate content". Our "Hot Topics" section is currently down for maintenance and updating, but I'd suggest looking at Hot Topics [webmasterworld.com] as it is and reading all that's there in the Duplicate Content section.
Regarding the current problem... I'm guessing that you've got a shopping cart on your site, which is going to involve pages served over the https protocol somewhere... and, as aakk9999 suggests, you're using relative or root-relative URLs in your nav links.
In layman's terms, here's how the problem generally happens... [webmasterworld.com...]
A year or two ago one site I work with had problems with https duplicates getting indexed. The origin of the problem was that legitimate https pages in the shopping cart were using the same templates as the rest of the site, which mostly used relative URLs for navigation.
The relative URLs meant that https pages were effectively linking to other pages as https too, so they'd get spidered as https.
When a page whose URL has unintentionally become "https-ified" gets spidered, any relative links on it resolve to https as well. That's how the cancer spreads and the duplicate problem grows....
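To make that concrete with a hypothetical URL: suppose Googlebot has picked up https://www.example.com/widgets/ and that page's nav uses root-relative links...

    <!-- nav link as it appears in the page source -->
    <a href="/contact.html">Contact</a>

    <!-- how a spider resolves that link when the page itself was fetched over https -->
    https://www.example.com/contact.html

The link itself never mentions a protocol, so it simply inherits whatever protocol the current page was requested under... fetch the page as https, and every relative link on it becomes an https URL.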
In my experience, robots.txt isn't going to control this. Sometimes, if you put the pages intended to be https on a secure subdomain... as in secure.example.com ...you can use robots.txt to control Google's access to that subdomain, but that's not going to fully fix the problem. The secure subdomain is necessary if you want robots.txt to control access to the pages that should be secure... there's no other way, I believe, to use robots.txt to distinguish the http protocol from the https protocol... but once the duplicate https URLs are out there in the wild, robots.txt won't fix the problem, because references to the https protocol have spread beyond the pages you want to be secure. (Hope you can follow that... it's a mouthful).
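For what it's worth, the secure-subdomain arrangement works because each hostname gets its own robots.txt. Purely as a sketch, with a hypothetical secure.example.com, the two files might look like this...

    # robots.txt served at secure.example.com/robots.txt - keep spiders off the secure host
    User-agent: *
    Disallow: /

    # robots.txt served at www.example.com/robots.txt - leave the main site open
    User-agent: *
    Disallow:

Again, that only helps if the secure pages really are confined to that subdomain.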
Note that the rel="canonical" link can help fix the problem for Google indexing, but not necessarily for the user. Also, as I remember, you have an IIS content management system which automatically sets the rel="canonical" URL. This can make the problem worse. (We never did discuss how to turn this off, but you should find out, and make those settings manual).
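To illustrate with a hypothetical URL... a hand-set canonical would go in the <head> of both the http and the https copies of a page and point at the one version you want indexed:

    <link rel="canonical" href="http://www.example.com/widgets/blue-widget.html">

The danger with a CMS that sets it automatically is that, if it builds the canonical from whatever URL was actually requested, the https copy ends up declaring the https URL as its own canonical... which just confirms the duplicate rather than consolidating it. I'm assuming that's how yours behaves, but it's worth checking.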
As phranque suggests, 301 redirecting all requests to the proper canonical form of your URLs is the proper way to handle the situation... but you should first change the relative URLs in the secure pages in your CMS to the desired absolute URLs. Having https links scattered throughout the site, on pages that should be http, is not what you want.
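In the nav templates that the secure pages share, that means hard-coding the protocol and host... hypothetical URLs again:

    <!-- before: relative link - inherits https when the template is rendered on a secure page -->
    <a href="/widgets/blue-widget.html">Blue Widget</a>

    <!-- after: absolute link - stays http no matter how the current page was requested -->
    <a href="http://www.example.com/widgets/blue-widget.html">Blue Widget</a>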
I don't know how your CMS would interact with ISAPI Rewrite, but I would use ISAPI Rewrite to do a full "canonicalization" of your site. Read about canonicalization in Hot Topics as well. I'd get a specialist to do the programming.
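Just to give a feel for what that canonicalization might look like... ISAPI_Rewrite 3 understands Apache mod_rewrite-style rules, so the rules would be roughly along these lines. This is only a sketch with made-up hostnames and cart paths (example.com, /cart/, /checkout/)... your programmer will need to adapt it to your actual setup and test it carefully so the genuinely secure pages don't get redirected.

    # sketch only - mod_rewrite-style rules, hypothetical hostnames and paths
    RewriteEngine On

    # host canonicalization: non-www to www
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    # protocol canonicalization: send https requests back to http,
    # except the cart/checkout pages that need to stay secure
    RewriteCond %{HTTPS} ^on$ [NC]
    RewriteCond %{REQUEST_URI} !^/(cart|checkout)/ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

Done properly, every request ends up answered with a 200 at exactly one URL, and everything else 301s to it... which is the end state you're after.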