Planet13 - 7:45 pm on Apr 23, 2011 (gmt 0)
Hmmm... according to Google, it looks like the canonical link tag might be the way to go. From the following post at [googlewebmastercentral.blogspot.com...]:
One item which is missing from this list is disallowing crawling of duplicate content with your robots.txt file. We now recommend not blocking access to duplicate content on your website, whether with a robots.txt file or other methods. Instead, use the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. If access to duplicate content is entirely blocked, search engines effectively have to treat those URLs as separate, unique pages since they cannot know that they're actually just different URLs for the same content.
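For context, here's roughly what the switch would look like on my end (the URLs and the sort parameter are just placeholder examples, not my actual setup). What I have now, blocking the duplicate URLs in robots.txt:

User-agent: *
Disallow: /*?sort=

What Google seems to be recommending instead: let the duplicate URLs be crawled, and put a canonical link element in the <head> of each duplicate page pointing at the preferred URL, something like:

<link rel="canonical" href="http://www.example.com/widgets/" />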
Anyone have any opinions before I drop the robots.txt disallow and rely on the canonical tag?