Hi there, everyone:
The product pages on my e-commerce website are (by default) available at multiple versions of the URL: a long query-string version and a short version.
For years, I have simply blocked the long query-string URLs via the robots.txt file. (The long query-string URLs contain a "virtual" directory, so I just block that virtual directory.)
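For context, the block looks something like this ("/products/" is just a placeholder here, not my actual virtual directory name):

```
User-agent: *
Disallow: /products/
```

Anything under that path gets excluded from crawling for all bots, while the short URLs stay crawlable.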
But with "trust" being such an important issue after the Panda updates, I wonder if it might be better to unblock those URLs in robots.txt and just let the canonical tag take care of it.
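To be clear about what I mean by the canonical approach: each long query-string page would carry a tag in its <head> pointing at the short URL, something along these lines (domain and path are made up for illustration):

```
<!-- on the long query-string version of the page -->
<link rel="canonical" href="https://www.example.com/widget" />
```

That way Google could crawl both versions but consolidate them onto the short URL, instead of the long version being blocked outright.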
In Webmaster Tools, under Crawl Diagnostics, it lists something like 700 URLs blocked by robots.txt. If that is something Google is measuring, I can't help but think they are using that information for something.