schuon - 5:10 pm on Sep 1, 2011 (gmt 0)
I wouldn't use robots.txt to block URLs because it's a waste of link power.
Well, if you block a page that gets external link power via robots.txt, that link power is lost. If you use a noindex, follow instead, it can be passed on. In this case, though, I'd assume you don't have that many external links pointing at sort-by "price" URLs.
I used a canonical before to tell Google it's all the same page, and once the duplicate variants were removed from the index, I blocked them with robots.txt. In my experience, stuff that was indexed and then immediately blocked via robots.txt tends to stick around in the index, somewhere deep down...
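To make the three options concrete, here's a rough sketch of what each looks like (the URL pattern /products?sort=price is just an assumption for illustration):

```
# robots.txt — blocks crawling, but link power pointing at these URLs is lost
User-agent: *
Disallow: /products?sort=

<!-- noindex, follow (in the <head> of the sort-by-price page) — keeps the
     page out of the index while still passing link power through its links -->
<meta name="robots" content="noindex, follow">

<!-- canonical (in the <head> of the sort-by-price page) — tells Google the
     sorted variants are all the same page as the unsorted listing -->
<link rel="canonical" href="https://www.example.com/products">
```

Note that the noindex and canonical tags only work if Google can actually crawl the page, which is why the robots.txt block should come last, after the variants have dropped out of the index.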