phranque - 6:48 am on Jul 2, 2013 (gmt 0)
there's nothing in robots.txt that names the googlebot?
that was my next question - not "googlebot" specifically, but whether any User-agent line matches a substring of googlebot's user-agent string.
and none of those exclusions in your robots.txt fragment would necessarily match a /.../review/ subdirectory like the one shown in your access log sample.
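you can check this kind of thing yourself with python's stdlib robots.txt parser. the rules and url below are hypothetical stand-ins, not your actual robots.txt - the point is just that a Disallow only blocks paths it literally prefixes:

```python
# sketch: test whether hypothetical Disallow rules block googlebot
# from a /review/ url. neither rule prefixes /widgets/review/, so
# crawling that path is allowed.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "http://example.com/widgets/review/"))  # True - not blocked
print(rp.can_fetch("Googlebot", "http://example.com/cgi-bin/script"))   # False - blocked
```

swap in your real robots.txt text and the url from your access log to see which rule, if any, is actually matching.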
as has been mentioned numerous times in this thread, the noindex directive is irrelevant when you have excluded googlebot from crawling that url - if googlebot can't fetch the page, it never sees the meta tag.
it's not useful information for your problem statement.
These pages show up in the SERPs from time to time with the "description blocked by robots.txt" statement.
if the description is blocked by robots.txt, googlebot isn't fetching the page at all, so every other meta element - including a noindex - is blocked along with it.
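which is why the usual fix is the opposite of what people expect: to get a url out of the serps via noindex, you have to *let* googlebot crawl it. a hypothetical sketch (paths are made up for illustration):

```
# robots.txt - no Disallow matching the page,
# so googlebot is free to fetch it:
User-agent: *
Disallow: /cgi-bin/

<!-- then, in the page's <head>, the directive googlebot
     can now actually read: -->
<meta name="robots" content="noindex">
```

once googlebot recrawls the page and sees the meta tag, the url can drop out of the index instead of lingering as a "description blocked" entry.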