phranque - 5:48 am on Mar 29, 2012 (gmt 0)
the problem with the robots exclusion protocol is that there is no way to specify "don't crawl this url AND don't index this url." Disallow only stops crawling; a disallowed url can still end up in the index if other pages link to it.
it has been suggested before, and it would be very simple to add a Noindex directive to robots.txt that uses the same syntax as Disallow, as sketched below.
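something like this, assuming the experimental directive mirrors the Disallow syntax (the path here is just a placeholder):

    User-agent: *
    Disallow: /private/
    Noindex: /private/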
that crabby guy set up a test for this, and from 2008 through mid-2009 googlebot respected this experimental/undocumented directive.
so we know they have it in them to do it if they want...
i think the answer is to allow crawling but respond to bot requests with an "X-Robots-Tag: noindex" header and a "[Not Provided]" payload in the js/css file.
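on apache, for example, mod_headers can attach that header to every js/css response. a minimal sketch, assuming mod_headers is enabled:

    <FilesMatch "\.(js|css)$">
        Header set X-Robots-Tag "noindex"
    </FilesMatch>

a bot fetching one of those files would then see something like:

    HTTP/1.1 200 OK
    Content-Type: application/javascript
    X-Robots-Tag: noindex

    /* [Not Provided] */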