phranque - 11:22 am on Apr 9, 2013 (gmt 0)
This document details how Google handles the robots.txt file:
you'll want to use the Disallow: directive.
The [path] value, if specified, is to be seen relative from the root of the website for which the robots.txt file was fetched (using the same protocol, port number, host and domain names). The path value must start with "/" to designate the root.
this means the crawler matches the url to be requested left-to-right against each Disallow: pattern, starting from the leading / which represents the document root. a pattern matches if it is a prefix of the url's path (plus the query string, if any).
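for example, this minimal robots.txt (the path is hypothetical, just to illustrate the prefix matching) blocks any url whose path starts with /private:

User-agent: *
Disallow: /private

that matches /private, /private/page.html and /privateer.html alike, but not /public/private/, since matching starts at the leading /.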
you'll need to answer these questions before you write a robots.txt file:
do you want to exclude exactly /abcprompt.aspx or every path that starts with /abcprompt.aspx?
do you want to exclude only urls whose query string is exactly the tagid parameter, or any url whose query string contains the tagid parameter anywhere?
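your answers determine the pattern. as a sketch using google's * and $ wildcard extensions (the path and parameter name come from your urls; which behavior you actually want is my assumption):

User-agent: Googlebot
# everything starting with /abcprompt.aspx, with or without a query string:
Disallow: /abcprompt.aspx
# only the exact path and nothing else ($ anchors the end of the url):
Disallow: /abcprompt.aspx$
# any url, on any path, with tagid anywhere in the query string:
Disallow: /*?*tagid=

pick the one line that matches your intent rather than stacking all three.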
and here's the other problem - you can use robots.txt to exclude googlebot from crawling but you can't use it to prevent google from indexing urls it discovers through links from other pages.
if you want to control indexing you will have to allow crawling of the url and provide either a meta robots noindex element in the document head or an X-Robots-Tag HTTP response header with a noindex value.
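for example (generic snippets, not specific to your setup):

in the html document head:
<meta name="robots" content="noindex">

or in the http response headers:
X-Robots-Tag: noindex

remember that googlebot has to be able to crawl the url to see either one, so don't disallow that url in robots.txt at the same time.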
Robots meta tag and X-Robots-Tag HTTP header specifications - Webmasters - Google Developers: