So, I want to exclude all those tag URLs from being crawled and indexed by Google. I just want to exclude everything that contains '?tagid='. How do I do that? I see that I could block every URL that has '?' in it through robots.txt, but I am concerned that this might also block other important pages.
The [path] value, if specified, is to be seen relative from the root of the website for which the robots.txt file was fetched (using the same protocol, port number, host and domain names). The path value must start with "/" to designate the root.
This means the crawler matches the URL to be requested from left to right, starting from the leading /, which is the document root directory.
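For example, a minimal rule like the following (the /private path is just an illustration, not one of your URLs) blocks every URL whose path starts with /private, such as /private.html or /private/archive/, because the match is a simple left-to-right prefix comparison:

    User-agent: *
    Disallow: /private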
You'll need to answer these questions before you write a robots.txt file: Do you want to exclude exactly /abcprompt.aspx, or all paths? Do you want to exclude only URLs whose single parameter is tagid, or any query string that contains the tagid parameter? See the sketch below for how the rule changes with each answer.
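Depending on those answers, the Disallow pattern looks different. These are sketches, not a finished file: you would normally keep only one of them, the * wildcard is an extension supported by Googlebot rather than part of the original standard, and /abcprompt.aspx is taken from your example.

    User-agent: Googlebot
    # only /abcprompt.aspx, and only when tagid is the first parameter
    Disallow: /abcprompt.aspx?tagid=
    # any path, when the query string starts with tagid=
    Disallow: /*?tagid=
    # any path, with tagid= anywhere in the query string
    Disallow: /*?*tagid=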
And here's the other problem: you can use robots.txt to exclude Googlebot from crawling, but you can't use it to prevent Google from indexing any URLs it discovers. If you want to control indexing, you have to allow crawling of the URL and provide either a meta robots noindex element in the document head or an X-Robots-Tag HTTP response header with a noindex value.
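For reference, those two options look like this, either in the HTML of each tagid page or in the HTTP response that serves it:

    <meta name="robots" content="noindex">

    X-Robots-Tag: noindex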
That looks technically correct as long as the string "tagid" doesn't appear in any URLs other than the ones you want to disallow crawling for. If you write "Disallow: /*?tagid", including the "?" limits the rule to query strings, which might be even safer.
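To make the difference concrete, here is a sketch with two hypothetical URLs showing what each variant would catch:

    # /help/tagid-guide.html    -> matched by "Disallow: /*tagid" but NOT by "Disallow: /*?tagid"
    # /abcprompt.aspx?tagid=12  -> matched by both rules
    User-agent: Googlebot
    Disallow: /*?tagid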
Another step you can take is to use your Webmaster Tools account to tell Google to ignore the "tagid" parameter. Look under the Configuration > URL Parameters section.