phranque - 12:04 pm on Nov 11, 2012 (gmt 0)
the Disallow: directive matches URL paths from left to right, i.e. as prefixes, and any URL it matches is excluded from crawling by "well-behaved bots".
if you have content you want crawled and indexed on urls that match those paths, you should change your robots.txt file accordingly - for example see the sketch below.
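as a rough illustration, a robots.txt might look something like this (the /private/ and /tmp/ paths are just placeholders, not anything from your site):

    User-agent: *
    Disallow: /private/
    Disallow: /tmp/

here Disallow: /private/ blocks crawling of any URL whose path begins with /private/, e.g. /private/page.html, because the match is a left-to-right prefix match.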
however, excluding a URL from crawling does not prevent the URL itself from being indexed; it only prevents the content at that URL from being indexed (the bare URL can still end up in the index, for example via links pointing to it).
if you don't want either the URL or the content to appear in the index, you must allow crawling of the URL and provide a noindex signal, such as a meta robots noindex element for HTML documents or an X-Robots-Tag HTTP response header for other content types.
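for illustration, the two signals look like this - in the <head> of an HTML document:

    <meta name="robots" content="noindex">

and for other content types (a PDF, an image, etc.) an HTTP response header:

    X-Robots-Tag: noindex

either one only works if the URL is not blocked in robots.txt, because the bot has to be able to fetch the URL before it can see the signal.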