phranque - 3:20 pm on Oct 31, 2012 (gmt 0)
robots.txt is about crawling, not indexing.
the Disallow directive means: if the URL's path matches the pattern from left to right (a prefix match), don't request that URL.
if the pattern is just a slash "/" it matches every URL, because the slash is the root directory and every path on the site starts with it.
so "Disallow: /" means don't crawl anything which actually prevents you from providing any indexing control.