robots.txt is about crawling, not indexing. The Disallow directive means: if the URL matches the pattern from left to right, don't request that URL. If the pattern is just a slash ("/"), it matches everything, because the slash is the root directory and every URL path starts with it. So "Disallow: /" means "don't crawl anything", which actually prevents you from exercising any on-page indexing control.
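The left-to-right (prefix) matching described above can be sketched with Python's standard-library `urllib.robotparser`. The domain and rule set here are hypothetical, and note that this parser implements plain prefix matching only, not Google's wildcard (`*`, `$`) extensions:

```python
from urllib import robotparser

# Hypothetical robots.txt: block only URLs whose path starts with /private/
rules = """
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# "/private/report.html" matches the pattern left-to-right, so it is blocked.
print(rp.can_fetch("*", "https://example.com/private/report.html"))  # False

# "/public/page.html" does not start with "/private/", so crawling is allowed.
print(rp.can_fetch("*", "https://example.com/public/page.html"))  # True

# "Disallow: /" is a prefix of every path, so it blocks all crawling.
rp_all = robotparser.RobotFileParser()
rp_all.parse("User-agent: *\nDisallow: /".splitlines())
print(rp_all.can_fetch("*", "https://example.com/anything"))  # False
```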
So does this also mean "de-index what you have already indexed"?
As pharanque said, robots.txt controls crawling and not indexing. It does NOT mean "de-index what you have already".
However, without crawling, Google does not know what is on the page, so it has to rely on off-page signals only. Furthermore, off-page signals that come from other pages within your own site are also lost. Therefore, the ranking will almost certainly drop - which is what has happened to you.
But the URL will still be indexed. You can verify this with Google's site: operator, or with a combination of the site: and inurl: operators to check a particular URL.
Hopefully by now you have reinstated the correct robots.txt. Your rankings will almost certainly return to where they were, but you need to give Google time to pick up the new robots.txt and then to re-crawl all the URLs it was forbidden to crawl.
Only after all the blocked URLs are re-crawled should your rankings return to where they were(*). Within a site, one page often supports another, so it is not enough that the blocked page itself is re-crawled; the pages that link to it must be re-crawled too.
So all that you can do now is just wait and monitor.
*Disclaimer: assumes no other algo changes in that period which would affect you
This discussion is related to some Disallow changes I recently added to avoid a faceted-navigation spider trap, so I wanted to chime in with a related question. The disallowed URLs stopped appearing in site: inurl: results within a week. This was for roughly a couple hundred thousand URLs.
From a search engine perspective, is this enough to keep those URLs from counting for things like duplicate-content checks? I've also added a noindex, follow meta tag, fwiw.
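One caveat worth flagging about combining the two approaches: Google only sees a noindex meta tag if it is allowed to crawl the page, so a robots.txt Disallow can prevent the noindex from ever being read. The tag in question looks like the one in this hypothetical page head, and a quick stdlib sketch shows how a crawler would detect it:

```python
from html.parser import HTMLParser

# Hypothetical page head carrying the "noindex, follow" robots meta tag.
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'

class RobotsMetaFinder(HTMLParser):
    """Record the content attribute of the first <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.content = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.content = attrs.get("content")

finder = RobotsMetaFinder()
finder.feed(page)
print(finder.content)  # noindex, follow
```

A crawler that is blocked by robots.txt never fetches the HTML at all, so this directive goes unseen; the URL can then linger in the index based on off-page signals alone.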