So this also means "de-index what you have already", right?
As pharanque said, robots.txt controls crawling, not indexing. Blocking a URL there does NOT mean "de-index what you have already".
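To illustrate, a robots.txt like the following (example.com is just a placeholder domain) stops compliant crawlers from fetching any page, but it does not remove URLs that are already in Google's index:

    # Blocks ALL crawling of the site for every user agent.
    # It does NOT remove already-indexed URLs from Google.
    User-agent: *
    Disallow: /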
However, without crawling, Google does not know what is on the page, so it has to rely on off-page signals only. And since pages within your site are blocked as well, the off-page signals coming from your own internal links are lost too. Therefore, the rankings will almost certainly drop, which is what has happened to you.
But the URL will still be indexed. You can verify this with Google's site: operator, or with a combination of the site: and inurl: operators to check a particular URL.
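For example, a query like the one below (example.com and blocked-page are placeholders for your actual domain and URL path) will show whether the URL is still in the index; blocked-but-indexed URLs typically appear without a description, with a note that the description is not available because of robots.txt:

    site:example.com inurl:blocked-page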
Hopefully by now you have reinstated the correct robots.txt. Your rankings will almost certainly return to where they were, but you need to give Google time to pick up the new robots.txt and then to re-crawl all the URLs it was forbidden to crawl.
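For reference, a robots.txt that lets crawlers fetch everything again can be as simple as this (a minimal sketch; adjust it if you have specific directories you genuinely want to keep out of the crawl):

    # An empty Disallow means nothing is blocked; all crawling is allowed.
    User-agent: *
    Disallow: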
Only after all the blocked URLs have been re-crawled should your rankings return to where they were(*). Within a site, one page often supports another, so it is not enough for the blocked page itself to be re-crawled; the pages that link to it must be re-crawled too.
So all you can do now is wait and monitor.
(*) Disclaimer: this assumes no other algorithm changes during that period that would affect you.