You ain't the only one; Google just wants to keep us on our toes.
Google insists it is adhering to the letter of the robots.txt law; it sure doesn't adhere to the spirit.
Google believes that it is allowed to list a page disallowed by robots.txt because it isn't actually retrieving and indexing the page. All it is doing is listing the URL for a page that it knows is there.
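For anyone following along, the kind of robots.txt rule we're talking about looks something like this (the path is just an example):

User-agent: *
Disallow: /private/

Googlebot will honor that and never fetch anything under /private/, but nothing in it stops Google from showing the bare URL in results.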
JD Morgan has pointed out a few times that the only way you can keep a page out of Google's index is completely non-intuitive -- you have to allow Google to spider the page and find the meta robots noindex tag. In time the page will drop from G's index. So take the disallow out of robots.txt and use the appropriate meta robots tags on each page you don't want in the index.
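To be concrete, the tag that actually gets a page dropped goes in the page's <head>, something like:

<meta name="robots" content="noindex">

(Some people use "noindex,follow" so the links on the page still get crawled.) And remember, Googlebot has to be able to fetch the page to see the tag, which is why the page can't also be disallowed in robots.txt.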
It's clunky, it's stupid, but it does work.
With Google it's the robots.txt mess; Yahoo/Ink has its very own way of handling 301 redirects. Wonder what hoops MSN is going to make us jump through when it launches? Aren't the standards there to make it easy for us to manage a website?