Shaddows - 2:20 pm on Sep 5, 2012 (gmt 0)
Index in that context means "consume for ranking".
Technically, every page that is crawled gets indexed in the sense that it gets rated and sharded off into the distributed database.
All the "noindex" directive does is hide something from SERPs. There is an assumption that it stops pages from affecting sitewide factors, but such pages definitely pass PageRank.
This is one of the areas where precise terminology is key. However, the vast majority of casual conversations (and indeed some official resources) tend to be quite lax.
robots.txt stops Google fetching the page (including headers)
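For example, a couple of lines like these (the /private/ path is just illustrative) are enough to stop compliant crawlers fetching a whole directory:

    User-agent: *
    Disallow: /private/

Worth noting: because Google never fetches a robots.txt-blocked page, it can't see any noindex on it, so the bare URL can still show up in SERPs if other pages link to it.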
noindex is a directive that is only actioned once the page is fetched, and keeps a page out of SERPs
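noindex can be delivered either in the page's HTML or in an HTTP response header, e.g.:

    <meta name="robots" content="noindex">
    X-Robots-Tag: noindex

Either way, the crawler has to actually fetch the page before the directive can take effect - which is why combining noindex with a robots.txt block is self-defeating.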
"indexed" means EITHER a page has been consumed by the algo, OR that is is showing in SERPs - and the vast overlap in those two groups means there is plenty of room for confusion.