Back when AV was recovering from its Black Monday mode and started re-indexing one of my sites, it conspicuously indexed all of that site's CSS files. Those pages ranked pretty well for the company's name as a keyword for several weeks. Other than that, I haven't had any CSS indexed.
We've been watching for it, 2_much, and so far no one has been able to show an SE downloading CSS. If they wanted to be really sneaky about it, they could do it without anyone ever realizing it, but I doubt they have.
I've been thinking a lot about spiders and CSS as I use it more and more. With all the options and variety available through CSS, it seems to me that the SEs would have a heck of a time writing anything that could automatically discern abuses from legitimate uses.
I wouldn't want the job of trying to create that strategy. Even in a simple case, like same-colored text -- which tag are you going to look for? How would you be sure, on an automated basis, that the declared color was really displaying against a same-colored background, amidst all the possibilities with absolute and relative positioning?
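Just to show how shallow the obvious check is, here's a minimal sketch (the function names and parsing are my own invention, not anything an SE actually does) of the naive "same color as background" test a spider might try:

```python
import re

def parse_declarations(rule_body):
    """Split a CSS rule body like 'color: #fff; background-color: #fff'
    into a property -> value dict (lowercased, whitespace stripped)."""
    decls = {}
    for part in rule_body.split(";"):
        if ":" in part:
            prop, value = part.split(":", 1)
            decls[prop.strip().lower()] = value.strip().lower()
    return decls

def looks_hidden(rule_body):
    """Flag a rule whose declared text color literally equals its
    declared background color. Crude on purpose."""
    d = parse_declarations(rule_body)
    fg = d.get("color")
    bg = d.get("background-color") or d.get("background")
    return fg is not None and fg == bg

print(looks_hidden("color: #fff; background-color: #fff"))  # True
print(looks_hidden("color: #fff; background-color: #000"))  # False
# Already broken: '#fff' vs 'white' vs 'rgb(255,255,255)' don't
# string-match, and the declared background says nothing about what's
# *actually* behind the text once absolute/relative positioning layers
# elements over different backgrounds. False positives and false
# negatives both ways.
```

Which is exactly the point: the declaration-level check is trivial, but deciding what the rendered page really looks like is not.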
And what about the way Netscape CSS inheritance spazzes out all the time? How could a spider ever account for all that craziness?