There is a Google patent about Automatic taxonomy generation in search results using phrases [patft.uspto.gov] that goes into some depth on the way Google engineers classify things. As with any patent, particular parts of it may or may not be in play - but it gives a window into the way they think.
We also have a discussion about this possibility - Google & Traffic Shaping [webmasterworld.com].
My own feeling is that Google creates and maintains taxonomies dynamically - for types of search queries, for types of users, and for types of pages (the newly discussed document classifier system). They are constantly working that system to match query terms against the pages that fit each taxonomy best - as measured partly by click-stream data. If a site's taxonomy assignment keeps shifting - that is, if Google's system is struggling to find the right taxonomy for it - that's when traffic quality tends to oscillate.
However, that theory doesn't fully explain why traffic quality would differ when rankings appear to be the same. My best guess is that it's the "user type" taxonomy that changed - and that taxonomy might be based on geography, degree of technical savvy, native language - or lots of other possibilities.
So unless you can run the search as a completely different kind of user, you would have trouble seeing the ranking changes yourself.
It's a lot of guesswork, based on studying patents. I haven't seen any other discussion online - just the ones we're having here with members like Shaddows and backdraft7.