I have another gut feeling: the main target for Panda has been an index of already-flagged websites, not the entire web in general. It's just a theory, but it might make sense.
For example, a website has been doing well but carries a flag for some OOP or a shady BL profile, and its ranking has already been diminished by a small, not-so-noticeable amount. This website is flagged and sits in what I'll call the yellow zone (green -> yellow -> red) of trust.
So this index of websites in the yellow-to-red zones is the one that gets run through Panda over and over again, and once you get in the zone, you are stuck there.
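The flag/zone idea above could be sketched as a toy model. To be clear, this is purely hypothetical, none of it is Google's actual code: the zone names, thresholds, and score decay are all invented just to illustrate the theory that only flagged sites get re-scored on each Panda iteration.

```python
# Hypothetical toy model of the "flagged index" theory.
# All names, thresholds, and numbers here are made up for illustration.

GREEN, YELLOW, RED = "green", "yellow", "red"

def zone(flags):
    """Map the number of accumulated flags to a trust zone (invented thresholds)."""
    if flags == 0:
        return GREEN
    return YELLOW if flags <= 2 else RED

def panda_iteration(sites):
    """Re-score only sites already in the yellow/red index; green sites are skipped."""
    for site in sites:
        if zone(site["flags"]) == GREEN:
            continue  # never enters the re-evaluation index, so rankings stay put
        # flagged sites get re-scored every iteration and tend to stay stuck
        site["score"] *= 0.9

sites = [
    {"name": "topdog.example",  "flags": 0, "score": 100.0},
    {"name": "flagged.example", "flags": 2, "score": 100.0},
]
for _ in range(3):  # three Panda iterations
    panda_iteration(sites)

print(sites[0]["score"])            # top dog untouched: 100.0
print(round(sites[1]["score"], 1))  # flagged site keeps sliding: 72.9
```

Under this toy model you get exactly the pattern described: unflagged "top dogs" never move, while anything stuck in the yellow/red index bleeds a little more on every run.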
It just looks and feels like it. Notice that in the top 20 for a specific term, 2-3 websites will be bouncing in and out with each Panda iteration, but the top dogs, even if poorly managed, will stay put, since they haven't changed anything in ages and probably have never done anything to trigger a flag, even if their content is crap, which is more than obvious when you compare them against each other.
Panda just does not treat/rank every website equally, that's the reality. I was thinking about authority, but it still makes no sense; it has to be some flag which guides the algo. Think about it for a sec: if Panda was run the way they claimed, why would only about 11.8% of queries be affected?! And that was the main introductory update. The entire web should have been shuffled upside down, but no. Maybe during their internal testing they saw the real mess, so they decided to affect only certain types of websites.
This is just a theory, but it boggles my mind.