I've been able to get feedback from a LOT of small sites - my own, plus a sampling from approx. 2,000 webmasters & developers on a private forum. One benefit of small sites is that it's often easier to isolate individual factors than it is on larger ones.
The more I look at 'actuals' - self-reported 'actuals' included - the more I'm beginning to develop a "user experience" picture of Panda, similar to what Rand Fishkin has been promulgating (http://www.seomoz.org/blog/how-googles-panda-update-changed-seo-best-practices-forever-whiteboard-friday).
This is why there seems to be such a quandary for those who isolate certain factors and change them (removing 'low quality' pages, for instance), yet don't see any corresponding improvement.
Consider this: no change is 'registered' until the next Panda iteration, so no one gets immediate feedback. And if the underlying 'measuring stick' is "user experience", most individual changes aren't going to move that needle quickly in any meaningful way.
If "user experience" is algorithmically expressed as some summation of bounce rate, time on site, various social signals, etc., combined with some 'universal user experience' factors - content "quality" (algorithmically expressed through length, LSI-like relevance scores, etc.), PLUS the 'improved' analysis of backlinks, then it will take a combination of the RIGHT changes to turn that scoring around, and the time for those metrics to accumulate.