I'd say it's a very interesting idea. Since we're theorizing or "blue-skying it" here, a further extension of our idea occurred to me.
Google could begin with browser data they know they can depend on to a high level of significance. Then they could extend that solid picture to analyze sites from screenshots where there have not been enough Chrome users to rely on the browser data alone.
One area that still doesn't fit this theory for me is the fact that Panda is said to be directly about content quality - even to the degree that we were told what kind of questions were asked about websites at the very beginning of the process. So if this theory holds water, it still needs to be coupled with another kind of content analysis. Using a right-brain measure for what is essentially a left-brain goal doesn't immediately make a lot of sense to me.
Another ingredient in the mix is that they don't need to Panda-analyze "every website" - and we know, in fact, that they didn't. Amit told us that Panda 2.0 went further into the long tail than Panda 1.0 did. So apparently they first looked at the sites that get the most impressions in the search results, and then later went deeper.