gmb21 - 9:35 pm on Mar 5, 2011 (gmt 0)
Some of the sites that did well out of this update (e.g. Britannica.com), and sites Matt Cutts said are good quality (e.g. the New York Times), are quite heavy on ads. Most pages have a 728x90 leaderboard and a 300x250 rectangle above the fold, in addition to other ads on the page.
Most of the "articles" on Britannica are only 2 or 3 sentences surrounded by ads (this is because you have to subscribe to get the full article). If this update was about thin content or too much ad real estate, then I would have expected Britannica to fall in the rankings.
In the interview referred to at the start of this thread, Matt Cutts claims that the algorithm was based on initial human judgments of which sites were good and which were not, so they would have been careful not to choose factors that would demote those "good" sites:
"I think you look for signals that recreate that same intuition, that same experience that you have as an engineer and that users have. Whenever we look at the most blocked sites, it did match our intuition and experience, but the key is, you also have your experience of the sorts of sites that are going to be adding value for users versus not adding value for users. And we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. And you can really see mathematical reasons …"
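To make the method in that quote concrete, here is a minimal sketch of the kind of seed-labeled classifier being described: human raters put known-good and known-bad sites on either side, and a model learns a boundary from whatever per-site signals are available. Everything below is hypothetical; the feature names, the numbers, and the use of scikit-learn's LogisticRegression are my own illustration, not Google's actual signals or code.

```python
# Toy "quality classifier" in the spirit of the interview quote:
# human-labeled seed sites on each side, a model separates them mathematically.
# All feature names and values are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Hypothetical per-site signals:
# [words_per_page, ad_slots_above_fold, pct_duplicate_content, links_from_trusted_seeds]
seed_features = [
    [1200, 2, 0.05, 40],   # a reference site raters labeled "good"
    [2500, 3, 0.02, 90],   # a major news site raters labeled "good"
    [250,  6, 0.70,  1],   # a thin/scraped site labeled "low quality"
    [180,  8, 0.85,  0],   # another "low quality" seed
]
seed_labels = [1, 1, 0, 0]  # 1 = good side, 0 = low-quality side

clf = LogisticRegression()
clf.fit(seed_features, seed_labels)

# Score an unseen site against the learned decision boundary.
new_site = [[300, 7, 0.60, 2]]
print(clf.predict_proba(new_site)[0][1])  # estimated probability of "good"
```

With a linear model like this, clf.coef_ shows how much weight each signal ends up carrying once the seed labels are fixed, which is roughly what the "mathematical reasons" remark in the quote is getting at.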