tedster - 4:56 pm on Mar 29, 2011 (gmt 0)
Hello Aaron, and welcome to the forums.
My take is that the algorithm is much more complex than a simple "doing X will hurt a page by N%." Sharing more than they already have (by describing the training set in this interview) would give away not only the specifics of what is being measured right now (and that will evolve) but also exactly how their processes work.
Google has already given webmasters a lot more detail than they did with other major updates, and more than any other search engine ever has. And even though the focus is on what can hurt a site, the document classifier is also designed to identify some sites as high quality or mixed quality.
"we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. And you can really see mathematical reasons...