AlyssaS - 8:55 pm on Feb 12, 2011 (gmt 0)
Well, when they announced the scraper update in Jan, they mentioned the following:
The new classifier is better at detecting spam on individual web pages, e.g., repeated spammy words—the sort of phrases you tend to see in junky, automated, self-promoting blog comments
So - if they detect any of those spammy words on a page, they may assume that all the other links on that page are spam too. Perhaps they also run a filter based on how many links there are on the page.
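Just to make the guess concrete, here's a toy sketch of that kind of heuristic - this is purely my illustration, not Google's actual classifier, and the phrase list and thresholds are invented:

```python
# Toy sketch of the hypothesized heuristic -- NOT Google's real classifier.
# Phrase list and thresholds below are made up purely for illustration.
import re

SPAMMY_PHRASES = ["buy cheap", "click here now", "free download"]  # hypothetical
MAX_LINKS = 100       # hypothetical link-count cutoff
MAX_SPAM_HITS = 3     # hypothetical repeated-phrase cutoff

def page_looks_spammy(html: str) -> bool:
    """Flag a page if spammy phrases repeat or it carries too many links."""
    lowered = html.lower()
    spam_hits = sum(lowered.count(p) for p in SPAMMY_PHRASES)
    link_count = len(re.findall(r"<a\s", lowered))
    return spam_hits >= MAX_SPAM_HITS or link_count > MAX_LINKS

def classify_links(html: str, links: list[str]) -> dict[str, bool]:
    """If the page is flagged, every outbound link gets tarred as spam too."""
    spam = page_looks_spammy(html)
    return {link: spam for link in links}
```

The point of the sketch is the second function: one flagged page dooms every link on it, which would explain the paid-links story below.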
It lines up with what the NYT article said about the blog owner they interviewed - I think he had an entire page full of links he was being paid for, and perhaps one of those links carried the spammy words and doomed the whole lot.