I have mentioned the details in several places, but here's another summary. In this case, it looked like the site was hit because other sites were either reprinting its articles (with permission) or else flat-out scraping them. The site had lost the authority/trust needed to be credited as the original publisher. This fact jumped out during the analysis.
In addition, there was some fluff on the site, written basically to rank. And finally, some canonical problems were creating internal duplicate URLs. Nevertheless, I should emphasize that the site's foundations were and are really solid.
Step 1 - get rid of the fluff
Step 2 - fix the canonical issues
Step 3 - get the content to begin higher on the page (it was sometimes below the fold)
Step 4 - back off the ad load (the pages were overly ad-stuffed)
Step 5 - begin rel="author" mark-up
Step 6 - begin using pubsubhubbub to send "fat pings" to Google whenever something new is published
Step 7 - delay the RSS feed for an hour after publication
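Steps 2 and 5 are both on-page markup changes. Here's a minimal sketch of what that markup looks like - the URLs and author name are placeholders, not the actual site's:

```html
<!-- Step 2: point every duplicate URL variant (tracking parameters,
     print versions, etc.) at one canonical version of the page -->
<link rel="canonical" href="https://example.com/article-title/">

<!-- Step 5: rel="author" on the byline link to the author's profile page;
     at the time, that profile page would in turn link to the author's
     Google+ profile to complete the authorship chain -->
<a rel="author" href="https://example.com/authors/jane-doe/">Jane Doe</a>
```

The canonical tag goes in the `<head>` of each duplicate variant; the rel="author" link goes on the visible byline.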
My gut feeling is that the entire combination of steps probably helped - except I'm not convinced that canonical issues are part of Panda. But the most important steps, IMO, were 5, 6 & 7. They are aimed squarely at regaining credit for the content. The site has some really good authors and they deserve the credit, which they now get, little headshot photos and all ;)
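For step 6, the publisher's side of pubsubhubbub is just a small POST to the hub announcing that the feed has changed; the hub then fetches the feed and delivers the "fat ping" (the full content) to subscribers like Google. A minimal sketch in Python, with placeholder hub and feed URLs - Google's public hub historically lived at pubsubhubbub.appspot.com:

```python
# Sketch of a publisher ping to a pubsubhubbub hub.
# The hub and feed URLs below are illustrative placeholders.
from urllib.parse import urlencode
from urllib.request import Request

def build_publish_ping(hub_url, feed_url):
    """Build the POST request a publisher fires right after publishing."""
    body = urlencode({"hub.mode": "publish", "hub.url": feed_url}).encode()
    return Request(hub_url, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

req = build_publish_ping("https://pubsubhubbub.appspot.com/",
                         "https://example.com/feed.atom")
# Actually sending it is one call: urllib.request.urlopen(req)
```

Hooking this into the CMS's "post published" event is what lets the hub (and so Google) see the content minutes before any scraper pulls it from the feed - which is also why step 7's RSS delay complements it.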