I find that when I combine the site: operator with the minus operator, certain parts of certain pages appear to be ignored, and I wonder whether this has anything to do with Google selectively indexing, searching and/or ranking content. Consider the query:
site:webmasterworld.com -Welcome
The word "Welcome" appears in the header on practically every page here, so in theory that query should return nothing. Yet many of the pages it does return are what you might call 'thin', or pages you'd expect to be visited rarely, if at all -- not to mention the tens of thousands of printer-friendly pages, one for every single post (why aren't those blocked via robots.txt?).
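For what it's worth, blocking those printer-friendly copies would only take a line or two in robots.txt. A sketch, assuming the print versions live under a /print/ path or carry a print parameter -- the actual URL patterns here may well differ:

```
User-agent: *
# Block printer-friendly duplicates (hypothetical URL patterns)
Disallow: /print/
Disallow: /*?print=
```

That would at least keep the duplicate print pages out of the crawl, though it wouldn't by itself remove ones already indexed.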
One of my sites isn't doing too well in Google at the moment. When I do a site query on the domain and use the minus operator to exclude a word from the footer, about 70% of its pages are returned. That's a significant number. On another site, which is doing better, a similar query returns zero results.
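For anyone who wants to run the same check on their own site, the pair of queries looks like this -- example.com and "sitemap" are just stand-ins for your own domain and a word that appears in your footer sitewide:

```
site:example.com            (total indexed pages)
site:example.com -sitemap   (pages returned despite excluding the footer word)
```

Comparing the two result counts gives you a rough percentage like the 70% figure above, with the usual caveat that site: result counts are estimates.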
Boilerplate page elements such as the "Welcome" header here can, I suppose, be safely ignored, and dropping them probably shrinks the index a little. But what might Google's selection of these ignored elements, and the pages returned for a query such as the one above, tell us about rankings post-Panda?