"All the stuff around the edges is the same on every page. In the middle where I expected to find a bunch of unique content, there are only a few words-- and one of them is 'no'."
Yah. Your average robot should be able to manage that.
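A naive version of that robot is easy to sketch: pull a handful of pages from the same site, call anything that repeats across most of them the "edges", and look at what's left in the middle. A minimal sketch, assuming line-based comparison and an arbitrary 80% threshold, neither of which is how any real crawler necessarily does it:

```python
from collections import Counter

def unique_content(pages, boilerplate_threshold=0.8):
    """Treat lines that appear on most pages of a site as template
    boilerplate; return each page's remaining (unique) lines."""
    split_pages = [p.splitlines() for p in pages]
    line_counts = Counter()
    for lines in split_pages:
        line_counts.update(set(lines))  # count each line once per page
    cutoff = boilerplate_threshold * len(pages)
    boilerplate = {line for line, n in line_counts.items() if n >= cutoff}
    return [[l for l in lines if l not in boilerplate and l.strip()]
            for lines in split_pages]

# Toy example: three pages from one "site" sharing header, nav, footer.
pages = [
    "Header\nNav\nGreat article about widgets\nFooter",
    "Header\nNav\nNo results found\nFooter",
    "Header\nNav\nAnother long unique article\nFooter",
]
for content in unique_content(pages):
    print(content)
```

The second page's remainder is a few words, one of them "no", which is exactly the thin-content smell described above. The catch, of course, is everything that comes next.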
Great concept, and it all sounds fine until you get down to the nuts and bolts: 700,000,000+ websites, each with its own "around the edges", some with no edges, some with multiple edges. We're no longer talking about an "average" bot once we're talking about one that can learn the structure, content, and intent of any one website out of 700,000,000 and then accurately decide, much the way a human would, whether a page has value based solely on the contents of that page alone.
We spend a lot of time (myself included) bashing Google for the things they apparently get wrong. We should spend more time looking at what they get right, and how they did it.