tedster - 1:40 am on Sep 5, 2011 (gmt 0)
It just doesn't read,
Ah, but that's exactly the problem. Under some conditions that are not yet clear, googlebot does request and read disallowed URLs, contrary to what Google states it will do. And the reports seem too common for this to be a one-off technical error.
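To be clear about what "disallowed" means here: a compliant crawler is supposed to check the site's robots.txt before fetching. A minimal sketch with Python's standard-library `urllib.robotparser` (the rules and paths below are made up for illustration, not Google's actual behavior):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, the kind of disallow directive
# under discussion. Paths and user-agent are assumptions.
rules = """
User-agent: Googlebot
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler must not fetch URLs matching a Disallow rule.
print(parser.can_fetch("Googlebot", "http://example.com/private/page.html"))  # False
print(parser.can_fetch("Googlebot", "http://example.com/public/page.html"))   # True
```

The complaint in this thread is that googlebot is fetching URLs for which that check should have returned False.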
It wouldn't be the first time a Google engineer programmed something contrary to someone else's policy at Google. They use a kind of rapid, agile development that seems to allow a lot of autonomy, with QA often coming later. From what I've been reading in the new books published this year, code gets pushed live at about an 80% OK level. And no company can thrive if it insists on 100% QA before going live.