lucy24 - 11:20 pm on Jun 29, 2011 (gmt 0)
Google "outsources" its robots.txt handling. That is, instead of hitting robots.txt at the beginning of each visit and acting accordingly, it's got a separate robot that only reads robots.txt, and at some future time it passes the information along to all the other googlebots.*
The "crawl errors" list is pretty much a black mystery anyway. If you've got a small enough site that it all fits on one screen, you can see "detected" dates ranging back over months. And hiding behind the "linked from" pages will be things like sitemaps from 2008, or pages that themselves haven't existed in years. When the "Linked From" column says "unavailable", you know they've hit rock bottom because they're saying "We have no idea why we believe this page exists, but we're going to keep crawling it and posting it as an error anyway."
* Conversely, Bing seems to have a morbid fascination with robots.txt. They request mine more times in a day than they request all my other files in a week.