RegDCP - 4:05 pm on Sep 18, 2012 (gmt 0)
We should think about a Googlebot crawl differently from a conventional crawler's. First there is URL discovery - just "what URLs exist." Those URLs get put into a crawl list, and then Googlebot is set to work through that list. So it's not as though, on each crawl, Googlebot sprawls outward, downloading a page and then following every link on the page.
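Purely as an illustration of that two-phase model (the toy link graph and function names here are made up for the example, not anything Google has published): discovery builds the list of known URLs first, and the crawl pass then works through that list without branching out to new links as it goes.

```python
from collections import deque

# Toy link graph standing in for pages and the links on them.
LINKS = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/"],
    "/c": [],
}

def discover_urls(seeds):
    """Phase 1: URL discovery - just collect 'what URLs exist'."""
    seen, queue = set(), deque(seeds)
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        queue.extend(LINKS.get(url, []))
    return seen

def crawl(crawl_list):
    """Phase 2: work through the pre-built list; no link-following here."""
    return [("fetched", url) for url in sorted(crawl_list)]

print(crawl(discover_urls(["/"])))
```

The point of the split is that phase 2 never discovers anything - it only downloads what phase 1 already put on the list.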
How did you come by this information, tedster?