Forum Moderators: Robert Charlton & goodroi
We should think about a Googlebot crawl differently from a conventional crawler. First there is URL discovery - just answering "what URLs exist". Those URLs get put into a crawl list, and then Googlebot is set to work through that list. So it's not as if, on each crawl, Googlebot sprawls outward, downloading a page and then immediately following every link on it.
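To make the distinction concrete, here is a minimal sketch of that two-phase model (not Google's actual implementation - the function names, the `known_pages` input, and the `fetch` callback are all hypothetical stand-ins): discovery builds a crawl list first, and fetching then works through that prepared list instead of recursively following links as it goes.

```python
from collections import deque

def discover_urls(known_pages):
    """Phase 1: URL discovery - collect every URL we know exists.
    `known_pages` is a hypothetical stand-in mapping a URL to the
    links previously seen on that page (sitemaps, past crawls, etc.)."""
    frontier = deque()
    seen = set()
    for url, links in known_pages.items():
        for link in [url] + links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return frontier

def crawl(frontier, fetch):
    """Phase 2: work through the prepared crawl list.
    Links found while fetching are NOT followed immediately;
    they would feed a later discovery pass instead."""
    results = []
    while frontier:
        url = frontier.popleft()
        results.append(fetch(url))
    return results

# Tiny usage example with a dummy fetch function:
known = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
}
pages = crawl(discover_urls(known), fetch=lambda u: u)
```

The point of the sketch is the separation: the frontier is fully assembled before any fetching starts, which is why a newly discovered link may sit in the list for a while rather than being crawled the moment it's seen.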