We should think about a Googlebot crawl differently from a conventional crawl. First there is URL discovery, which simply establishes what URLs exist. Those URLs go into a crawl list, and Googlebot is then set to work through that list. So it's not the case that, on each crawl, Googlebot sprawls out, downloading a page and then following every link on it.
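A rough sketch of the difference, using a hypothetical in-memory link graph in place of real pages (all URLs and function names here are illustrative, not anything Google has published): discovery walks links only to build the list of known URLs, and the crawl pass then fetches each listed URL independently without following anything new.

```python
from collections import deque

# Hypothetical site link graph standing in for real pages.
LINK_GRAPH = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/blog/post-2"],
    "/blog/post-1": ["/blog"],
    "/blog/post-2": ["/blog", "/about"],
}

def discover_urls(seed):
    """Phase 1: URL discovery - learn which URLs exist."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        url = queue.popleft()
        for link in LINK_GRAPH.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return sorted(seen)  # this becomes the crawl list

def crawl(crawl_list):
    """Phase 2: fetch each URL on the list independently.
    No new URLs are followed during this pass."""
    return {url: f"fetched {url}" for url in crawl_list}

crawl_list = discover_urls("/")
results = crawl(crawl_list)
```

The point of the split is that the second pass can be scheduled and prioritized per URL, rather than being driven by whatever links happen to appear on the last page fetched.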