From what I've learned, Googlebot does not usually crawl a site by following links the way a human user would. Instead, the crawl team learns about other URLs from pages that have already been indexed. Those URLs are then put into a crawl list, which is prioritized by a complex algorithm. The most common crawl is one where Googlebot is "given its orders" from the start: a list of URLs to crawl on the site.
But I'm guessing that this phenomenon is more correlation than causation.
Matt Cutts has confirmed something like that several times, and it's actually a little stronger than simple correlation. Although many factors affect Googlebot's crawl, the biggest determining factor is PageRank. So it really is causation at work, just not exclusively.
Remember that every URL has its own PageRank score; PR is not something a "site" has as a whole. This means that the internal linking of your pages has a lot to do with how often any particular page is crawled, because that interlinking determines how PR circulates among all the pages of the site. Other factors include things like a page's update history: if a site tends to publish static pages and rarely change them, there's less reason to crawl them frequently.
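To make the "PR circulates through your internal links" idea concrete, here's a minimal sketch of the classic PageRank power iteration on a toy internal link graph. The page names, link structure, damping factor, and iteration count are all made up for illustration; Google's real crawl scheduler is far more complex and uses many more signals.

```python
# Toy PageRank power iteration over a hypothetical site's internal links.
# This is only an illustration of how interlinking spreads PR among pages,
# not a reproduction of Google's actual algorithm or parameters.

DAMPING = 0.85      # standard damping factor from the original PageRank paper
ITERATIONS = 50     # enough for this tiny graph to converge

# Hypothetical site: each page lists the internal pages it links to.
links = {
    "home":     ["about", "blog", "products"],
    "about":    ["home"],
    "blog":     ["home", "products"],
    "products": ["home", "blog"],
    "orphan":   [],   # no other page links to it, and it links nowhere
}

pages = list(links)
n = len(pages)
pr = {p: 1.0 / n for p in pages}   # start with a uniform distribution

for _ in range(ITERATIONS):
    # Base "teleportation" share every page receives.
    new_pr = {p: (1.0 - DAMPING) / n for p in pages}
    for page, outlinks in links.items():
        if outlinks:
            # A page passes its PR in equal shares to the pages it links to.
            share = pr[page] / len(outlinks)
            for target in outlinks:
                new_pr[target] += DAMPING * share
        else:
            # Dangling page: spread its PR evenly across all pages.
            for target in pages:
                new_pr[target] += DAMPING * pr[page] / n
    pr = new_pr

for page, score in sorted(pr.items(), key=lambda kv: -kv[1]):
    print(f"{page:10s} {score:.4f}")
```

Running this, the well-linked pages ("home", then "blog" and "products") end up with far higher scores than "orphan", which only receives the baseline teleportation share. That's the mechanism behind the claim above: pages that your internal linking favors accumulate more PR and, all else being equal, get crawled more often.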