Just my opinion, but I don't think crawling is a serious cost problem at this point, given their infrastructure's capacity.
I also believe the size of the web in totality is much smaller than any of us anticipate. Matt Cutts has said they could literally download the entire thing in one instantaneous crawl if the end publishers' equipment could handle it.
There just aren't enough people publishing content on the web. That's exactly why Panda is broken. Sure, it may have strangled some of the rubbish, but a lot of good content got caught in the net. If things don't change, it's also going to be far more prohibitive for new publishers to come on the scene.