Forum Moderators: Robert Charlton & goodroi
No idea if they do this, but they should. No doubt they mine all the words and phrases searched for, and they probably also know how many results each search returns.
So do they sort for the most frequently searched-for terms that return too few results, then point their spider at pages relevant to those words (via anchor text, words in the URL, etc.)?
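To make the idea concrete, here's a minimal sketch of that kind of demand-driven prioritization. Everything here is hypothetical: the query stats, URLs, and the simple "searches divided by results" gap score are all invented for illustration, not anything Google has confirmed doing.

```python
# Hypothetical sketch: rank crawl candidates by "query demand vs. result supply".
# All names and numbers are made up for illustration.
import heapq

# (query, monthly searches, results currently in the index) -- invented figures
query_stats = [
    ("blue widget reviews", 50_000, 120),
    ("python tutorial", 400_000, 2_000_000),
    ("antique sextant repair", 9_000, 40),
]

def demand_gap(searches, results):
    """Higher when many people search but the index has little to show them."""
    return searches / (results + 1)

# Candidate pages with the anchor/URL text the spider saw pointing at them
candidates = [
    ("http://example.com/widgets", "blue widget reviews here"),
    ("http://example.com/py", "learn python tutorial"),
    ("http://example.com/sextants", "antique sextant repair guide"),
]

def score(anchor_text):
    """Sum the demand gap of every underserved query the anchor text matches."""
    return sum(
        demand_gap(searches, results)
        for query, searches, results in query_stats
        if all(word in anchor_text for word in query.split())
    )

# Crawl frontier as a max-priority queue (heapq is a min-heap, so negate)
frontier = [(-score(text), url) for url, text in candidates]
heapq.heapify(frontier)
_, first = heapq.heappop(frontier)
print(first)  # the page relevant to the most underserved queries
```

With these toy numbers the widgets page comes out first, since "blue widget reviews" has heavy search volume against a thin set of results.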
Any other patterns anyone noticed to the order of their spidering?
The same is true of Google: 95% of searches will find 5% of the sites in the index. Put another way, if Google crawls just 5% of the web, it has updated 95% of the index that is visible to the public. Those are pretty good odds, and they explain Google's rapid adoption of "freshbot" two years ago.
There are many theories on how a good bot should crawl a site, and in what order, and there are some good docs on the web:
[dbpubs.stanford.edu:8090...]
However, given Google's massive experience with surfer behavior and the tracking data from the toolbar, we cannot even remotely know how that has led Googlebot to be programmed.
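One classic crawl-ordering heuristic from the academic literature is "best-first" crawling: pull the URL with the highest importance estimate (e.g. known in-link count) off the frontier first, updating estimates as new links are discovered. The toy link graph below is invented, and this is only a sketch of the general technique, not of how Googlebot actually works.

```python
# Hedged sketch of best-first crawl ordering: always visit the frontier URL
# with the most in-links seen so far. Link graph is a toy example.
import heapq

links = {  # page -> pages it links to
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com", "d.com"],
    "d.com": [],
}

def crawl(seed):
    inlinks = {seed: 0}        # in-link counts discovered so far
    seen, order, counter = set(), [], 0
    frontier = [(0, 0, seed)]  # (-inlinks, insertion tiebreak, url)
    while frontier:
        _, _, url = heapq.heappop(frontier)
        if url in seen:
            continue           # stale entry with an outdated priority
        seen.add(url)
        order.append(url)
        for out in links.get(url, []):
            inlinks[out] = inlinks.get(out, 0) + 1
            if out not in seen:
                counter += 1
                heapq.heappush(frontier, (-inlinks[out], counter, out))
    return order

print(crawl("a.com"))
```

Rather than deleting stale heap entries when a page's in-link count changes, this just pushes a fresh entry and skips already-visited URLs on pop, which is the usual lazy-deletion idiom with `heapq`.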