I'm being hit very hard by Google's freshbot at the moment, and it's going deep too. Given what is currently going on with the little guys, I had to check and double-check that the IPs were 64.* (they are).
Its behaviour, in terms of how hard it hits and how deep it crawls (it's going through the entire site), is more like the character of the old deepbot.
In fact, it's identical to deepbot's behaviour the last time it crawled this site, back in April.
I'm interested in hearing from others who are seeing the same.
That way deepbot and freshbot can just be left running and there's no "google-dance" required - the PR iterations can be done on separate machines using deepbot data, then the whole lot is drawn in by freshbot alongside its normal rounds.
That way, minty fresh and a more regular cycle. Freshbot also knows better than deepbot which pages have actually changed.
A merging of the algorithms if you like.
Three Billion web pages.
I know, it's quite amazing but they do manage it!
Having over 200,000 (or whatever it is) PC's in a distributed network does help of course...
I wasn't trying to take anything away from the achievement google have made, but what I perhaps should have said is ".... it would not require much in the way of additional resources to do this...."
Well, for my site (over 100 pages), only 2 pages get crawled every few days... Googlebot has never crawled more than 7 pages a day, and I don't see it that often. What counts towards whether Googlebot "likes" a site or not?
I noticed crawler10 in my logs today crawling pages that I know are not in the index (the pages were created yesterday) and therefore have no settled PR.
The IP address was 64******* which, as has been pointed out, is supposedly the freshbot.
To me this means that either deepbot is out and about masquerading as freshbot, or that deepbot and freshbot are now one and the same.
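For anyone wanting to run the same check on their own logs, here is a minimal sketch of the IP-prefix heuristic being discussed. The sample log lines, the exact 64.68.* prefix, and the label strings are illustrative assumptions, not a definitive way to identify Google's crawlers:

```python
import re
from collections import Counter

# Hypothetical sample lines in Apache "combined" log format. The 64.68.*
# prefix reflects the range posters in this thread attribute to freshbot.
LOG_LINES = [
    '64.68.82.12 - - [12/Jun/2003:10:01:44 +0000] "GET /new-page.html HTTP/1.0" 200 5120 "-" "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"',
    '216.239.46.5 - - [12/Jun/2003:10:02:01 +0000] "GET /index.html HTTP/1.0" 200 8210 "-" "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"',
    '184.108.40.206 - - [12/Jun/2003:10:03:17 +0000] "GET /index.html HTTP/1.0" 200 8210 "-" "Mozilla/4.0"',
]

# Loose parse of the combined log format: client IP, request, user-agent.
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" \d+ \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def classify(line):
    """Label a log hit using the IP-prefix heuristic from the thread."""
    m = LOG_RE.match(line)
    if not m:
        return None
    ip, agent = m.group("ip"), m.group("agent")
    if "Googlebot" not in agent:
        return "other"
    if ip.startswith("64.68."):
        return "freshbot-range (64.68.*)"
    return "googlebot-other-range"

print(Counter(classify(line) for line in LOG_LINES))
```

Counting hits per label over a full day's log would show whether the heavy, deep crawl really is coming from the 64.68.* range.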
If freshbot is supposed to be taking over for deepbot, then it is utterly lame. I had been assuming that Google was crawling less because it realized it didn't have the resources to do what it was trying to do. I'd rather see freshbot disappear and deepbot actually do the job right than have freshbot mucking everything up.
At this point, freshbot has shown no ability to do what deepbot was able to do circa December/January.
I would agree. I put up a link to a fictitious page in order to see if FB would grab it. It did, which would indicate that this FB crawl from 64.68* is actually the deep crawl.
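The bait-page test described above can be automated: link to a page that exists nowhere else, then scan the access log for any fetch of it. This is a minimal sketch; the bait path and the combined-log layout it relies on are assumptions for illustration:

```python
# Path of a page linked from only one place on the site, so any fetch of
# it proves the crawler followed that link (hypothetical example path).
BAIT_PATH = "/never-linked-test-page.html"

def bait_was_crawled(log_lines, bait_path=BAIT_PATH):
    """Return the user-agents that requested the bait URL."""
    hits = []
    for line in log_lines:
        if f'"GET {bait_path} ' in line:
            # Crude but adequate for the combined format: the user-agent
            # is the last quoted field on the line.
            agent = line.rsplit('"', 2)[1]
            hits.append(agent)
    return hits

# Usage: feed it the day's log and see which bots took the bait.
sample = ('64.68.82.40 - - [13/Jun/2003:09:00:00 +0000] '
          '"GET /never-linked-test-page.html HTTP/1.0" 200 312 "-" '
          '"Googlebot/2.1 (+http://www.googlebot.com/bot.html)"')
print(bait_was_crawled([sample]))
```

If the list comes back non-empty with a Googlebot agent from the 64.68.* range, that bot is following newly discovered links, i.e. behaving like a deep crawl rather than a pure refresh pass.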
I don't see what this indicates...freshbot has always been about finding new links and new pages. Please elaborate.