They have their own database, which they build by crawling the web; they say they start with Zeal as a root database, and from what I've seen they crawl DMOZ heavily as well.
Bottom line: since corporations can get thousands of cheap listings in the Zeal db as part of a 'package', any search there is bound to be grossly biased in their favor.
Relevance, freshness, and bandwidth are all seriously lacking there. To top it off, they return results very slowly.
Perhaps someday it'll get better. For example, LookSmart recently purchased Grub.org, a distributed crawler, so perhaps (if they get enough volunteers) they will improve their relevance, freshness, and overall quality.
Though I won't be holding my breath...
Since then there have been some developments I'm sure everyone is aware of, but with regard to WiseNut I think the most important has been LookSmart's buyout of Grub in January.
I think WiseNut's crawling was essentially put on hold until the newly purchased Grub began crawling.
As Jrobbio points out, it is LookSmart's intention to use the Grub results in WiseNut, and it may be a little while until WiseNut is as relevant as we would like.
I am not convinced LookSmart is going to use WiseNut in any commercial form either, other than, I think, to backfill LookSmart and help with relevancy issues in the LookSmart search results delivered to its partners.
Might be some major developments on that front this week.
Until they manage to get better than five-month-old results, they're going to continue to be a joke.
Now that LookSmart owns the Grub crawler and, according to Grub's website ([grub.org]), WiseNut's db will be created from Grub's "real-time" crawl, WN's out-of-date index issues should quickly become a thing of the past.
I hope so, but considering that many, many grub crawlers have ignored robots.txt for so long, they are going to need to change the User-agent string in order to get to the whole web...
90% of all grubs I see do not fetch robots.txt - even if it's got a one-second expiry on it. I'm not sure if this is a grub flaw, or if there are a lot of grub user-agent spoofers out there.
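For reference, a compliant crawler is supposed to fetch and consult robots.txt before requesting anything else from a host. A minimal sketch of that check, using Python's standard urllib.robotparser (the grub-client user-agent group and the paths here are illustrative, not taken from any real site's robots.txt):

```python
from urllib import robotparser

# Hypothetical robots.txt that blocks grub-client from /private/
ROBOTS_TXT = """\
User-agent: grub-client
Disallow: /private/

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def may_fetch(agent, url):
    # A polite crawler runs this check before every request
    return rp.can_fetch(agent, url)

print(may_fetch("grub-client", "http://example.com/private/page.html"))  # False
print(may_fetch("grub-client", "http://example.com/index.html"))         # True
```

In a real crawler you would load the file with rp.set_url(...) and rp.read() and re-fetch it when it expires; a client that skips this step entirely, as many grub instances seem to, is crawling pages the site owner has explicitly disallowed.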
Search engine crawlers need to properly identify themselves these days, and I wonder how they're going to get past that problem.
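One common way site owners separate a real crawler from a spoofed User-agent string is a reverse-DNS check: resolve the requesting IP to a hostname, confirm the hostname belongs to the engine's domain, then forward-resolve that hostname and confirm it maps back to the same IP. A sketch (the trusted domain suffixes are illustrative; each engine would have to publish its own, and a volunteer-run distributed crawler like Grub couldn't pass this kind of check at all):

```python
import socket

# Hypothetical trusted suffixes -- real engines publish their own
TRUSTED_SUFFIXES = (".googlebot.com", ".looksmart.com")

def hostname_is_trusted(hostname, suffixes=TRUSTED_SUFFIXES):
    # Normalize a trailing dot and case before the suffix check
    return hostname.rstrip(".").lower().endswith(suffixes)

def verify_crawler_ip(ip, suffixes=TRUSTED_SUFFIXES):
    """Reverse lookup, domain check, then forward confirmation."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not hostname_is_trusted(hostname, suffixes):
        return False
    try:
        # The name must resolve back to the IP that made the request
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
```

The forward-confirmation step matters: anyone can point reverse DNS for their own IP at a name like fake.googlebot.com, but they can't make the engine's DNS resolve that name back to their IP.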