EditorialGuy - 7:19 pm on Jun 28, 2013 (gmt 0)
Maybe the Web has just become too big for spidered search engines in their current form. Even if the Web weren't a "cesspool" (Eric Schmidt's term) flooded by spam and autogenerated garbage, it would be a lot bigger than it was when Google got started 15 years ago.
Let's say I search on "red widgets." Am I looking for editorial information, a manufacturer's specifications and promo material, or a dealer? Even if Google knows my intent (e.g., editorial information), how can it possibly serve the best result on the first page of its search results? Google can try to keep me happy through personalization ("This guy clicked Wikipedia for his last 10 informational searches, so let's give him a Wikipedia result in the no. 1 spot"), but beyond that, "best" is hard for an algorithm--or a human librarian, for that matter--to determine with any degree of objectivity.
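That click-history personalization rule is simple enough to sketch as a toy re-ranker. To be clear, this is just an illustration of the idea, not Google's actual algorithm; the function names, window size, and threshold are all invented:

```python
from collections import Counter

def personalize(results, click_history, window=10, threshold=0.8):
    """Toy re-ranker: if one domain dominates the user's last
    `window` informational clicks, promote its results to the top.
    Window and threshold values are arbitrary, for illustration only."""
    recent = click_history[-window:]
    if not recent:
        return results
    domain, count = Counter(recent).most_common(1)[0]
    if count / len(recent) < threshold:
        return results  # no strong preference detected; leave ranking alone
    boosted = [r for r in results if r == domain]
    rest = [r for r in results if r != domain]
    return boosted + rest

# The "clicked Wikipedia 10 times in a row" scenario from above:
history = ["wikipedia.org"] * 10
serp = ["dealer.example", "manufacturer.example", "wikipedia.org"]
print(personalize(serp, history))  # wikipedia.org moves to the no. 1 spot
```

Of course, this only dodges the real problem: it predicts what the user tends to click, not which result is objectively "best" -- which is exactly the point.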