
Alternative Search Engines Forum

Founder of Wikipedia plans search engine to rival Google
Amazon.com is linked with project ...
JackR

Msg#: 3198055 posted 10:50 am on Dec 23, 2006 (gmt 0)

The Times, December 23, 2006

Founder of Wikipedia plans search engine to rival Google
James Doran, Tampa, Florida

-Amazon.com is linked with project
-Launch scheduled for early next year

Jimmy Wales, the founder of Wikipedia, the online encyclopaedia, is set to launch an internet search engine with amazon.com that he hopes will become a rival to Google and Yahoo!

..."Essentially, if you consider one of the basic tasks of a search engine, it is to make a decision: 'this page is good, this page sucks'," Mr Wales said. "Computers are notoriously bad at making such judgments, so algorithmic search has to go about it in a roundabout way.

"But we have a really great method for doing that ourselves," he added. "We just look at the page. It usually only takes a second to figure out if the page is good, so the key here is building a community of trust that can do that."

...Catching up with Google, Yahoo!, Microsoft's MSN or even smaller operators such as Ask.com will be a difficult challenge, Mr Wales conceded.

[business.timesonline.co.uk...]

[edited by: tedster at 12:08 pm (utc) on Dec. 23, 2006]
[edit reason] fair use of copyrighted material [/edit]

 

Crush

Msg#: 3198055 posted 8:06 pm on Dec 27, 2006 (gmt 0)

Wikisari would be the crappiest name. Needs something with 2 oo's like the other 2 main engines ;)

jtara

Msg#: 3198055 posted 9:45 pm on Dec 27, 2006 (gmt 0)

WikiaSearch will search among external links from Wikipedia. The bulk of the profits from WikiaSearch will be donated to Wikipedia.

Oops! I meant WIKISearch.

WIKISearch = new project, unrelated to this one, which will search among sites that are linked out from Wikipedia.

WIKIASearch = old project, dead.

Unnamed, as of yet = the project we are discussing.

It's all very confusing, to say the least. There are already pundits claiming that they are now stuck with the name WikiaSearch, because it is already so out there in the press.

Most insightful comment on their discussion board: "It needs a name."
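
For what it's worth, "search among sites that are linked out from Wikipedia" implies building a seed list of Wikipedia's external links. Below is a minimal sketch of one way to gather such a seed list from the public MediaWiki API; the article titles are hypothetical and this only illustrates the idea, it isn't anything the project has published.

```python
# Minimal sketch: collect external links cited on Wikipedia articles as a
# crawl seed list. Illustration only; not the project's actual code.
import requests

API = "https://en.wikipedia.org/w/api.php"

def external_links(title):
    """Return the external URLs cited on one Wikipedia article."""
    params = {
        "action": "query",
        "prop": "extlinks",
        "titles": title,
        "ellimit": "max",
        "format": "json",
        "formatversion": "2",
    }
    data = requests.get(API, params=params, timeout=10).json()
    urls = []
    for page in data.get("query", {}).get("pages", []):
        for link in page.get("extlinks", []):
            urls.append(link["url"])
    return urls

if __name__ == "__main__":
    # Hypothetical seed topics; a real crawl would walk far more articles
    # and follow the API's continuation parameters to get every link.
    seeds = set()
    for topic in ["Digital camera", "Search engine"]:
        seeds.update(external_links(topic))
    print(len(seeds), "candidate seed URLs")
```

A real crawl would also deduplicate by host and respect robots.txt, but the core idea - restricting the index to pages Wikipedia editors already found worth citing - is that simple.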

sonny

Msg#: 3198055 posted 4:30 am on Dec 28, 2006 (gmt 0)

wikisearch.com is owned by a non-wikied company in London. Wonder if they have been approached by the "Master Wiki" about purchasing it for a substantial sum of wiki cash.
Think I'll get my wiki on the phone and call them, ahem, on a non-wiki related matter of course.

Edwin

Msg#: 3198055 posted 12:31 pm on Dec 28, 2006 (gmt 0)

Here's something to think about (and I build content sites too, so I realise that my own foot is one of the ones that I might potentially be lining up to shoot with this)... how often in 2006 (and soon to be 2007) does a significant new resource come along about any given niche topic that is really head and shoulders above the existing corpus of material on that topic?

For example, if we're talking about widgets, and there are 50,000 pages that relate directly to widgets, how often will a new site/page be created that is REALLY hand-on-heart, all-biases-aside genuinely better and more useful than say the top 100 biggest/most comprehensive/most popular/most in-depth of the 50,000 pages that have come before it?

Seems to me that there needs to be a way to establish a baseline "list of useful sites/pages" about any given niche topic, and then a separate and complementary process to elevate the relatively rare new-but-worthy site/page to be included in that list.

After all, as was already brought up in this thread, the search engines only show the first 1,000 results for a query in practice, regardless of how many results exist - and of those 1,000 results only the first few pages' worth are of any real consequence unless we want to start delving into the fractions of 1% of overall traffic that hit the deeper pages within those 1,000 results.

Here's the billion dollar question: Does a search engine USER get a better experience from seeing 78,000,000 results for a given query, or 100 (or 200 or 500 or whatever) hand-picked, relevant results, with zero spam or off-topic material?

Take a really deep breath, and step away from the webmaster aspects for a minute and say to yourself, which of these experiences would leave an average user happier, more informed, more likely to use that particular search engine again?

Does seeing "78,000,000 results" say to the average user "we're incredibly thorough, and you won't miss ANY information on this topic, no matter how insignificant" or does it say "good luck finding the needle of information you really need in this massive haystack of information we can offer you"?

One way to improve the perceived search experience could be to use existing search popularity data to help focus the manual fine-tuning effort, and gradually move down the long tail as the "big" subjects get taken care of.

First, you fine-tune the results for the top 10,000 searches (and synonyms thereof), then the next 100,000 and so on down the line. The smaller the number of searches, the less important it is to fine-tune them initially since the vast majority of people won't ever see them.

Even when people hit on "untuned" results - which would happen relatively often at first - they would also hit on tuned results even more often... so the overall impression of this hypothetical new search player might still be better than that formed from existing competitors which rely primarily (or solely) on smart algorithms.
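
Edwin's tiering idea is easy to picture in code. Here's a toy sketch, assuming you already have a query log with per-query counts (the queries, counts and tier sizes below are made up for illustration):

```python
# Toy sketch of popularity-driven curation tiers; all numbers are invented.
from collections import Counter

query_counts = Counter({
    "digital cameras": 120_000,
    "widgets": 45_000,
    "blue widget repair": 900,
    "left-handed widget spanner": 40,
    # ... plus the rest of the long tail
})

def curation_tiers(counts, tier_sizes=(10_000, 100_000)):
    """Split queries into tiers by popularity; tier 0 gets hand-tuned first."""
    ranked = [q for q, _ in counts.most_common()]
    tiers, start = [], 0
    for size in tier_sizes:
        tiers.append(ranked[start:start + size])
        start += size
    tiers.append(ranked[start:])  # the untouched long tail, tuned last (or never)
    return tiers

tiers = curation_tiers(query_counts)
total = sum(query_counts.values())
covered = sum(query_counts[q] for q in tiers[0])
print(f"Tier 0 covers {covered / total:.0%} of all searches")
```

Because query popularity follows a long tail, the first tier alone tends to cover a large share of total search volume, which is exactly why fine-tuning it first pays off.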

Even if you lined up every "webmaster" in the world, plus everyone who's ever started a blog (and kept it going long enough to "care" that it got indexed by the search engines), or posted a list of URLs on a social bookmarking site, that's just a fraction of the total online population. Consumers of information outnumber regular producers of information by (probably - I don't have the exact stats) 50:1 and they outnumber commercial producers of information (those who depend on the web for income) by perhaps 200:1 or more.

So if you can come up with a way to satisfy those average users, you could theoretically be onto a winner even if you end up stepping on the toes of 95% of all webmasters out there (by snubbing their sites) in doing so.

[edited by: Edwin at 12:35 pm (utc) on Dec. 28, 2006]

gibbergibber

Msg#: 3198055 posted 6:06 pm on Dec 29, 2006 (gmt 0)

-- A search on "digital cameras" claims 84,000,000 results. But try to look at more than 1,000 of those results. So how many pages do you actually need/use in a SE? --

You're making the mistake of thinking that because 1,000 pages out of 84 million are relevant to YOUR needs, those same 1,000 pages would be relevant to ANYONE's needs.

Yes, maybe for any one individual they only need 1000 results, but they're not going to be the same 1000 results for every individual.

In fact with a term as vague as "digital cameras" there's going to be such a wide range of answers required (all those different models, shops, repair information, spare parts, technology info etc etc) that perhaps you DO need most of those 84 million pages in order to answer all the possible lines of enquiry that might be related to such a search.

old_expat

Msg#: 3198055 posted 3:24 am on Dec 30, 2006 (gmt 0)

A search on "digital cameras" claims 84,000,000 results. But try to look at more than 1,000 of those results. So how many pages do you actually need/use in a SE?

You're making the mistake of thinking that because 1000 pages out of 84 million are relevant to YOUR needs, that those same 1000 pages would be relevant to ANYONE's needs.


I didn't sign MY name in any "who is this block of results for?"

Yes, maybe for any one individual they only need 1000 results, but they're not going to be the same 1000 results for every individual.

Relatively speaking, the same 1,000 results will be available to whoever makes that same search .. unless you know a way of getting different results for the same search term .. other than waiting for the next dance ..:)

But I believe you missed my point entirely .. or actually a previous poster's point .. about how many pages actually make up the core of the web.

Reno

Msg#: 3198055 posted 3:37 am on Dec 30, 2006 (gmt 0)

Founder of Wikipedia plans search engine to rival Google

The fact that at the end of 2006 we are discussing the idea of a human powered search service as anything other than a 1995 era fantasy just goes to show how deeply the World-Wide-Webmaster community is craving some sort of real competition for the thousand pound gorilla named Google.

My prediction: This ain't it.

Happy New Year ;-]

....................................

percentages

Msg#: 3198055 posted 8:42 am on Jan 1, 2007 (gmt 0)

>Probable search reality: Google currently uses several THOUSAND people to editorialize on the current search index. e.g. there are massive amounts of hand-checked pages.

Would that not be simply pointless?

It might be true today......but, would it not be pointless?

How do you index something that can grow so much faster than you?

By now Google must have concluded that it can't win the game based upon speed, and is looking for intelligence. It will fall short there, as it is still too young!

timster

Msg#: 3198055 posted 3:53 pm on Jan 4, 2007 (gmt 0)

Maybe I'm reading something into the original announcement, but I didn't see this project as a plan to create a new web directory like DMOZ. Wikipedia already is the new DMOZ.

It seems they're looking to create a new thing where humans help shape the SERPs. Ever done a search and wanted a "This is junk" link next to certain results? (Maybe on all the sites above your own, but then, maybe not.)

The big challenge will be making sure the system doesn't get gamed, and certainly they're already planning for that, namely the "community of trust" they mention.
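
If you want to picture how a "This is junk" link plus a community of trust could feed back into rankings, here's a rough sketch; the blending weight, trust scores and field names are assumptions made up for illustration, not anything Wales or Wikia has described.

```python
# Rough sketch: blend an algorithmic score with trust-weighted community votes.
# All weights and scores here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    algo_score: float   # 0..1 score from the normal ranking algorithm
    votes: list         # (voter_trust, +1 good / -1 junk) pairs

def rerank(results, feedback_weight=0.3):
    """Demote pages the trusted community flags as junk, promote the good ones."""
    def blended(r):
        signal = 0.0
        if r.votes:
            # Trust-weighted average vote in [-1, 1]; capping each voter's trust
            # at 1.0 limits how much any single account can swing a result.
            total_trust = sum(min(t, 1.0) for t, _ in r.votes)
            signal = sum(min(t, 1.0) * v for t, v in r.votes) / max(total_trust, 1e-9)
        return (1 - feedback_weight) * r.algo_score + feedback_weight * signal
    return sorted(results, key=blended, reverse=True)

results = [
    Result("http://example.com/spammy-page", 0.9, [(0.8, -1), (0.9, -1)]),
    Result("http://example.com/useful-page", 0.7, [(0.6, +1)]),
]
for r in rerank(results):
    print(r.url)
```

Capping each voter's influence is one crude anti-gaming measure; it keeps a single trusted account from burying a competitor on its own, though it does nothing about coordinated vote rings.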

On a side note...
Does seeing "78,000,000 results" say to the average user "we're incredibly thorough..." or does it say "good luck finding the needle of information..."?

Give the average user a little more credit than that. I think they know it means, "There's tons of info on that. If you don't see what you need, be more specific."

SEOold

Msg#: 3198055 posted 10:23 pm on Jan 4, 2007 (gmt 0)

On almost every search you run on Google there is a Wikipedia result. A smart move, I would say, but let's see how Google responds to this.

piplio

Msg#: 3198055 posted 6:52 am on Jan 7, 2007 (gmt 0)

Good move.

Unleash the power of collaboration.

Then, please open source the search engine. :)

Wikimedia is great!

RichTC

Msg#: 3198055 posted 2:35 am on Jan 8, 2007 (gmt 0)

It will be interesting to see how this one works out.

Another player in this space has to be a good thing if only to keep google on its toes.

It will however take a long time for it to get the sort of search data and history data that Google holds, so I can't see it making much impact in the early days - you only need to see how bad MSN is to realize it takes a long time to establish a rival search engine.

Also, I don't think human input is the answer. It would be a short matter of time before it's gamed by groups of individuals being paid to vote for certain sites or bash competing sites - but if they can work out a way to stop that then maybe it's got potential?

Time will tell, but I'm certainly in favour of a fresh player entering the market - it's certainly big enough for one more!

hybrid6studios

Msg#: 3198055 posted 5:28 pm on Jan 11, 2007 (gmt 0)

I think it's an interesting idea... I think it *COULD* work... if it's done right. I was skeptical about Wikipedia, but it's pretty impressive. I haven't been able to find half of what's in there in a regular encyclopedia. It stays pretty up to date, and it's pretty accurate. I'm not gonna rule this thing out yet.
