Need in-depth research information about a product? Google won't provide it so searchers have to go elsewhere.
There's a lot of talk in this thread about how things should be done after the search engine is built, but very little discussion of how to create the index or build the search engine in the first place.
Finding pages to index (or to consider for indexing) can be done in a variety of ways, including crawling, webmaster submissions, scraping your way through various SE query results, etc. It is a lot easier than that. However, the problem is sorting out the gold from the dross.
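Of the discovery methods listed above, crawling is the one that can be sketched in a few lines. Here is a minimal breadth-first crawl using only the Python standard library; the politeness delay, page limit, and same-host restriction are my own illustrative choices, not anything prescribed in the thread (a real crawler would also honour robots.txt):

```python
import time
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags as the page is parsed."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10, delay=1.0):
    """Breadth-first crawl from seed, returning {url: html}."""
    seen, pages = {seed}, {}
    queue = deque([seed])
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to fetch
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Stay on the seed's host so the crawl doesn't wander the web.
            if urlparse(absolute).netloc == urlparse(seed).netloc and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
        time.sleep(delay)  # be polite to the server
    return pages
```

The crawler only finds pages; as the post says, the hard part — sorting the gold from the dross — happens after this step.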
Do I have to be fair? No. It's my index. If I want to let people know about your content, lucky you. Enjoy the traffic.
I just built an inverted index for the King James Bible in less than 30 seconds. Translate this to a couple hundred thousand rows of page data (which would make a pretty decent library of information on a given subject) and we've got the foundation of a fairly manageable index.
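An inverted index of the kind described is small enough to sketch in full: map each token to the set of document (or verse) identifiers containing it, then answer a query by intersecting those sets. The function names and the two sample verses keyed by hypothetical IDs are mine, not from the post:

```python
from collections import defaultdict

def build_inverted_index(documents):
    """documents: dict of doc_id -> text. Returns token -> set of doc_ids."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token.strip('.,;:!?"')].add(doc_id)
    return index

def search(index, query):
    """Return the doc_ids containing every query token (AND semantics)."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results

verses = {
    "gen-1-1": "In the beginning God created the heaven and the earth.",
    "gen-1-3": "And God said, Let there be light: and there was light.",
}
idx = build_inverted_index(verses)
print(search(idx, "light"))        # {'gen-1-3'}
print(search(idx, "god created"))  # {'gen-1-1'}
```

Scaling this to a couple hundred thousand rows mostly means swapping the in-memory dict for an on-disk store; the data structure itself is unchanged.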
If a spammer makes a viagra index called "King James Bible", you are informed, you review the index, and you mark it as spam. Spam from compromised websites would probably be more common in this scenario than genuine spam websites; over the last few years, the genuine spam websites have tended to get their links from compromised websites.
I could be wrong, but in 10 to 15 years we will look back on Google the way we now look back on Yahoo, and I believe it will be just about as relevant.
There's one critical difference between Google and Yahoo: Google continues to invest heavily in both its core product (search) and new products, while Yahoo has coasted along on the value of its 1990s brand name.

Nah. Google buys a lot of its products because it hasn't the people or the ideas to develop them. Almost everything it has done for the last few years has been poorly executed and, in many cases, me-too derivative stuff (Google Plus, Orkut, Buzz, etc.). Google is in trouble with search, and it has become an advertising company where search is merely an outlet for its advertising service. Developing a product or service takes innovators and entrepreneurs rather than the joiners who just work for companies. Without that spark of innovation, that original idea, all you get is the me-too dross. It works for a while, because many "technology" journalists haven't a clue about technology or the business of technology, so they rely on press releases for their "knowledge". But sooner rather than later, those me-too businesses crash and burn. Then they are quietly shuttered, and their employees are either fired or shuffled elsewhere in the corporation.
An alternative to Google's SERPs has to be better than Google, more precisely targeted than Google and give the people what they want.
It also has to be beneficial for webmasters in that they get traffic for their content and are not ripped off by having their sites massacred by the algorithmic brainfart of some individuals trying to repair the damage created by other individuals.
I'm not saying the beginning of that is happening here and now in this thread, but I do believe it will happen.
The irony would be delicious. :)
It would be hilarious if there were an online open source search engine revolution that decimated Google in 10-15 years, and it could be traced back to its start here, in this forum, in this thread...
The thread says "started by Editorial Guy". Always the one you least expect lol ;)
What do you think is the best way forward, another global player or lots of niche/location/language specific ones?

Universal search died in 2004, Brotherhood of LAN. The reason is that Wikipedia and the rise of social media killed it. For the last ten years or so, school kids and students have been using Wikipedia rather than Google. Even Google's Scraper Graph is a grudging acknowledgement of this fact. The future, I think, is a spectrum of search: a lot of niche search engines that may be accessed via a common interface or via their own interfaces. The main issue for this renaissance of search would be that each component search engine would need a high quality index.
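The "common interface over niche engines" idea amounts to a federated front-end: a dispatcher that routes a query to one registered niche engine, or fans it out to all of them and merges the results. This is a minimal sketch of that architecture; the class name, the topic labels, and the dummy lambda engines are illustrative assumptions, not anything specified in the thread:

```python
class FederatedSearch:
    """A common front-end that dispatches queries to niche engines."""

    def __init__(self):
        self.engines = {}  # topic -> callable(query) -> list of results

    def register(self, topic, engine):
        self.engines[topic] = engine

    def search(self, query, topic=None):
        """Query one niche engine if topic is given, else fan out to all."""
        if topic is not None:
            return self.engines[topic](query)
        merged = []
        for name, engine in self.engines.items():
            merged.extend((name, hit) for hit in engine(query))
        return merged

frontend = FederatedSearch()
# Stand-ins for real niche engines, each with its own high quality index.
frontend.register("recipes", lambda q: [f"recipe result for {q!r}"])
frontend.register("law", lambda q: [f"law result for {q!r}"])

print(frontend.search("pancakes", topic="recipes"))  # one niche engine
print(frontend.search("pancakes"))                   # all engines, tagged by source
```

Each niche engine stays independently owned and indexed; the front-end only does routing and merging, which is what lets the "spectrum" share a single entry point.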
The first post has EG's moniker on it and that's what people searching for the history of Google's downfall and the rise of webmaster-driven, open source search will see 15 years from now when they find this thread.
"1.17 billion Google searchers can't be right."
jmc said: The future, I think, is a spectrum of search with a lot of niche search engines
I'll even contribute a slogan for you to use in marketing a "webmaster-driven, open source search" engine:
"1.17 billion Google searchers can't be right."
Technology firms are always being replaced by newer, more innovative firms; it's just the way tech works.
The "fancy" part would be to classify the words in a theme taxonomy in order for niche providers to take a subset of the index in order to rank and serve results
And of course it brings me back to where I think Google could and should separate commerce from information generally. That IMHO, would clean up Google dramatically.
This also makes me think this is how Google could bust the whole thing, e.g. by taking what it already has, breaking it apart into niche engines, and serving up a front-end, a developer API, and other components.

It already has form on this. When Wikia Search announced it was going to have a social media element to its search engine (along with voting on results), Google tried the same thing. It also tried to copy Wikipedia with its "Knol" service, where editors would be paid to edit. Naturally, the people at Google completely missed the reason that people create and edit Wikipedia pages: it's for the sheer joy of creating something of worth to others. The "Knol" service was quietly closed, and people continued to use Wikipedia, oblivious to the short existence of Google's poor attempt at a clone. Perhaps it was some kind of cultural clash between the non-creative, money-driven people at Google, who apparently thought they could buy Wikipedia's editors, and the web's creative and more altruistic people who often edit Wikipedia.
And of course it brings me back to where I think Google could and should separate commerce from information generally. That, IMHO, would clean up Google dramatically.

It might. But it faces the prospect of the commercial side becoming Pay For Inclusion, with the non-commercial side being plastered with adverts and the Scraper Graph. Google may have a problem splitting the two from a commercial point of view.