jmccormac - 2:17 am on Jun 20, 2013 (gmt 0)
The other major problem for Google is that Search fragmented a few years ago into Generic, Specific and Local. Generic is where the breadth of Google's index wins out. Specific used to be dominated by Google before the Animal Farm updates, but Wikipedia has taken a major part of that market away. Schoolkids doing their homework and assignments now check Wikipedia rather than Google. That's a massive loss for Google, and the damage has yet to fully play out, since there is a generation coming up that doesn't consider Google to be the all-powerful search engine. Local is also a problem for Google. While it buys some credibility with Google Maps and various alliances and purchases, there's a critical element in Local Search - it requires local knowledge.
At the time Facebook floated on the stock markets, Google was engaging in search engine development by press release. Perhaps some PR flunkies had come up with the Google 'Knowledge Graph' in a bid to attack Facebook's far more famous Social Graph. The press release was recycled by clueless "technology" journalists (who wouldn't know a search index from a hole in the ground), along with what was apparently Amit Singhal's ambition to make Google capable of answering questions like the ship's computer in Star Trek (he had apparently watched the show growing up). There was just one problem with this story - the ship's computer wasn't really capable of answering questions and wasn't really used for such things. Spock (Star Trek: The Original Series) and later Data (Star Trek: The Next Generation) provided the answers. Perhaps the PR flunkies in Google didn't really watch either series. When a search engine starts engaging in development by press release, it should be like a blood trail to a shark where independent search engine developers are concerned.
There's a lot of opportunity for independent search engine developers, but most people who think they are capable of building a search engine (virtually every webmaster thinks they can do it) are not capable of doing so. There is a lot more to it than relying on blind crawling (where search engines detect new sites by following links from pages they have already indexed). This thread from 2005 actually details some of the issues, especially when it comes to country-level Search. ( [webmasterworld.com...] )
In some respects, relying on a blind crawling model is less effective now. It is also far more dangerous, because hacked sites can carry links to very dodgy sites, and crawling those could create a toxic index (legally and technically). Due to a chronic overreliance on Google, many sites don't have a lot of outbound links to other in-context sites. A survey of Irish websites that I run every month does a zero-depth count (index page only) of outbound links from each site, and the most commonly linked sites are Facebook and Twitter. The link graph between Irish sites is quite sparse. If this pattern is playing out on a global scale, it would explain why Google and other SEs have developed problems detecting new websites that don't have Google AdSense or Google Analytics.

Google Plus, in the Irish dataset, is being roundly beaten by Facebook, Twitter, YouTube (which might be helpful for Google) and LinkedIn. But these are outbound links rather than inbound links to individual sites. And without these small sites linking to each other, the blind crawling model currently used by the large SEs is dying. SEOs can help, but most of the websites are brochureware sites and the owners are probably not even interested in paying for SEO services. Country-level and Local Search are two areas where people can actually compete with Google and win.
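For anyone curious what a zero-depth count looks like in practice, here is a rough sketch in Python - an illustration of the idea, not the production code the survey actually uses. It fetches only the index page of each site in a list, pulls out the hrefs, and counts which external hosts are linked most often. The sample site list, the regex link extraction and the host normalisation are all simplified assumptions; a real survey has to deal with redirects, dead sites, crawl politeness and a lot of messy HTML.

# Rough sketch of a zero-depth (index page only) outbound link count.
# Simplified on purpose: the site list is a placeholder, links are pulled
# out with a regex rather than a proper HTML parser, and there is no
# handling of redirects, robots.txt or crawl politeness.
import re
import urllib.request
from urllib.parse import urljoin, urlparse
from collections import Counter

HREF_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)

def norm_host(netloc):
    """Lower-case a hostname and drop a leading www. so variants count together."""
    host = netloc.lower()
    return host[4:] if host.startswith("www.") else host

def outbound_hosts(site_url):
    """Fetch only the index page of a site and return the external hosts it links to."""
    try:
        with urllib.request.urlopen(site_url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
    except Exception:
        return set()  # dead or unreachable site contributes nothing
    own_host = norm_host(urlparse(site_url).netloc)
    hosts = set()
    for href in HREF_RE.findall(html):
        link = urljoin(site_url, href)  # resolve relative links against the index page
        if not link.startswith("http"):
            continue
        host = norm_host(urlparse(link).netloc)
        if host and host != own_host:
            hosts.add(host)
    return hosts

def survey(sites):
    """Count how many sites in the list link out to each external host."""
    counts = Counter()
    for site in sites:
        counts.update(outbound_hosts(site))
    return counts

if __name__ == "__main__":
    # Placeholder list; the real thing runs over a full country-level set of sites.
    sample = ["https://example.com/", "https://example.net/"]
    for host, n in survey(sample).most_common(20):
        print(n, host)

If the top of that list is dominated by facebook.com and twitter.com rather than by other sites in the same country, the link graph is exactly as sparse as described above, and a blind crawler seeded inside that country has very little to follow.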