inbound - 8:13 pm on Oct 4, 2011 (gmt 0)
It's almost every day that I'm reminded how arrogant Google has become. The latest annoyance prompted me to think about the data-driven nature of Google, and how that approach is fundamentally flawed when dealing with subjective content. If subjective content (anything that is not entirely factual and is therefore open to debate) is a problem for Google, then Google has a major issue - given that subjectivity is what interests us.
Opinions are often more interesting, or revealing, than facts. Even when facts are known (such as the result of a sporting event) it's the related subjective things that are discussed (was that really a penalty? How did they play?). We are social animals and spending time expressing/hearing/reading/arguing opinions is much more interesting than dealing entirely in facts.
(It has to be said that there are times when facts are what people are looking for, and Google can be excellent for that.)
There are lots of indicators that opinions matter; so many that we probably don't consciously notice most of them. One example that's particularly relevant to the web is why people read more than one news article about the same story: the facts are the same, but the opinions differ. And this leads on to something much more subtle than I could hope to cover well. The ways in which you can write about any subject differ enormously - some talented writers can express the beauty of language fully, and make you feel part of the matter being discussed.
Subjectivity and Creativity do not fit into a factual framework.
Google seem to think that personalisation is the way in which they will deal with this inconvenience. Much has been written about the problem of "filtering" search results to match the preferences of users - along the lines that always showing one side of a story is not a good thing. I agree that people should see conflicting views and make up their own minds, but the problem introduced by filtering (and the shortcomings of the methods used) goes much further than that.
The state of the art is such that filtering is quite a good term to use for personalisation. Let's imagine an offline situation and apply "personalisation" to it:
Simon meets some friends and many different subjects are discussed. Some of the friends have differing views on those subjects (could be politics, or the football team they support), but the debate is enjoyable and everyone has the chance to put their opinion to the group. If Simon were using "personalisation", there would be points in the conversation when some members of the group would suddenly disappear as their views conflict with Simon's (it would be more nuanced than that, as a particular view would not lead to exclusion; more likely a broader stance such as which team a friend supports - so even when a sporting rival agrees with a point of view, they are never heard). But, much worse than a person just disappearing, Simon would have no memory (at that point in the conversation) that the disappeared friend ever existed - so the friend's differing opinions are never spoken, or even asked for, at a later date. Without even knowing it, the debate has become poorer (and views entrenched), because Simon now lives his life in a bubble which reflects his own views.
There are much wider societal issues that arise from the personalisation of information. The lack of opposing views can lead to some real problems and even, in extreme examples, hold back the advance of knowledge as a whole (not just for the individual - think of the flat earth vs round earth "debate" and how that would work today if conflicting views were never aired).
Is this how Google thinks the online world should work?
Personalisation effectively means that some unknown set of criteria is used to decide which results are promoted and which are demoted for a search. The problem with this approach is that, for it to "work" at all, there must be a way to categorise searches. The metrics used to categorise searches will never be able to mimic the intricacies of how humans decide what interactions to make in relation to a given issue. If a system has to make compromises, then there will be times (and currently that's very often) when sites/views are excluded because they match a broader (or different) metric than the one that really matters for the given search.
I would go as far as to say that the idea of personalisation, as it is currently being used, is flawed and can never achieve its goals. Personalisation can be used to great effect in limited scenarios: if there are several meanings of a word or phrase and a person is only interested in one (e.g. soccer vs gridiron - both are called "football"), it makes sense to deliver results for the preferred one.
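To make the mechanism concrete, here is a toy sketch of what preference-based re-ranking might look like. Everything here is hypothetical (the function, the topic tags, the scores, the boost factor) - real personalisation systems are vastly more complex, and Google does not publish theirs - but it illustrates both the legitimate use (disambiguating "football") and the flaw: a single inferred preference silently reshuffles results that relevance alone would have ranked differently.

```python
# Toy sketch of preference-based re-ranking. All names, tags, and
# scores are invented for illustration; this is not how any real
# search engine is known to work.

def personalise(results, preferred_topic, boost=2.0):
    """Re-rank results so that those matching the user's inferred
    topic preference come first."""
    def score(result):
        base = result["relevance"]
        # Results matching the preferred sense are boosted;
        # everything else keeps its plain relevance score.
        return base * boost if result["topic"] == preferred_topic else base
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "soccer-news.example",  "topic": "soccer",   "relevance": 0.8},
    {"url": "gridiron-scores.example", "topic": "gridiron", "relevance": 0.9},
]

# A user whose history suggests "soccer" now sees the soccer page
# first, even though the gridiron page scored higher on relevance.
ranked = personalise(results, preferred_topic="soccer")
print([r["url"] for r in ranked])
```

For the word-sense case this is harmless, even helpful. The author's objection is to applying the same machinery to opinions on one topic, where the "preferred" tag becomes a viewpoint and demotion becomes invisible exclusion.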
The real problem that I see is the use of personalisation to segment views that are related to the same topic.
Are Google really saying that people are not smart enough to skip a result?
Google (or any search engine that is the default for any individual) has so much say in what people see nowadays that censorship (even on the basis of personalisation) should be seen as a very bad thing. In fact I'd say that the academic past of the founders should make them acutely aware that helping someone challenge their own beliefs is a noble pursuit; something that cash seems to have trumped.
I'm not talking about "site quality" here; that's a different issue. I'm lamenting the lack of responsibility shown by those who should know better.
We should probably all mourn the passing of Google as a beacon of hope; long gone are the days when the primary objective of Google was intrinsically linked with all that is best in human endeavour.
Sadly, I believe that Google once wanted to change the world; now they want to own it.