|Why Google's Core Objective is Derailed by Subjectivity|
You simply cannot "organise" everything
It's almost every day that I'm reminded how arrogant Google has become. The latest annoyance prompted me to think about the data-driven nature of Google and how that approach is fundamentally flawed when dealing with subjective content. If subjective content (anything which is not entirely factual and is therefore open to debate) is a problem for Google, then Google has a major issue - given that subjectivity is what interests us.
Opinions are often more interesting, or revealing, than facts. Even when facts are known (such as the result of a sporting event) it's the related subjective things that are discussed (was that really a penalty? How did they play?). We are social animals and spending time expressing/hearing/reading/arguing opinions is much more interesting than dealing entirely in facts.
(It has to be said that there are times when facts are what people are looking for, and Google can be excellent for that)
There are lots of indicators that opinions matter; so many that we probably don't consciously notice most of them. One example that's particularly relevant to the web: people read more than one news article about the same story; the facts are the same but the opinions differ. And this leads on to something much more subtle than I could hope to cover well. The way in which you can write about any subject differs enormously - talented writers can use the full beauty of language to make you feel part of the matter being discussed.
Subjectivity and Creativity do not fit into a factual framework.
Google seem to think that personalisation is the way in which they will deal with this inconvenience. Much has been written about the problem of "filtering" search results to match the preferences of users - along the lines that always showing one side of a story is not a good thing. I agree that people should see conflicting views and make their own mind up, but the problem introduced by filtering (and the shortcomings of methods used) goes much further than that.
The state of the art is such that filtering is quite a good term to use for personalisation. Let's imagine an offline situation and apply "personalisation" to it:
Simon meets some friends and many different subjects are discussed. Some of the friends have differing views on the subjects (it could be politics or the football team they support), but the debate is enjoyable and everyone has the chance to put their opinion to the group. If Simon were using "personalisation" then there would be points in the conversation when some members of the group would suddenly disappear because their views conflict with Simon's. (It would be more nuanced than that: a particular view would not lead to exclusion, more likely a broader stance such as which team a friend supports - so even when a sporting rival agrees with a point of view, they are never heard.) But, much worse than a person just disappearing, Simon would have no memory (at that point in the conversation) that the disappeared friend ever existed - so the friends' differing opinions are never spoken, or even asked for, at a later date. Without even knowing it, the debate has become poorer (and views more entrenched) because Simon now lives his life in a bubble which reflects his own views.
There are much wider societal issues that arise from the personalisation of information. The lack of opposing views can lead to some real problems and even, in extreme examples, hold back the advance of knowledge as a whole (not just for the individual - think of the flat earth vs round earth "debate" and how that would work today if conflicting views were never aired).
Is this how Google thinks the online world should work?
Personalisation effectively means that some unknown criterion is used to decide which results are promoted and which are demoted for a search. The problem with the approach is that in order for this to "work" at all, there must be a way to categorise searches - and the metrics used to categorise them will never be able to mimic the intricacies of how humans decide what interactions they make in relation to a given issue. If a system has to make compromises then there will be times (and currently that's very often) when sites and views are excluded because they match a broader (or different) metric than the one that really matters for the given search.
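To make the promote/demote mechanism concrete, here is a deliberately naive sketch in Python. It is not Google's actual algorithm - every name, score, and tag here is invented - but it shows how any score-multiplier scheme keyed to a crude "profile" metric will sink a highly relevant result simply because it carries the wrong broader label:

```python
# Hypothetical sketch (NOT Google's real ranking): a naive personalised
# re-ranker that boosts results whose topic tags overlap a stored user
# profile and demotes everything else.

def personalised_rank(results, profile, boost=2.0, demote=0.5):
    """results: list of (url, base_relevance, tags); profile: set of tags."""
    ranked = []
    for url, score, tags in results:
        # The compromise: one coarse tag-match stands in for real intent.
        factor = boost if tags & profile else demote
        ranked.append((url, score * factor))
    return sorted(ranked, key=lambda r: r[1], reverse=True)

profile = {"team_a"}  # the "broader stance" the system has inferred
results = [
    ("team-a-fan-blog.example", 1.0, {"team_a"}),
    ("rival-agrees.example",    1.4, {"team_b"}),  # more relevant, wrong "tribe"
]
print(personalised_rank(results, profile))
```

Note that the rival-tagged page started with the higher base relevance (1.4) yet finishes below the in-profile page (0.7 vs 2.0) - the coarse metric, not the content, decided the order.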
I would go as far as saying the idea of personalisation, as it is currently being used, is flawed and can never achieve its goals. Personalisation can be used to great effect in limited scenarios, if there are several uses of a word or phrase and a person is only interested in one (e.g. Soccer vs Gridiron - both are called "football") it makes sense to deliver results for the preferred one.
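The narrow case where personalisation does make sense can be sketched just as simply. Again this is purely illustrative - the sense table and preference store are invented - but it shows why disambiguating a term is a far smaller (and safer) job than filtering viewpoints:

```python
# Hypothetical sketch of the legitimate, limited use of personalisation:
# picking one sense of an ambiguous query term. SENSES and the stored
# preference are invented for illustration.

SENSES = {"football": {"soccer", "gridiron"}}

def disambiguate(query, user_preference):
    """Rewrite an ambiguous query using a stored sense preference."""
    senses = SENSES.get(query, set())
    if user_preference in senses:
        return f"{query} ({user_preference})"
    return query  # unambiguous term, or no usable preference: leave alone

print(disambiguate("football", "soccer"))   # stored preference applies
print(disambiguate("football", "cricket"))  # preference isn't a sense; no change
```

The key difference from the re-ranking case is that both outputs still lead to the full range of opinion about the chosen sense - nothing is hidden, only a homonym is resolved.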
The real problem that I see is the use of personalisation to segment views that are related to the same topic.
Are Google really saying that people are not smart enough to skip a result?
Google (or any search engine that is the default for any individual) has so much say in what people see nowadays that censorship (even on the basis of personalisation) should be seen as a very bad thing. In fact I'd say that the academic past of the founders should make them acutely aware that helping someone challenge their own beliefs is a noble pursuit; something that cash seems to have trumped.
I'm not talking about "site quality" here, that's a different issue. I'm lamenting the lack of responsibility shown by those who should know better.
We should probably all mourn the passing of Google as a beacon of hope; long gone are the days when the primary objective of Google was intrinsically linked with all that is best in human endeavour.
Sadly, I believe that Google did once want to change the world; now they want to own it.
I agree with most of what's said here about search personalization. I don't feel, though, that it is simply a Google problem.
There's a TED Talk video on the topic that's well worth seeing, which expands on some points made above. I think it's worth posting, not to sidetrack this discussion, but to amplify it...
Eli Pariser: Beware online "filter bubbles"
TED Talks, May 2011
The topic of personalization in search was also discussed, with the above video cited, on this thread in our MSN Search forum....
Bing Adds Facebook 'Friend Effect' With Collective IQ To Search
I've never been happy with any attempts I've ever seen at machine personalization of my preferences or personal taste. From time to time, I'm even dismayed at assumptions made by my friends. ;)
I'd very much like a switch to turn search personalization off, but, I suspect that increasingly the structure being built into the systems of the major players will make results without some kind of personalization impossible, at least on those platforms. What I'm not seeing anybody building is a system that will allow us to follow our own curiosity without presuppositions about our taste and preferences.
The first intrusion is autocomplete. Don't like it... I know what I'm looking for. If I need "fuzzy" I'll broaden the parameters...
Then again, I know what I'm doing and the vast majority of Joe and Sue Users have no clue.
|people read more than one news article about the same story; the facts are the same but the opinions differ |
I'm not sure that's true. It's one reason people read more than one article, but there are other reasons such as:
1. The treasure hunt: hoping to find more facts (why people cruise flea markets for junk they don't need).
2. The reality check: hoping to corroborate the facts in the first story (why journalists look for at least two sources).
3. The adrenaline rush: hoping to reexperience the thrill of learning those facts (this is why we read the same favorite novel multiple times).
Of course, this to some degree underlines your point: people are often looking for something in addition to a piece of knowledge.
|If Simon was using "personalisation" then there would be points in the conversation when some members of the group would suddenly disappear as their views conflict with Simon's |
And yet, this is what happens, just over a larger time scale. Popular as the Tea Party is in the US, I literally don't know a single person that I hang out with often who identifies with it. Why? Because over many years, my "personalization" algorithms have walled me off in some way from those people. It's not that they suddenly disappear from the room. It's that over time I've filtered and sorted my friends, and people who think differently than I do are simply never invited into the room at all. Not only that, the fractionation of media and the replacement of communities based on geography with communities based on interests ("town" versus "tribe") has greatly exacerbated this problem (and yes, I do think it's a problem). I've seen studies showing that under an arranged marriage regime, marriages are overwhelmingly within the same class (income, education). Once you give couples free choice without parental veto... nothing changes. That's "personalization" of the dating market. So I think personalized search, well-executed, would mirror the social world.
The "problem" that I alluded to is that when I am forced into a community because of geography, I am at least confronted with ideas I don't like. When my "information diet" begins to mirror my "social diet" I lose a tremendous diversity in the ideas I'm exposed to. Americans of a certain age know the difference between an entire nation watching Walter Cronkite for the news and a nation where half read Huffington Post and half listen to Glenn Beck. It's sad, but Google is following a broad social trend mirrored in other media.
So I completely agree with what "should" be. We might hope that big media (including Google) would want to confront us with challenging ideas, but that is not what big media has ever wanted. It has wanted to make money. So I would say that all the trends you mention, be they good or ill, were essentially a foregone conclusion the second Google took on shareholders.