TheOptimizationIdiot - 4:57 am on Mar 18, 2013 (gmt 0)
Ahhhh... Tabbed browsing and I might get to argue with Leosghost LOL
If I had 15 years to develop an algo by myself, I think I could account for that, even without teams of programmers. How?
Through personalization (of results, of course, which means I must track users and their behavior). If I know user N1524389 usually clicks on an average of 5.3 results within N seconds, and for some reason on query Y they only clicked on 4 results within N seconds, I can determine there's a variance.
Then I can look for "confirmation" from the "normal users" who do not open multiple tabs (they click on N results within a relatively low number of seconds, consistently, over a large number of result sets). From that, it's likely I could determine which of the unclicked results user N1524389 deemed "least important" within the result set. Say they usually click 1,2,3,5,6 or 1,2,3,5,7 or 1,2,3,4 on average, but this time they clicked 1,3,4,5 and skipped 2, which is a usual click for them. Based on their average result-number clicked, click timing, and number of clicks made within a result set, I could likely determine what the "variant non-click" was.
I could especially do so by comparing those clicks to the clicks of "normal users", who would show a different click pattern or time-between-clicks pattern.
I could even throw user N1524389 out of the "lack of click scoring" and just use their clicks on the 4 results they visited as a positive if I needed to.
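The variance-detection idea above can be sketched in a few lines. This is a hypothetical illustration, not any search engine's actual code: `variant_non_clicks` and the 50% "usual click" threshold are my own inventions, and the example data mirrors the 1,2,3,5,6 / 1,2,3,5,7 / 1,2,3,4 click sets from the post.

```python
# Hypothetical sketch: find the "variant non-click" by comparing a user's
# current click positions against their historical click pattern.
from collections import Counter

def variant_non_clicks(history, current_clicks):
    """history: list of sets of result positions clicked in past sessions.
    current_clicks: set of positions clicked this session.
    Returns positions the user usually clicks but skipped this time."""
    counts = Counter(pos for session in history for pos in session)
    n_sessions = len(history)
    # A position clicked in at least half of past sessions is a "usual" click
    # (the 0.5 threshold is an arbitrary assumption for illustration).
    usual = {pos for pos, c in counts.items() if c / n_sessions >= 0.5}
    return sorted(usual - set(current_clicks))

# Past sessions: 1,2,3,5,6 then 1,2,3,5,7 then 1,2,3,4
history = [{1, 2, 3, 5, 6}, {1, 2, 3, 5, 7}, {1, 2, 3, 4}]
# This session they clicked 1,3,4,5 -- position 2 is the variant non-click
print(variant_non_clicks(history, {1, 3, 4, 5}))  # [2]
```

A real system would also weight by click timing and total click count per result set, as described above, rather than positions alone.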
And of course, I would browser sniff and associate by IP/location, so clearing those cookies without also changing IP, browser, and possibly even location (via IP) would not do any good: I could associate a user with a query based on other variables, even without the cookies that make it easier and more reliable.
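The cookie-free association described here amounts to fingerprinting on stable request attributes. A minimal sketch, assuming IP, user-agent string, and accept-language header as the combined key (a real fingerprint would use many more signals):

```python
# Hypothetical sketch: associate a user to a session without cookies by
# hashing stable request attributes into a fingerprint key.
import hashlib

def fingerprint(ip, user_agent, accept_lang):
    """Derive a stable pseudonymous key from request attributes."""
    raw = "|".join((ip, user_agent, accept_lang))
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

# Clearing cookies changes nothing here: the same IP + browser
# combination keeps producing the same key.
a = fingerprint("203.0.113.7", "Mozilla/5.0 (Windows NT 6.1)", "en-US")
b = fingerprint("203.0.113.7", "Mozilla/5.0 (Windows NT 6.1)", "en-US")
assert a == b
```

Which is exactly why changing cookies alone fails: the user would need to change every input to the key at once.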