Forum Moderators: Robert Charlton & goodroi
2. That algorithm change is run on a test set of data and if all looks good, human raters look at before and after results for a wide set of queries (a kind of manual A/B test). The human raters don’t know which is the before and which is the after. The raters report what percentage of queries got better (more relevant) and what percentage got worse (less relevant).
Quote from article referenced in netmeg's original post:
Near the end of the talk, someone asked if how much money Google will make is factored into decisions about changes to Google’s (unpaid search algorithms). Singhal was adamant: “no revenue measurement is included in our evaluation of a rankings change.”
Leosghost wrote:
If one does not know the starting point (one does not know which is the "before" and which is the "after"), one cannot possibly give a value judgment as to whether the result got better or worse. "Better" and "worse" are relative terms, which require that one knows what the results were in relation to what they previously were... in other words, a starting point, or simply, what they were before.
So either item #2 is untrue, or it is nonsense.
That algorithm change is run on a test set of data and if all looks good, human raters look at before and after results for a wide set of queries (a kind of manual A/B test). The human raters don’t know which is the before and which is the after. The raters report which set was better, and hand it back. The markers then declare whether the lab rats prefer Coke or New Coke... there are definitely better ways that they could deal with "did serps get better as a result of us doing X or Y or Z or any combination of tweaks"...
At the end of the day, he said, site owners need to take a hard look at what value their sites are providing. What is the additional value the visitor gets from that site beyond just a skeleton answer? Ultimately, it’s those sites that provide that something extra that Google wants to showcase on the first page of search results.
you can't choose the "better" without a "base" to be "better" than...
2. That algorithm change is run on a test set of data and if all looks good, human raters look at before and after results for a wide set of queries (a kind of manual A/B test). The human raters don’t know which is the before and which is the after. The raters report what percentage of queries got better (more relevant) and what percentage got worse (less relevant).
Human evaluators. Google makes use of evaluators in many countries and languages. These evaluators are carefully trained and are asked to evaluate the quality of search results in several different ways. We sometimes show evaluators whole result sets by themselves or "side by side" with alternatives; in other cases, we show evaluators a single result at a time for a query and ask them to rate its quality along various dimensions.
3. This process gets looped several times as the algorithm is tweaked to better serve results for the queries in the "worse" set.
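The blinded before/after loop described in items 2–3 could be tallied roughly as sketched below. This is a toy illustration, not Google's actual pipeline; all query data, function names, and the verdict format are hypothetical. The key point is that the presentation order is shuffled per query so raters state a blind preference, and only the analysts unblind afterwards to compute what percentage got better.

```python
import random

rng = random.Random(42)  # fixed seed so the shuffling is reproducible

# Hypothetical test queries with result pages from the old ("before")
# and tweaked ("after") ranking algorithms.
test_queries = {
    "cheap flights": {"before": ["A", "B", "C"], "after": ["C", "A", "D"]},
    "python tutorial": {"before": ["D", "E"], "after": ["D", "F"]},
}

def blind(pages):
    """Return the two result sets in random order, plus the hidden key
    saying which slot holds which version. Raters see only the sets;
    the key stays with the analysts."""
    order = ["before", "after"]
    rng.shuffle(order)
    return [pages[version] for version in order], order

def tally(verdicts):
    """verdicts: list of (hidden_order, rater_choice), where rater_choice
    is 0 or 1 for the slot the rater preferred, or None for a tie.
    Returns the fraction of judged (non-tie) queries where the "after"
    results were preferred, i.e. the queries that got better."""
    wins = ties = 0
    for order, choice in verdicts:
        if choice is None:
            ties += 1
        elif order[choice] == "after":
            wins += 1
    judged = len(verdicts) - ties
    return wins / judged if judged else 0.0

# Example: raters judged two blinded queries; analysts unblind afterwards.
verdicts = [(["before", "after"], 1), (["after", "before"], 0)]
print(f"{tally(verdicts):.0%} of judged queries improved")  # prints "100%"
```

This is also why the "no baseline" objection cuts only so far: the rater never needs to know which set is the before, because the hidden order key lets the analysts recover that after the fact.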
Singhal was adamant: “no revenue measurement is included in our evaluation of a rankings change.” Listening to him explain how excited he gets about search improvements and how changes are evaluated, you realize there’s no spin here. He’s absolutely telling the truth. And he would know. Chris Sherman asked if anyone at Google really understands how the whole thing works and he replied that while no one knows how everything works (all of unpaid search, AdWords, Android, etc.), he has a pretty good idea of how all of unpaid search works. Not many can make that claim.
Leosghost wrote:
@rlange..I was replying to the contents of your post prior to your editing of it [...]
One amongst many flaws in Google's approach is that Google has no way to be sure (other than asking the "lab rats") whether the raters have ever run a search for the terms whose results they are given to assess and assign a preference to.