Theory 1: Google is using clickthrough data gathered by its recently added clickthrough tracker.
On an actual search for 'widgets', the second organic result is widgets.com, and if you look at the source code the link reads: <a href=http://www.widgets.com/ onmousedown="return clk(2,this)">www.widgets.com/</a>
and then in the head tags it has this code:
<script>
<!--
function ss(w){window.status=w;return true;}
function cs(){window.status='';}
function clk(n,el) {
  if (document.images) {
    (new Image()).src = "/url?sa=T&start=" + n + "&url=" + escape(el.href);
  }
  return true;
}
//-->
</script>

That script gives the widgets.com link an invisible tracking hit rather than a visible redirect: on mousedown, clk() fires a background image request to Google's /url endpoint, recording the result's position and destination URL, while the browser still takes the user straight to widgets.com.
The clickthrough data is then uploaded and applied to the results on a randomly timed schedule.
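To see the pattern in isolation, here is a minimal sketch of the same image-beacon technique. The logClick name and the /log endpoint are my own placeholders, not anything from Google's source:

<script>
function logClick(position, anchor) {
  // Requesting an image URL makes the browser fire a background hit to the
  // logging endpoint before it follows the link, so no redirect is needed.
  if (document.images) {
    (new Image()).src = "/log?pos=" + position + "&url=" + encodeURIComponent(anchor.href);
  }
  return true; // returning true lets the click proceed to the real href
}
</script>

Wired up exactly like Google's version: <a href="http://www.widgets.com/" onmousedown="return logClick(2, this)">. The server behind /log only has to record the query string and return a 1x1 image.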
Theory 2: Google is running several algorithms, each tweaked in a different way, serving them at random throughout the month and tracking the resulting data. This gives Google the opportunity to examine several separate tweaks at once rather than waiting between updates.
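Purely as an illustration of how that kind of bucket testing could work (the variant names and the hashing here are my own invention, not anything Google has published):

// Assign each browser to one algorithm variant so several tweaks can be
// compared side by side on live traffic. Purely illustrative.
var variants = ["baseline", "tweakA", "tweakB", "tweakC"];

function pickVariant(cookieId) {
  // Simple deterministic hash so the same cookie always lands in the same bucket.
  var h = 0;
  for (var i = 0; i < cookieId.length; i++) {
    h = (h * 31 + cookieId.charCodeAt(i)) % 100003;
  }
  return variants[h % variants.length];
}
// e.g. pickVariant("PREF=af1b2c") might return "tweakB"; any clickthrough
// data logged for that session would then be credited to the tweakB algorithm.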
Theory 3: the theory that first came around during the Florida update, that Freshbot has been given more authority. With crawling driven by PR and inbound links, the data Freshbot collects is added in and used to rank the fresh pages.
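Nobody outside Google knows the actual weighting, but as a toy illustration of the idea that Freshbot data adds a boost on top of the PR-driven base score (every number here is invented):

function rankScore(pageRankScore, daysSinceFreshCrawl) {
  // Hypothetical freshness bonus that decays as the Freshbot crawl ages.
  var freshness = Math.exp(-daysSinceFreshCrawl / 30);
  return pageRankScore * (1 + 0.5 * freshness);
}
// A page Freshbot crawled yesterday gets nearly a 1.5x boost; one it has
// not seen for months falls back to roughly its plain PR-based score.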
I personally would go with Theory 2, as my research (across 52 sites and over 900,000 pages) shows fluctuations in results regardless of the keywords' estimated traffic, and even where the results are otherwise stale. Most of the sites I have access to are commercial, target specific keywords, and are all naturally optimized, so the results I am basing this on are mainly commercial; all of the top 300 or so sites for the keywords researched are commercial and optimized. Also, the clickthrough tracker only recently became a permanent feature on Google.com, while these fluctuations have been going on for some time.
Comments? I'd especially like to hear from people running non-commercial, non-optimized sites who can research their results. Also, does anyone know if there is any relevance to the fact that site:example.com results fluctuate in step with the regular fluctuations?
Google has been randomly click-tracking since 1999:
[google.com...]
For example, the thought of my wife or my wife's mom downloading and installing the toolbar... phew!
I suspect 75% of the net neophytes out there do not even know they are using a web browser.
My guess is that it is a deeply biased data source and does not generalize very well.
And I like the theory that they are using clickthrough tracking. Plus, if you click back and then click another result within a measured amount of time, Google can use that gap to gauge how dissatisfied you were with what you found.
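If that is right, the raw signal is simply the time between your first click and your return click. A toy version of the idea, with thresholds that are entirely my own guesses:

function dissatisfactionSignal(firstClickMs, returnClickMs) {
  var dwellSeconds = (returnClickMs - firstClickMs) / 1000;
  if (dwellSeconds < 30) return "strong"; // bounced back almost immediately
  if (dwellSeconds < 120) return "weak";  // skimmed the page, then came back
  return "none";                          // stayed long enough to be satisfied
}
// Both timestamps could come from clk()-style beacons like the one quoted
// above, since every click on the results page fires one.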
Toolbar and SERP tracking are useful for checking results, but useless for generating them.
As for tracking in the SERPs, that was thoroughly debunked in the other thread. In fact, I just did a search on hotels in my local big city, and they are still not tracking me.
Your Theory 2 has been true for as long as I have had a website. They have always made minor mid-month tweaks to the algo.