Forum Moderators: Robert Charlton & goodroi
First, I want to know: is it true? If yes, how does G distinguish between a user coming for information rather than to purchase a product?
I can see if someone clicks a listing, then clicks back to Google right away, but if they remain on that site, and click through the various pages, how would Google be able to monitor that without framing the site?
For one of my most watched search terms I have seen my site climb slowly from 11th to 4th over the last month. I don't think my number of backlinks has changed enough to warrant such a large jump (I do have more than a month ago, but still not as many as some sites I have passed in the results).
At the present rate, I should be back in the top 10 in 5 or 6 weeks' time (where I was before Allegra).
People do spend a considerable time on my site, as our site is quite an authority for our market.
Search engines aren't stupid - if it did play a role, then imagine how much computing resource the SEs would burn on (1) and (2).
SEs may track click-throughs (Google does periodically), but that has nothing to do with rankings; it has to do with statistical testing of the relevancy of the results.
[edited by: cbpayne at 10:35 pm (utc) on Feb. 19, 2005]
Well said ... and, if true, proves the point unless my logic is flawed.
What you are saying is that click-throughs are tracked to determine the quality of the SERPs - there would be no need to do this if you didn't want to tweak the SERPs based on the results.
So, by logic, yes, click through tracking is used to determine ranking of SERPs.
Assuming, Google still wants to provide the most relevant SERPs possible ;-)
It is perfectly possible that google could include traffic as part of its algo, although I'm not saying that it does or should. Alexa tries to rank sites in terms of popularity and there's no reason why this could not be factored in as one of hundreds of ingredients for determining "authority" sites for example. You don't know for a fact that they don't unless you are privy to information most of us are not.
In contrast, if you go on a picnic and it rains every time I think we can all agree that it is merely coincidence.
So it cannot track when you click (unless via spyware in the G Toolbar, if there were spyware there ;). And G doesn't frame result pages; that would require using a different kind of link, one that calls a framing script before the actual URL.
And it's obvious that if results ranked higher for higher traffic, we would have only spam sites at the top of the SERPs, because there is absolutely no problem connecting via an anonymous proxy and sending fake clicks while pretending to be a legitimate browser, even one with the G Toolbar. Surely spammers would launch such a system if it gave any advantage in the SERPs; it's much easier than spamming blogs, anyway.
If G tracked clicking on search results, it would be easy to see it - instead of pure HTML links on search results, there would be tracked links, like [track.google.com...]
(...) So it cannot track when you click
Then what do you call this (code taken from google search results page):
Javascript:

function clk(el, ct, cd) {
  if (document.images) {
    (new Image()).src = "/url?sa=T&ct=" + escape(ct) + "&cd=" + escape(cd) +
      "&url=" + escape(el.href) + "&ei=s9wYQsKkHMnwwAGisbDNCA";
  }
  return true;
}

Search result links:

<a href=... onmousedown="return clk(this,'res',7)">
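Deobfuscated, that handler fires an image-request beacon back to Google carrying the click type, the result position, and the destination URL, while the visible href still points straight at the result. A minimal Node-runnable sketch of the same URL construction (parameter names and the ei token are taken from the snippet above; the example destination is hypothetical):

```javascript
// Sketch of the beacon URL that clk() builds: ct is the click type,
// cd is the result position, url is the clicked destination.
// Requesting this as an image lets the click be logged without
// rewriting the link itself.
function buildBeaconUrl(ct, cd, href) {
  return "/url?sa=T" +
    "&ct=" + encodeURIComponent(ct) +
    "&cd=" + encodeURIComponent(cd) +
    "&url=" + encodeURIComponent(href) +
    "&ei=s9wYQsKkHMnwwAGisbDNCA"; // session token copied from the snippet above
}

console.log(buildBeaconUrl("res", 7, "http://example.com/page"));
// → /url?sa=T&ct=res&cd=7&url=http%3A%2F%2Fexample.com%2Fpage&ei=s9wYQsKkHMnwwAGisbDNCA
```

Because the beacon fires from onmousedown, it works even when the browser navigates away immediately, which is why the links still look like "pure HTML links" at first glance.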
Depending on what sort of information you are looking for, your actions might be totally different.
If I am price shopping, I might check every one of the top 30 sites. If the snippet tells me the price, I may not bother clicking on that link. On the first couple of links I click, I will probably spend more time on the site, reading about the item. And the last page I open is almost certainly not the page that I will buy from; I would have the "winning page" open in another tab.
When looking for information, you can have similar experiences. I was looking for the altitude of Mt. Washington one time. The official site came up at the top (pun intended). If it listed the altitude, I could not find it. I looked for a good 5 minutes.
The next link I clicked on had the answer in H1 at the top of the page. I was off the page in seconds.
There very well could have been a result where I could have found my answer in the snippet and never clicked the link.
On the other hand, it makes a lot of sense to use that information, combined with human review, to do QA on their results.
If a link is never clicked, it is worth looking at the result and the site.
They have the brainpower to come up with a more sophisticated metric based on traffic patterns, and it would not be unbelievable that Google is already using it.
A click is an obvious sign of user interest.
Besides why would anyone assume that spending more time on a website means that it's intrinsically better or should be higher in the search rankings? Maybe it's just slow, or inefficient to navigate, or is a community/forum site of some kind. In any case, I have never observed any such effect and IMO there is none.
It is perfectly possible that google could include traffic as part of its algo
The higher a page ranks, the more clicks it gets, not the other way around. A page gets clicked mainly because it is ranked highly. It's hard to tell what's inside until you click. I can't see how this data could be used to determine relevancy.
Furthermore, all pages are not created equal, so the length of time one spends couldn't possibly be a factor. Some pages have a mountain of content while others are short and sweet. What's appropriate really depends on the page's purpose. I would think people spend all of 5 seconds on a page to find out the correct time, but spend hours reading the articles on Time magazine's website, while others spend about ten minutes listening to a cut and reading the bio of Morris Day & the Time.
Too many variables for even mighty Google to digest.
That's just it ogletree. We wouldn't all agree that it was ants that cause rain. We would look at all the possible factors, ants included. Indeed our knowledge of the world would tell us it wasn't ants.
Every argument presented to say how "ridiculous" it would be to include website popularity in an algo could be applied to links, keywords, metatags, anything. And that's why spammers often DO dominate SERPS.
Do I think popularity is a big part of the algo? Not at all. Can I state as fact that it isn't? No. Is it possible that it could play a part, at least in determining authority sites for example? Yes it could.
Now a question for all those who don't support this.
If Google's ranking really has nothing to do with it, what is the use of those referral/tracking URLs, and why does Google use them so often?
I have never seen google use any type of tracking urls when google cookies are blocked. What it looks like is that sometimes google will track user behavior, not generic click throughs.
Yahoo and MSN however do appear to track everything all the time.
Want to find out if your hypothesis is valid? Put up a normal page, get it indexed in the SERPs, and then start clicking through to it - waiting - then going back - then clicking through again...
They may also use toolbar data to see how long someone stays on a site. I think with enough information ( clicks, toolbar data, page2...100 views, conversions from adwords) they will be able to make some informed decisions about how to use that data in the ranking.
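Purely as a toy illustration of that idea: the speculation above amounts to combining several behavioral signals into one score. The signal names, weights, and normalization below are all invented for the sketch; nothing here reflects how Google actually weighs (or whether it weighs) such data.

```javascript
// Toy weighted blend of the behavioral signals mentioned above
// (clicks, toolbar dwell time, deep-page views, conversions).
// Weights and the saturating normalization are arbitrary choices
// made up for this illustration.
function behaviorScore(signals) {
  const weights = { clickRate: 0.4, dwellSeconds: 0.3, pagesViewed: 0.2, conversions: 0.1 };
  let score = 0;
  for (const [name, weight] of Object.entries(weights)) {
    const raw = signals[name] || 0;
    // Squash each raw signal into 0..1 so no single signal dominates.
    score += weight * (raw / (raw + 1));
  }
  return score;
}

// A page with sustained engagement outscores one that is clicked but abandoned.
const engaged = behaviorScore({ clickRate: 3, dwellSeconds: 120, pagesViewed: 5, conversions: 1 });
const bounced = behaviorScore({ clickRate: 3, dwellSeconds: 0.05, pagesViewed: 1, conversions: 0 });
console.log(engaged > bounced); // true
```

The point of the sketch is only that an "informed decision" from such data is mechanically easy once the signals are collected; the hard part is whether the signals mean what you hope, as the earlier posts about dwell time argue.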
When asked, both on and off the record, they have always said that this is for their Quality Assurance team.
There is nothing stopping "Quality Assurance" being an automated part of the ranking process.