I've heard about this. Another SE used to do this as well. How would Google be able to tell how long a person stayed on a site after clicking through? Seems they'd have to somehow frame the site, or monitor if/when they return to Google.
I can see if someone clicks a listing, then clicks back to Google right away, but if they remain on that site, and click through the various pages, how would Google be able to monitor that without framing the site?
I have seen results that would anecdotally indicate this may be true.
For one of my most-watched search terms, I have seen my site climb slowly from 11th to 4th over the last month. I don't think my number of backlinks has changed enough to warrant such a large jump (I do have more than a month ago, but still not as many as some sites I have passed in the results).
Interesting. I have done nothing since Allegra, and I have moved from 110 to 81 to 63 over the last two Saturdays for my main keyword.
At the present rate, I should be in the top 10 in 5 or 6 weeks' time (where I was before Allegra).
People do spend a considerable time on my site, as our site is quite an authority for our market.
Traffic has nothing to do with how you rank. That's like saying that every time I go on a picnic it rains, so going on picnics causes it to rain.
Whether traffic has anything to do with ranking or not, it certainly isn't the same as your picnic / rain scenario.
If traffic had anything to do with it, then we would all be spending our time:
1) Clicking on our own results (we don't)
2) Using scripts that click on our own results (there are no such scripts available)
Search engines aren't stupid - if it did play a role, then imagine how many computing resources would get used up at the SEs by (1) and (2)
SEs may track click-throughs (Google does periodically), but that has nothing to do with rankings; it is for statistical testing of the relevancy of the results.
[edited by: cbpayne at 10:35 pm (utc) on Feb. 19, 2005]
Saying that you have proof has everything to do with my example. Saying your site was not busy and did not rank well, and then your site got more traffic and started doing well in the search engine, is the same kind of claim.
"SE's may track click thrus (Google does periodically), but that has nothing to do with rankings and is to do with statistical testing of the relevancy of the results"
Well said ... and, if true, proves the point unless my logic is flawed.
What you are saying is that click-throughs are tracked to determine the quality of SERPs - there would be no need to do this if you didn't want to tweak SERPs based on the results.
So, by logic, yes, click through tracking is used to determine ranking of SERPs.
Assuming, Google still wants to provide the most relevant SERPs possible ;-)
In any case, click-through rate does influence ranking of Google AdWords entries. Higher CT indicates higher relevancy (and more income for Google) there.
"Higher CT indicates higher relevancy (and more income for google) there."
Only for Adwords. CTR has no effect on the organic SERPS.
ogletree, it is your logic that is flawed.
It is perfectly possible that google could include traffic as part of its algo, although I'm not saying that it does or should. Alexa tries to rank sites in terms of popularity and there's no reason why this could not be factored in as one of hundreds of ingredients for determining "authority" sites for example. You don't know for a fact that they don't unless you are privy to information most of us are not.
In contrast, if you go on a picnic and it rains every time I think we can all agree that it is merely coincidence.
If G tracked clicking on search results, it would be easy to see it - instead of pure HTML links on search results, there would be tracked links, like [track.google.com...]
It happens on G search results, but only from time to time. There are many SEs that always have this kind of link, but not G.
So it cannot track when you click (unless via spyware in the G Toolbar, if there were spyware there ;). And G doesn't frame result pages; framing would require the same kind of link, to call a framing script before the actual URL.
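The redirect-style tracking described above can be sketched in a few lines - note that the tracker host and parameter names below are made up for illustration, not any actual search engine's scheme:

```javascript
// A plain result link: nothing for the SE to observe once you leave the page.
function plainLink(href) {
  return `<a href="${href}">result</a>`;
}

// A redirect-style tracked link: every click bounces through the tracker,
// which logs the click and then issues an HTTP redirect to the real URL.
// (Hypothetical host "track.example.com" for illustration.)
function trackedLink(href, position) {
  const redirect = 'http://track.example.com/click?pos=' + position +
    '&u=' + encodeURIComponent(href);
  return `<a href="${redirect}">result</a>`;
}

console.log(plainLink('http://example.com/'));
console.log(trackedLink('http://example.com/', 3));
```

This is why the poster says tracking would be "easy to see": the rewritten href is visible in the page source.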
And it's obvious that if results ranked higher for higher traffic, we would have only spam sites at the top of the SERPs, because there is absolutely no problem with connecting via an anonymous proxy and sending fake clicks, pretending to be a legitimate browser, even one with the G Toolbar. Surely spammers would launch such a system if it gave any advantage in the SERPs; it's much easier than spamming blogs, anyway.
If indeed it rained every time I went on a picnic for 1 year, and the same thing happened to a million other people and could be replicated at will, then yes, we would agree that ants cause rain. That is the same thing that would have to happen for me to agree that traffic is a factor in SE results. My example fits what was said: one person, or a few, made an assumption on a very small amount of data.
|If G tracked clicking on search results, it would be easy to see it - instead of pure HTML links on search results, there would be tracked links, like [track.google.com...] |
(...) So it cannot track when you click
Then what do you call this (code taken from google search results page):
Search Result Links: <a href=... onmousedown="return clk(this,'res',7)">
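The onmousedown approach in that snippet can be sketched as follows - the internals of clk() and the logging endpoint shown here are assumptions for illustration, since Google's actual implementation is not public. The point is that the href stays a plain URL in the page source; the logging happens as a side effect at mousedown time:

```javascript
// Sketch of onmousedown-based click logging (illustrative only).
// The visible href is untouched; the handler fires a logging request
// just before the browser follows the link.

// Build the hypothetical logging URL from the click's context.
function buildClickBeacon(href, resultType, position) {
  const params = new URLSearchParams({
    url: href,             // the result the user clicked
    source: resultType,    // e.g. 'res', as in clk(this, 'res', 7)
    pos: String(position), // rank of the result on the page
  });
  return '/url?' + params.toString();
}

// In a browser, clk() would fire the beacon (e.g. via new Image().src)
// and return true so the normal navigation proceeds.
function clk(href, resultType, position) {
  const beacon = buildClickBeacon(href, resultType, position);
  // new Image().src = beacon;  // fire-and-forget logging (browser only)
  return { beacon, allowNavigation: true };
}

console.log(clk('http://example.com/', 'res', 7).beacon);
```

This explains why "view source" showing plain hrefs does not prove the absence of click tracking.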
There are so many problems with using traffic patterns for ranking, that it is beyond ridiculous.
Depending on what sort of information you are looking for, your actions might be totally different.
If I am price shopping, I might check every one of the top 30 sites. If the snippet tells me the price, I may not bother clicking on that link. On the first couple of links I click, I will probably spend more time on the site, reading about the item. And the last page I open is almost certainly not the page that I will buy from; I would have the "winning page" open in another tab.
When looking for information, you can have similar experiences. I was once looking for the altitude of Mt. Washington. The official site came up at the top (pun intended). If it listed the altitude, I could not find it. I looked for a good 5 minutes.
The next link I clicked on had the answer in H1 at the top of the page. I was off the page in seconds.
There very well could have been a result where I could have found my answer in the snippet and never clicked the link.
On the other hand, it makes a lot of sense to use that information, combined with human review, to do QA on their results.
If a link is never clicked, it is worth looking at the result and the site.
Fact is Google considers CTR a good measure of relevance, which is why they use it to prioritize AdWords ads. Every one of your objections could also be used in the context of AdWords, but... they use it anyhow.
They have the brainpower to come up with a more sophisticated metric based on traffic patterns, and it would not be unbelievable if Google were already using it.
A click is an obvious sign of user interest.
I can see how they might use click through rate but how could they measure how long a visitor stays on the site. That seems to be stretching it a bit.
Google does not measure who is clicking on what links except in rare circumstances, so that almost certainly does not enter into their algo at all.
Besides why would anyone assume that spending more time on a website means that it's intrinsically better or should be higher in the search rankings? Maybe it's just slow, or inefficient to navigate, or is a community/forum site of some kind. In any case, I have never observed any such effect and IMO there is none.
|Google does not measure who is clicking |
How do you know for sure?
The question is not whether Google is measuring clicks (they are). The question is what they are doing with that information.
|It is perfectly possible that google could include traffic as part of its algo |
The higher a page ranks, the more clicks it gets, not the other way around. A page gets clicked mainly because it is ranked highly. It's hard to tell what's inside until you click. I can't see how this data could be used to determine relevancy.
Furthermore, all pages are not created equal, so the length of time one spends couldn't possibly be a factor. Some pages have a mountain of content while others are short and sweet. What's appropriate really depends on a page's purpose. I would think people spend all of 5 seconds on a page to find out the correct time, but spend hours reading the articles on Time magazine's website, while others spend about ten minutes listening to a cut and reading the bio of Morris Day & the Time.
Too many variables for even mighty Google to digest.
It's possible to know for sure because you can do a "view source" and see that the links in the search results are plain, untrackable HREFs.
I bet google knows exactly what the statistical distribution is for the average #1, #2, #3, .. #1000 result. What they would be looking for would be a #3 that is getting a higher CTR than it statistically should (or lower than it should).
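That kind of positional-baseline comparison could be sketched like this - the baseline CTR numbers and the tolerance are invented for illustration, and nothing here reflects real Google data:

```javascript
// Sketch of position-based CTR anomaly detection (hypothetical numbers).
// Assumed baseline click-through rates for ranks 1-5, e.g. from aggregate logs.
const baselineCtr = [0.30, 0.15, 0.10, 0.07, 0.05];

// Flag results whose observed CTR deviates from the positional baseline
// by more than `tolerance` (a relative factor, e.g. 0.5 = +/-50%).
function flagAnomalies(observedCtr, tolerance = 0.5) {
  return observedCtr
    .map((ctr, i) => ({
      position: i + 1,
      observed: ctr,
      expected: baselineCtr[i],
      ratio: ctr / baselineCtr[i],
    }))
    .filter(r => Math.abs(r.ratio - 1) > tolerance);
}

// Position 3 gets twice the clicks it statistically "should" -
// exactly the kind of signal the poster describes.
console.log(flagAnomalies([0.29, 0.16, 0.20, 0.06, 0.05]));
```

A signal like this sidesteps the objection that rank drives clicks: it compares a result against other results at the same rank, not against the whole page.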
jomax: not always. I have every now and then seen Google display URLs that are not just ordinary hrefs. It's certainly an ordinary href most of the time, though.
"If indeed it rained every time I went on a picnic for 1 year and the same thing happend to a million other people and could be replacated at will then yes we would agree that ants cause rain."
That's just it, ogletree. We wouldn't all agree that it was ants that cause rain. We would look at all the possible factors, ants included. Indeed, our knowledge of the world would tell us it wasn't ants.
Every argument presented to say how "ridiculous" it would be to include website popularity in an algo could be applied to links, keywords, metatags, anything. And that's why spammers often DO dominate SERPS.
Do I think popularity is a big part of the algo? Not at all. Can I state as fact that it isn't? No. Is it possible that it could play a part, at least in determining authority sites for example? Yes it could.
Google sometimes (not often) uses referral links in the listings to link to the sites. That helps reduce the possibility of deliberately influencing the results.
Now a question for all those who don't support this:
If Google's ranking really has nothing to do with it, what is the use of those referrals, and why does Google use them at all?
"i have every now and then seen google display URLs that are not just ordinary href's. it's certainly an ordinary href most of the time though."
I have never seen Google use any type of tracking URL when Google cookies are blocked. It looks like Google will sometimes track user behavior, not generic click-throughs.
Yahoo and MSN however do appear to track everything all the time.
G sometimes shows a referrer URL in the SERPS. When asked, both on and off the record, they have always said that this is for their Quality Assurance team.
Want to find out if your hypothesis is valid? Put up a normal page, get it indexed in the SERPs, and then start clicking through to it - waiting - then going back - then clicking through again...
They may also use toolbar data to see how long someone stays on a site. I think with enough information (clicks, toolbar data, page 2...100 views, conversions from AdWords) they will be able to make some informed decisions about how to use that data in the ranking.
|When asked, both on and off the record, they have always said that this is for their Quality Assurance team. |
There is nothing stopping "Quality Assurance" being an automated part of the ranking process.