The JavaScript function
function clk(n,el) {
  if (document.images) {
    (new Image()).src = "/url?sa=T&start=" + n + "&url=" + escape(el.href);
  }
  return true;
}
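A side note on that `escape()` call: it's a legacy function that leaves characters like `/` unencoded, which is why the tracked URL keeps its slashes visible. A minimal sketch comparing it with the modern `encodeURIComponent` (the href and rank values here are invented):

```javascript
// Sketch: assembling the beacon URL the way clk() does, with the legacy
// escape() call, versus the modern encodeURIComponent(). The href and
// rank are invented example values.
var href = "http://www.webmasterworld.com/forum3/?a=b&c=d";
var rank = 2;

// escape() leaves / . - _ + * @ unencoded, so the slashes survive verbatim
var legacy = "/url?sa=T&start=" + rank + "&url=" + escape(href);

// encodeURIComponent() encodes slashes too, which is safer inside a query string
var modern = "/url?sa=T&start=" + rank + "&url=" + encodeURIComponent(href);

console.log(legacy);
console.log(modern);
```

Either way the server gets the rank and the destination URL; `escape()` just happens to be what the script in the wild uses.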
How it works
The listings' URLs within the SERPs look like this:
<a href="http:*//www.webmasterworld.com/forum3/" onmousedown="return clk(2,this)">Google News</a>
So after you click on any listing, an invisible image gets loaded:
http:*//google.com/url?sa=T&start=2&url=http ://www.webmasterworld.com/forum3/
The web beacon (image) URL together with its referer allows full tracking.
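To make that concrete, here is a sketch (function and field names are mine, not Google's) of what a server could reconstruct from one such image request plus its Referer header: the rank clicked, the destination, and the search term that produced the click.

```javascript
// Sketch of what the server side could recover from one beacon request.
// The request path carries the rank and destination; the Referer header
// of the image request carries the search that produced the click.
function parseBeacon(path, referer) {
  // path looks like: /url?sa=T&start=2&url=http%3A//example.com/
  var query = path.split("?")[1];
  var fields = {};
  query.split("&").forEach(function (pair) {
    var i = pair.indexOf("=");
    fields[pair.slice(0, i)] = unescape(pair.slice(i + 1));
  });
  // the q= parameter of the referring results page is the search term
  var q = /[?&]q=([^&]*)/.exec(referer);
  return {
    rank: parseInt(fields.start, 10),
    clicked: fields.url,
    query: q ? unescape(q[1].replace(/\+/g, " ")) : null
  };
}

var hit = parseBeacon(
  "/url?sa=T&start=2&url=http%3A//www.webmasterworld.com/forum3/",
  "http://www.google.com/search?q=blue+widgets"
);
console.log(hit); // rank, clicked URL, and search term in one record
```

One GET request for a tiny image, and the log line already ties a query to a clicked result at a known position.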
The storm is not even a slow fart. Click tracking [google.com] is a well-known method, and Google's usage of the traditional tracking (redirects) has been honestly commented on by GoogleGuy many times. Whether you see it or whether it's hidden, it's not a privacy issue imho. (Hm, shouldn't have mentioned it's hidden.)
I was discussing today how tracking the visitors sent to a client's distributors seems hard if I want to present a straight hyperlink, but I guess this might be one way to track them, assuming JS is on.
Does it stay resident at all, or does it just log the initial movement away from the start site via the hyperlink?
I would not be able to use it if it stayed resident after that initial click.
Otherwise, the only easy ways I can see to track outgoing clicks involve taking the clicker an indirect way from the link to the site they want to arrive at, so that the click registers in our logs, and that almost certainly affects the link popularity we want to pass on to our distributors.
For example, try this:
<script type="text/javascript">
function clk(n,el) {
alert("/url?sa=T&start="+n+"&url="+escape(el.href));
return true;
}
</script>
<a href="some_page_here" onmousedown="clk(n, el)">text</a>
You'll get an alert when clicking that link, and since it's an onmousedown event, the alert fires even if the user doesn't actually follow the link (for example by pressing the button on the link and then dragging the mouse away before releasing). This way you can tell if a user almost clicked the link but changed their mind.
The nice thing about this method is that it degrades gracefully: in non-JavaScript browsers the link itself still works normally; only the tracking is skipped.
Once you leave Google, the tracking is done. Only the clicks on the result listings at Google are tracked.
>once they leave Google can they still be tracked
Surely not, since it doesn't use cookies. No magic, just a simple JS trick to track which result(s) have been clicked for which search (assuming they log the referer of the image request too).
I'm at home now and guess what: I don't see the script anymore. In the office I saw it for a few hours with every search I did. Either GoogleGuy woke up and removed it ;) or it's like the usual sporadic tests they did in the past, but more sophisticated, and is NOT (yet) the default. Maybe they decided to do their tracking tests hidden instead of using visible redirects, to avoid the multiple threads at WebmasterWorld that started whenever they ran their (visible) tracking tests in the past. :)
>tracking (redirects) has been honestly commented by GoogleGuy many times
Hey, *I* have no problem with anonymous click tracking, and it's covered in Google's privacy policy ('Google may choose to exhibit its search results in the form of a "URL redirecter".') But only a tiny fraction of users read the privacy policy or GoogleGuy's answers to questions on Webmasterworld. The tech press or the Slashdot crowd may take a dimmer view of things when they find out second-hand, after-the-fact as usual, that their clicks have been tracked all this time.
Almost every search engine has experimented with this (indeed some like DirectHit based their entire ranking algorithm on click data.) There's never been any PR fallout that I can remember. But Google's held to a higher standard since it's used by nearly everyone, so this could play out differently.
hehehe, no worries - my comment was just meant as a sign to avoid turning this topic into a privacy discussion.
>The tech press or the Slashdot crowd
... and don't forget the NYT [webmasterworld.com]. A Google rumour or rant is always good these days for a story that boosts copy sales. :)
For example: let's say people were clicking on results 3, 6, and 8 for a given query more than the others. That might tell you something about 3, 6, and 8, but also about at least 1, 2, 4, 5 and 7 as well, (perhaps the ones clicked did a better job answering the query and the others did less well). And if you track abandoned pages by finding users that come back after a short time and pick another result, you might also conclude that the one they picked didn't help them.
There's no simple conclusion to be made about a given page, but in the aggregate, people are expressing their opinions in how they respond to the site. And seeing how things change after a new update might give G a good way to evaluate the quality of that update. This is just one more way for people to anonymously and independently express their opinions about a site. I think it's great that G is using it!
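A toy sketch of that kind of aggregation (all the click data below is invented): count which position got clicked for one query and turn it into a click share per position. Positions clicked far more often than the ones above them suggest the higher listings aren't answering the query.

```javascript
// Toy aggregation along the lines described above: counts of which result
// position was clicked for one query, turned into a click share per
// position. All numbers are invented for illustration.
var clicks = [
  { rank: 3 }, { rank: 3 }, { rank: 6 }, { rank: 8 },
  { rank: 3 }, { rank: 6 }, { rank: 1 }
];

var byRank = {};
clicks.forEach(function (c) {
  byRank[c.rank] = (byRank[c.rank] || 0) + 1;
});

// if rank 3 draws far more clicks than ranks 1 and 2, that says something
// about all of them, not just rank 3
Object.keys(byRank).forEach(function (rank) {
  var share = byRank[rank] / clicks.length;
  console.log("rank " + rank + ": " + (share * 100).toFixed(1) + "% of clicks");
});
```

In real life you'd bucket by query and by update, but the principle is just counting.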
I SERIOUSLY doubt this. For one, Google just started doing this.
>It's hard to measure popularity with links nowadays so why not measure popularity by the actual number of visitors a site gets?
The problem with that is it doesn't tell you much about whether the person liked it. Basically, all the searcher has to go by before they click is the displayed title and the snippet. I may get a lot of clicks because I know how to write sensationalistic titles, and when people get to my site they think it stinks. Hype-filled page titles do NOT make a good page. I've got a site where the home page title is something like "Brandname pharmaceutical is dangerous, and may cause death!" With an actual "!" in the title. It so happens I am actually beating out the trademark holder for the #1 spot, and I do get a lot of traffic. However, with that sensational page title I'd likely do well even at the #5 spot on the SERP. I'd think things like link popularity are better ranking criteria than how well I can write page titles.
Having 500 links to a website doesn't tell you anything about how much the site is liked either. It simply means some meathead has spent a lot of time requesting and exchanging links.
I know of a website that has recently been in the news and has since shot up the SERPs without anything having been changed on it, possibly because the visitor numbers have increased from the recent exposure.
Google used to think that if a website has 500 links to it, it must be good. This is no longer the case due to link spam. Perhaps Google now rightly starts to think: if 500 people visit this site every minute, it must be better than the site with 500 links and only a few visitors.
I wish they would send the position information in the page along with the search term; it would be very nice for us webmasters ...
/BP
and it would save them a lot of their bandwidth -
and they could probably even charge us for it. Imagine you get this:
blue widget - no filter - 7 out of 2333445
blue widget - UK/Japanese - 1 out of 27
I would pay them 9.95 per year right away :)
>Sure not since it doesn't use cookies. No magic just a simple js trick to track which result(s) have been clicked for what search (assuming they track the referer of the image request too).
Whether they record the unique Google ID in the cookie is another question, but in fact, if a Google cookie already exists in that browser -- and unless you have cookies disabled you will have a Google cookie when this takes place -- the browser will indeed offer up the cookie even if all you are requesting is a one-pixel clear GIF.
I just tried it on Explorer 5.5. I gave myself a cookie from another site, I cleared out all caches except for that cookie, I put Explorer on a proxy, and then I requested a single GIF from that same cookie site. The proxy revealed that the cookie was offered to the site by Explorer.
A GIF request will also contain the referrer, as you mentioned earlier. If you have other information that can be snatched by Javascript, you can also pass this in either the PATH_INFO or the QUERY_STRING. All this can happen with just the browser's request for the clear GIF.
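Putting those pieces together, a sketch of everything a server could log from that single clear-GIF request (the `PREF=ID=` cookie shape follows what Google's cookie looked like at the time, but the values and function names here are made up):

```javascript
// Sketch: what a server can log from a single 1x1 GIF request.
// Cookie and Referer are standard HTTP headers the browser sends on its
// own; the cookie value and helper name are invented for illustration.
function logPixelHit(requestPath, headers) {
  var record = { path: requestPath };
  // unique browser ID from the cookie, if one was sent
  var m = /(?:^|;\s*)PREF=ID=([0-9a-f]+)/.exec(headers.cookie || "");
  record.browserId = m ? m[1] : null;
  // the page that embedded the pixel (the results page, query included)
  record.referer = headers.referer || null;
  return record;
}

var rec = logPixelHit("/url?sa=T&start=2&url=http%3A//example.com/", {
  cookie: "PREF=ID=1a2b3c4d5e6f7a8b; other=1",
  referer: "http://www.google.com/search?q=blue+widgets"
});
console.log(rec);
```

So the clear GIF on its own delivers the click details in the path, the search in the Referer, and, if a cookie exists, a stable browser ID, with no redirect needed.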
If users click on only two or three results, it should (hopefully) indicate that the results are better than if the user clicks on six or seven results, or even keeps scrolling down further.
The issue gets more complicated when the user decides to refine the search, but the basic concept is sound. However, unless you follow the activity of the user way beyond the initial page with an incredible AI program, it is virtually impossible to read greater significance into click-throughs than this.
So, if Google is simply attempting to gauge the quality of its results, it should be successful. If, however, they plan to incorporate click-through data directly into their algos, they'll simply be creating a new method by which spammers can flourish (by faking click-throughs, which is trivially simple).
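The "fewer clicks per search means better results" measure above can be sketched in a few lines (all session data invented):

```javascript
// Toy version of the quality measure above: the average number of
// distinct results clicked per search session. Session data is invented.
var sessions = [
  { query: "blue widgets", clickedRanks: [1] },
  { query: "blue widgets", clickedRanks: [2, 5, 7, 9] },
  { query: "blue widgets", clickedRanks: [1, 3] }
];

var total = sessions.reduce(function (sum, s) {
  return sum + s.clickedRanks.length;
}, 0);
var avg = total / sessions.length;

// a lower average (hopefully) means the first results answered the query
console.log("avg results clicked per search: " + avg.toFixed(2));
```

Which also shows why it's spammable: every "click" here is just one more image request, and nothing distinguishes a real one from a scripted one.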
Kaled.