Forum Moderators: open
Just looking for rules-of-thumb as to click rates from results pages (regardless of industry, relevance, etc.) if they even exist. Thanks for any info or insight!
No one is 100% sure, but everyone can guess at what might be going on. As far as I can tell, there is obviously some weight given to clicks through the SERPs:
say you are on page one of the SERPs, a click there will carry less weight than a click on a result from page two. How long a user stays on the page also matters, since that helps gauge how useful the page was.
But unfortunately nobody can measure this; the sure-fire levers are still the title, keyword density, anchor text, etc.
Aji
I remember reading that the first position gets three times as many clicks as the second (I don't know how accurate that is). But if it IS the case, imagine the difference in click volume when you go from page one to page two.
Because my daily budget was high, my ad was displayed with every search.
My ad averaged 2100 impressions/day (i.e., searches) on the first page and 1300 impressions/day while on the second page.
So, during this particular test period and for *my* keywords, two thirds of the searchers (for whatever reason) did not go beyond the first page.
I offer this not as proof, but only as evidence for the common marketing claim that "60-70% of searchers find what they are looking for on the first page of results".
Bompa
I did ask my computer, and he came back to me after 19 minutes and said the answer would be forty-two.
Seriously: it's an interesting question. I ran a quick-and-dirty analysis on a log file of mine. I parsed each referrer against www.google.com, looking for the start= and num= parameters. When there is no num parameter, I assume a SERP size of 10. I assume (falsely) that the referring link sits in the middle of the page, so a click from the first page (results 1-10) is tallied under the bucket labeled 10 below. That is probably not exactly true. Also, my site is non-competitive, with lots of 'left-over' traffic, which may skew the data.
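The position estimate described above can be sketched as follows. This is a hypothetical Python helper for illustration, not the original Perl; it just shows how the start= and num= parameters map to a mid-page position estimate:

```python
# Estimate a click's SERP position from its Google referrer URL,
# using the assumptions stated above: page size defaults to 10,
# and the clicked link is assumed to sit in the middle of the page.
from urllib.parse import urlparse, parse_qs

def estimated_position(referrer: str) -> int:
    qs = parse_qs(urlparse(referrer).query)
    num = int(qs.get("num", ["10"])[0])     # SERP size, default 10
    start = int(qs.get("start", ["0"])[0])  # offset of first result on the page
    return start + num // 2                 # middle of the page

# A click from page 1 (start absent, size 10) counts as position 5;
# a click from page 2 (start=10) counts as position 15.
print(estimated_position("http://www.google.com/search?q=foo"))           # 5
print(estimated_position("http://www.google.com/search?q=foo&start=10"))  # 15
```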
Here is the data, based on the last 505,559 referrers that matched www.google.com:
Resulting page (max result number), Percentage:
10 76.7
20 11.64
30 4.58
40 2.15
50 1.16
60 0.63
70 0.39
80 0.56
90 0.24
100 0.3
110 0.13
120 0.09
130 0.08
140 0.06
150 0.65
160 0.04
170 0.04
180 0.04
190 0.05
200 0.03
210 0.02
220 0.02
230 0.02
240 0.01
250 0.02
260 0.01
270 0.01
280 0.01
290 0.01
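Reading the table cumulatively makes the drop-off even clearer: pages one and two together account for roughly 88% of the recorded clicks. A quick sketch of that running tally (in Python for brevity, using the table's own percentages for pages one through five):

```python
# Cumulative share of clicks by SERP page, from the table above.
pages = [76.7, 11.64, 4.58, 2.15, 1.16]  # percent of clicks, pages 1-5
cumulative = []
total = 0.0
for p in pages:
    total += p
    cumulative.append(round(total, 2))
print(cumulative)  # [76.7, 88.34, 92.92, 95.07, 96.23]
```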
As an aside:
I also noted how many people have changed their SERP result size. Not surprisingly, the bulk run with the default of 10.
The deeper the search goes, the more CPU time Google burns, and therefore the higher the operating costs. I think Google would save thousands of dollars if they changed the default number of results returned from 10 to 8:
roughly interpolating the curve we see here, it looks as though only very, very few people click on results 9 and 10.
Nobody would miss them, and one nuclear power plant could go offline. GoogleGuy: please send me the check, or maybe just +2 my PR. Hey, maybe just give me the AdSense money Google kept after I got kicked out of the program.
Size of result page:
Page size, Percentage:
10 98.82
20 0.18
30 0.13
50 0.23
100 0.6
Here is the Perl code (the UPPERCASE parts you might need to edit):
#!/PATH_TO_YOUR_PERL
use strict;
use warnings;

open (my $in, '<', "PATH_TO_YOUR_APACHE_LOG") or die "unable to open logfile: $!";

my (%scoll, %ncoll);   # clicks per estimated position / per SERP size
my $tnum = 0;          # total google.com referrers seen

while (my $l = <$in>){
    # Combined log format: IP, timestamp, request, status, size, referer, agent.
    # Skip lines that do not match, so stale $1..$7 captures are never reused.
    next unless $l =~ /([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) - - \[(.*) [+-][0-9]{4}\] "([^"]*)" ([0-9]+) ([\-0-9]+) "([^"]*)" "([^"]*)"/;
    my ($address, $time, $request, $state, $size, $referer, $agent)
        = ($1, $2, $3, $4, $5, $6, $7);
    next if $request =~ /^HEAD /;
    $request =~ s/^GET //;
    $request =~ s/HTTP\/1\.[01]//;
    next unless $referer =~ /www\.google\.com/;
    # num= is the SERP size (default 10), start= the offset of the first
    # result on the page; assume the click landed mid-page.
    my $num   = ($referer =~ /num=([0-9]+)/)   ? $1 : 10;
    my $start = ($referer =~ /start=([0-9]+)/) ? $1 : 0;
    $start += $num / 2;
    $scoll{$start}++;
    $ncoll{$num}++;
    $tnum++;
}
close $in;

print "total google referrers: $tnum\n";

# Tally the estimated positions into buckets of 10 and print each
# bucket's share of the total.
my $step = 10;
my $curstep = 0;
my $cura = 0;
foreach my $s (sort { $a <=> $b } keys %scoll){
    while ($s > $curstep){   # 'while', not 'if': a gap in the data may skip buckets
        printf "%d %.2f\n", $curstep, $cura / $tnum * 100;
        $cura = 0;
        $curstep += $step;
    }
    $cura += $scoll{$s};
}
printf "%d %.2f\n", $curstep, $cura / $tnum * 100;   # flush the last bucket

print "size of returned pages\n";
foreach my $s (sort { $a <=> $b } keys %ncoll){
    printf "%d %.2f\n", $s, $ncoll{$s} / $tnum * 100;
}
#edited, and then not tested, hope it still works ...