Forum Moderators: DixonJones
I need:
a) a way to keep a count of how many click-throughs each separate link gets, and
b) a way to ensure I don't 'double charge' when people multi-click a link (I guess I can get round this with IPs) and, more importantly,
how do I ensure I don't count bots spidering the site as click-throughs?!
Hope someone can help me with advice or a script!
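Not from the thread, but here's a minimal sketch of the usual approach: route every outbound link through a redirect script that logs the click before forwarding, and only count one click per (link, IP) within a dedupe window. All names here (the table, the one-hour window, the link IDs) are illustrative assumptions, not a definitive implementation.

```python
import sqlite3
import time

# Dedupe window: repeat clicks from the same IP on the same link within
# this many seconds are not counted again. One hour is an arbitrary choice.
DEDUPE_WINDOW = 3600

def init_db():
    # In-memory DB for illustration; a real tracker would use a file or server DB.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE clicks (link_id TEXT, ip TEXT, clicked_at REAL)")
    return db

def record_click(db, link_id, ip, now=None):
    """Log a click unless the same IP hit the same link recently.
    Returns True if the click was counted (i.e. billable)."""
    now = time.time() if now is None else now
    recent = db.execute(
        "SELECT 1 FROM clicks WHERE link_id=? AND ip=? AND clicked_at>?",
        (link_id, ip, now - DEDUPE_WINDOW)).fetchone()
    if recent:
        return False  # duplicate within the window: don't double charge
    db.execute("INSERT INTO clicks VALUES (?,?,?)", (link_id, ip, now))
    return True

def count_clicks(db, link_id):
    """How many counted click-throughs a given link has."""
    return db.execute(
        "SELECT COUNT(*) FROM clicks WHERE link_id=?", (link_id,)).fetchone()[0]
```

In a live setup the redirect script would call record_click() with the visitor's IP and then issue a 302 to the destination URL, so the count happens transparently.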
For us, we exclude the following user agents when counting up visits to our site. This list might not be completely up to date, though, so do a search for a site that regularly maintains a bot UA list.
where user_agent not in ('Googlebot/2.1 (+http://www.googlebot.com/bot.html)',
'ia_archiver',
'Mozilla/2.0 (compatible; Ask Jeeves/Teoma)',
'TurnitinBot/1.5 [turnitin.com...]',
'Mozilla/5.0 (Slurp/cat; slurp@inktomi.com; [inktomi.com...]',
'FAST-WebCrawler/3.7/FirstPage (atw-crawler at fast dot no;http://fast.no/support/crawler.asp)',
'Mozilla/3.0 (compatible)',
'FAST-WebCrawler/3.6 (atw-crawler at fast dot no; [fast.no...]',
'Teleport Pro/1.29',
'Mercator-2.0',
'sitecheck.internetseer.com (For more info see: [sitecheck.internetseer.com...])',
'Mozilla/3.0 (compatible; Indy Library)',
'larbin_2.6.2 vitalbox1@hotmail.com',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt; DTS Agent',
'KMcrawler',
'FAST-WebCrawler/3.7 (atw-crawler at fast dot no; [fast.no...]',
'Mozilla/3.0 (Slurp/si; slurp@inktomi.com; [inktomi.com...]',
'libwww-perl/5.64',
'Mozilla/2.0 (compatible; T-H-U-N-D-E-R-S-T-O-N-E)',
'Googlebot-Image/1.0 (+http://www.googlebot.com/bot.html)',
'Libby_1.1/libwww-perl/5.65',
'Lynx/2.8.4rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.6b',
'http://www.almaden.ibm.com/cs/crawler [c01]',
'http://www.almaden.ibm.com/cs/crawler [wf55]')
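One caveat with exact-match lists like this: they go stale as soon as a crawler bumps its version string. A common complement (my own sketch, not from the post above) is a case-insensitive substring check on tokens that most crawlers include in their UA. The token list below is illustrative only; maintain it from a regularly updated source.

```python
# Substring-based bot check as a fallback to an exact UA blocklist.
# Tokens are illustrative; most reputable crawlers identify themselves
# with "bot", "crawler", "spider", or a known library/tool name.
BOT_TOKENS = ("bot", "crawler", "spider", "slurp",
              "libwww", "teleport", "ia_archiver")

def looks_like_bot(user_agent):
    """Return True if the UA contains a token commonly used by crawlers."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in BOT_TOKENS)
```

This will never be perfect. Badly behaved bots fake browser UAs, so IP-based dedupe and sanity checks on click patterns are still worth having alongside it.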