I've checked the target domain for a robots.txt that would stop robots from attempting to index the linked page, and there is no robots.txt file. Putting two and two together, I'm deducing that the high click-thru is accounted for by search engine robots. To counter this I plan to write a CGI script that logs the hit and then redirects, and to place that script in a directory that robots.txt disallows.
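For illustration, here is a rough sketch of the kind of logging redirect I have in mind. It's Python, and the script name, log path, and "url" parameter are just placeholders, not anything I've deployed yet:

#!/usr/bin/env python3
# go.py -- hypothetical logging redirect CGI.
# Place it under a path the new robots.txt disallows, e.g.:
#   User-agent: *
#   Disallow: /cgi-bin/
import os
import datetime
from urllib.parse import parse_qs

LOG_FILE = "/var/log/clickthru.log"   # placeholder log location

# Outbound links would point at e.g. /cgi-bin/go.py?url=http://example.com/page
query = parse_qs(os.environ.get("QUERY_STRING", ""))
target = query.get("url", [""])[0]

# Record timestamp, user agent, and destination before redirecting,
# so robot hits can be told apart from human ones later.
with open(LOG_FILE, "a") as log:
    log.write("%s\t%s\t%s\n" % (
        datetime.datetime.utcnow().isoformat(),
        os.environ.get("HTTP_USER_AGENT", "-"),
        target))

# Issue the redirect to the real destination.
print("Status: 302 Found")
print("Location: %s" % target)
print()

One thing I'd probably also have to do is restrict which target URLs the script will redirect to, otherwise it becomes an open redirect anyone can abuse.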
Does anyone employ a similar technique? Any pitfalls that you've experienced which you would be willing to share? Does it sound feasible that robots are accounting for the high click-thru?