I find, however, that Urchin shows nearly twice as many visitors for the week as ClickTracks does.
Have any others noticed such wide differences when running the same data through two different programs and, if so, what have you done to determine reliable results?
Using this kind of software as a general measure of traffic, to see whether there is a trend for your site, is about all it's good for. As a measurement against other sites, the comparison would be highly questionable.
Using 2 or more programs is probably just an exercise in frustration.
Why? Because different programs use differing criteria, so their figures can't be meaningfully compared.
Ideally there should be one industry standard so that traffic figures for all sites worldwide could be compared. In my opinion print magazines already have this sort of standard, and it gives advertisers real insight into the business.
But I know there are lots of issues that make accurate traffic measurement difficult.
One day there will be...
Until that day I use a few tools myself. For quick daily analysis I use the free version of FunnelWeb; for deep analysis of referrer and keyword tracking I use NetTracker, and I'm impressed by the information I can get out of that tool. But indeed, the visitor and pageview figures differ between them...
Of course you run into problems with proxies etc....
We use WebTrends, and they have an optional method to track visitors by installing an ISAPI filter that sets a session variable for each user. We use it, and it's very accurate (supposedly).
A true count is not possible outside the realm of iris scanning or fingerprinting. We do have the industry standard you're asking for here in Scandinavia (Denmark, Sweden, Norway) - it's browser-based and uses web bugs. I have quite a bit of experience with this; sticky me for info if interested.
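For anyone curious how the browser-based/web-bug approach works mechanically: each page carries a tiny image pointing at a measurement server, and every browser that actually renders the page requests that image, so the hit gets logged even when the HTML itself came from a proxy or cache. Here's a rough, purely illustrative Python sketch of the counting side - a minimal 1x1 GIF endpoint that writes one log line per request. This is just to show the idea; it is not the Scandinavian system or any vendor's product.

    # Minimal sketch of a "web bug" counter: serves a 1x1 transparent GIF
    # and logs one line per request. Purely illustrative.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import datetime

    # Standard 43-byte transparent 1x1 GIF
    PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
             b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
             b"\x00\x02\x02D\x01\x00;")

    class PixelHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Record time, client IP, user agent and the page that embedded the pixel
            with open("pixel.log", "a") as log:
                log.write("%s %s %s %s\n" % (
                    datetime.utcnow().isoformat(),
                    self.client_address[0],
                    self.headers.get("User-Agent", "-"),
                    self.headers.get("Referer", "-")))
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(PIXEL)))
            self.end_headers()
            self.wfile.write(PIXEL)

    if __name__ == "__main__":
        HTTPServer(("", 8080), PixelHandler).serve_forever()

The pages would then embed something like <img src="http://counter.example.com/p.gif" width="1" height="1"> (a made-up hostname), so the counter only sees page views rendered by real browsers.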
/claus
In my experience cookie-based counting is the only way to even get close on uniques and visits. IP address/user-agent combos are too inaccurate thanks to AOL and their ilk, whose proxies can rotate a single visitor through several IP addresses within one visit.
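To make the difference concrete, here's a rough Python sketch that counts "uniques" from the same access log two ways - by IP + user-agent combo and by a visitor cookie. It assumes a hypothetical extended log format with the cookie value appended as a final quoted field; the only point is that the two methods can give very different totals for the same traffic.

    # Rough sketch: count "unique visitors" from one log two different ways.
    # Assumes a hypothetical extended combined log format:
    # ip ident user [date] "request" status bytes "referer" "user-agent" "visitor-cookie"
    import re

    LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)" "([^"]*)"')

    ip_ua_uniques = set()
    cookie_uniques = set()

    with open("access.log") as log:
        for line in log:
            match = LINE.match(line)
            if not match:
                continue
            ip, user_agent, cookie = match.groups()
            ip_ua_uniques.add((ip, user_agent))   # roughly what IP/UA-based tools do
            if cookie and cookie != "-":
                cookie_uniques.add(cookie)        # roughly what cookie-based tools do

    print("uniques by IP + user agent:", len(ip_ua_uniques))
    print("uniques by visitor cookie :", len(cookie_uniques))

Run that over a day's log with a lot of AOL/proxy traffic and the two figures can be a long way apart, which is exactly the kind of disparity the original question describes.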
One problem with cookie counting is that robots and spiders usually refuse cookies, so if your site is set up to issue a cookie on every request that arrives without one, a Google crawl could in theory generate a new cookie (and a new "unique") for every page it crawls. That is why you need a user-agent-based exclude.
Another thing to watch with cookie counting is that you have to issue the cookie on every page where the user doesn't already have one. Don't wait until the user reaches the product page to do it, because the majority will never make it there.
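To illustrate both points, here's a rough sketch of the idea as Python/WSGI middleware. The cookie name and the bot list are placeholders, not from any particular analytics package; on IIS you'd do the same thing in an ISAPI filter or your application framework instead.

    # Sketch: issue a visitor cookie on every response where it's missing,
    # but skip known crawlers so Googlebot etc. don't inflate the unique count.
    import uuid
    from http.cookies import SimpleCookie

    BOT_FRAGMENTS = ("googlebot", "slurp", "msnbot", "spider", "crawler")

    class VisitorCookieMiddleware:
        def __init__(self, app, cookie_name="visitor_id"):
            self.app = app
            self.cookie_name = cookie_name

        def __call__(self, environ, start_response):
            user_agent = environ.get("HTTP_USER_AGENT", "").lower()
            is_bot = any(fragment in user_agent for fragment in BOT_FRAGMENTS)

            cookies = SimpleCookie(environ.get("HTTP_COOKIE", ""))
            has_cookie = self.cookie_name in cookies

            def issuing_start_response(status, headers, exc_info=None):
                # Set the cookie on *every* page where it's missing, unless the
                # user agent looks like a crawler (crawlers mostly refuse cookies,
                # so each crawled page would otherwise mint a fresh "visitor").
                if not has_cookie and not is_bot:
                    headers = list(headers) + [
                        ("Set-Cookie", "%s=%s; Path=/" % (self.cookie_name, uuid.uuid4().hex))]
                return start_response(status, headers, exc_info)

            return self.app(environ, issuing_start_response)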
Anyway, my point is that unique counts are very subjective, and if two different tools take different approaches to any of the things listed above, you could easily see the disparity you are talking about.
Try to really get to know your reporting tools and make sure they are as similar as possible in their methodology. We have done that here; the tools come within +/- 5% of each other, and that is an acceptable difference for us.
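For what it's worth, the check itself is nothing fancier than the relative difference between the two counts, e.g.:

    # Quick check: are two tools' visitor counts within an acceptable tolerance?
    def within_tolerance(count_a, count_b, tolerance=0.05):
        return abs(count_a - count_b) / max(count_a, count_b) <= tolerance

    # e.g. 10,000 uniques in one tool vs 10,400 in the other is about 3.8% apart
    print(within_tolerance(10000, 10400))  # True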
[webmasterworld.com...]
It's from last November, but still relevant. It goes into some detail about what the figures from a log file analysis may and may not mean.
Bottom line really is: use one bit of software, and only look at trends.