The basic programs work for most sites and are typically priced under US $1,000. For example:
WebTrends Log Analyzer
If you are running load-balanced servers or dynamic content, you need more power. Expect to pay about US $1,000 - 5,000.
WebTrends Enterprise Suite
If you have a high-traffic website, you might want to invest in one of the pro packages. Expect these to start at around US $20,000.
CommerceTrends from WebTrends
HitList from Accrue Software www.accrue.com (They used to have a moderately priced product that they recently took off their product list. Now they only sell the flagship.)
NetGenesis from [Netanalysis.com]
Traffic analysis is the only way to understand how well your site is doing. Above all, you want to know the quality of traffic that different sources send you.
That way you can determine how to best spend your marketing budget.
Because it (and I am sure other simple programs that avoid the glitz) leaves the customization in your hands and lets you do just about anything with a raw log file, playing with the configs and RTFM is very well rewarded. It takes a day of concentrated work, but once it is set up it works well.
Using a simple program like that also forces you to really understand what results you are getting.
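Since we are talking about rolling up your sleeves with raw logs, here is a minimal Python sketch of the kind of thing the config-and-RTFM approach buys you: tallying referrers straight out of a combined-format access log. The file name and field layout are assumptions; adjust them to whatever your server actually writes.

# Minimal sketch: tally referrers from a raw access log.
# Assumes the common "combined" format; the file name is illustrative.
import re
from collections import Counter

# combined format: host ident user [time] "request" status bytes "referer" "agent"
LINE_RE = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

referrers = Counter()
with open("access.log") as log:          # hypothetical file name
    for line in log:
        m = LINE_RE.match(line)
        if not m:
            continue                     # skip lines in another format
        ref = m.group("referer")
        if ref and ref != "-":           # "-" means no referrer was sent
            referrers[ref] += 1

for ref, hits in referrers.most_common(20):
    print(f"{hits:6d}  {ref}")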
I was hoping to try a freebie first to get a feel for the data. Source referer was my initial interest.
Can anyone point me to a free download of "analog"?
And one of the biggest reasons for this is the way certain providers use proxy servers, AOL being the most obvious.
Does anyone have any methods for dealing with the AOL "problem"? From what I see, when traffic is low, the AOL proxy server hits tend to inflate uniques, since one visitor uses 7 or more IPs. But when traffic gets roaring, I think the AOL proxies deflate the number of uniques.
Some of my clients get a huge percentage of their traffic from AOL, especially since the inclusion of the top GoTo sites. It would be wonderful to get a clearer picture of AOL traffic for them.
I pretty much gave up on untangling this area, but would be very interested in how other people approach it.
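One way to at least bracket the problem is to count uniques twice: once by raw address, and once with every *.proxy.aol.com host collapsed into a single bucket keyed by user-agent. Neither figure is "right", but the real number lies somewhere between them. A rough Python sketch, assuming resolved hostnames in the first log field; the hostname pattern and file name are assumptions:

# Bracket the AOL effect: uniques by raw address vs. with AOL
# proxies collapsed. The truth is somewhere between the two counts.
import re

LINE_RE = re.compile(r'^(?P<host>\S+) .*"(?P<agent>[^"]*)"\s*$')

raw_uniques = set()
collapsed_uniques = set()

with open("access.log") as log:          # hypothetical file name
    for line in log:
        m = LINE_RE.match(line)
        if not m:
            continue
        host, agent = m.group("host"), m.group("agent")
        raw_uniques.add(host)
        if host.endswith(".proxy.aol.com"):
            # one AOL user hops across many proxies; fall back on the
            # user-agent string as a (crude) stand-in for the visitor
            collapsed_uniques.add(("aol-proxy", agent))
        else:
            collapsed_uniques.add(host)

print("uniques by raw address:     ", len(raw_uniques))
print("uniques with AOL collapsed: ", len(collapsed_uniques))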
Analysis software is fine, but most of it (WebTrends, FastStats, et al.) never gives you the smaller picture. I've yet to find any of the big names that I think really do the job. I'm using a mishmash of products now: FastStats for the quick view, WebTrends for the big picture, and my own product for the 'real view'.
I sent you a sticky mail. Don't want to rip on a product in public.
I didn't think this thread would run so long
I honestly don't like any of the software in this field so far. It's all far too complex and unfriendly for operators and owners of simple, small sites.
Is there anything out there, ultra-user-friendly and simple, that's shareware/freeware?
I thought it was worth posting here, as he shows the danger of building too much into the statistics and how easy it is to misinterpret and over-interpret the results. There are also links to some earlier docs and his sources.
We use our logs a lot to assess how we are doing and to identify referrers. But we have learned not to make claims based on them. And it is always good to look at this page before you publish claims based on your logs....
Some of the more juicy bits... (he explains why he makes each of these "bold" claims separately)
You can't tell the identity of your readers.
You can't tell how many visitors you've had.
You can't tell how many visits you've had.
Cookies don't solve these problems.
You can't follow a person's path through your site.
You often can't tell where they entered your site, or where they found out about you from.
You can't tell how long people spent reading each page.
You can't tell how long people spent on your site.
5. Real data. Of course, the important question is how much difference these theoretical difficulties make. In a recent paper (World Wide Web, 2, 29-45 (1999): PDF 228kb), Peter Pirolli and James Pitkow of Xerox Palo Alto Research Center examined this question using a ten-day logfile from the xerox.com web site. One of their most striking conclusions is that different commonly used methods can give very different results. For example, when trying to measure the median length of a visit, they got results from 137 seconds to 629 seconds, depending on exactly what you count as a new visitor or a new visit. As they were looking at a fixed logfile, they didn't consider the effect of server configuration changes such as refusing caching, which would change the results still more.
Web statistics are still informative: it's just important not to slip from "this page has received 30,000 requests" to "30,000 people have read this page." In some sense these problems are not really new to the web -- they are present just as much in print media too. For example, you only know how many magazines you've sold, not how many people have read them. In print media we have learnt to live with these issues, using the data which are available, and it would be better if we did on the web too, rather than making up spurious numbers.
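To make the Pirolli/Pitkow point concrete, here is a small Python illustration (with made-up hit times, not their data) of how the same hit stream yields different visit counts and median visit lengths depending purely on the idle timeout used to split visits:

# Same hits, different "median visit length" depending on the idle
# timeout. Visitor keys and timestamps are invented for illustration.
from statistics import median

# (visitor -> seconds-since-midnight hit times, sorted per visitor)
hits = {
    "a": [0, 40, 90, 700, 760, 2500],
    "b": [10, 400, 1300, 1360, 1420],
}

def visit_lengths(hits_by_visitor, timeout):
    """Split each visitor's hits into visits at gaps > timeout seconds."""
    lengths = []
    for times in hits_by_visitor.values():
        start = prev = times[0]
        for t in times[1:]:
            if t - prev > timeout:
                lengths.append(prev - start)   # close the current visit
                start = t
            prev = t
        lengths.append(prev - start)
    return lengths

for timeout in (300, 600, 1800):   # 5, 10, and 30 minute idle cutoffs
    lens = visit_lengths(hits, timeout)
    print(f"timeout {timeout:4d}s: {len(lens)} visits, "
          f"median length {median(lens):.0f}s")

Even on this toy data the median swings from 30s to nearly 2,000s, which is exactly the kind of spread they reported.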
PS: There is also a clue here as to why the number of AOL referrers could be grossly overestimated by those who just use log statistics without question. I'm sure most of us here know why, but it underlines the need to be wary of claims about how many "unique visitors" you are REALLY getting from AOL.
I work very hard to educate our clients about the relative nature of the statistics we provide. I find that the greatest benefit comes from watching the trends, rather than putting too much faith in any one hard number.
This is a very useful approach, because it orients our partnership towards growth, rather than achievement. The key is choosing the right numbers to watch.
Before I came to web work in 1995, I spent a lot of time in retail, both the supermarket and apparel businesses, and I learned a lot about how to use and mis-use statistics. Even a quarterly financial has a lot of fudge in it when you look close (I call it the Hershey's Constant). And yet, the business is either healthy or not -- maybe growing, maybe maintaining (rare), and maybe dying.
The key is choosing the right numbers to watch, knowing what they represent (and what they don't represent) as precisely as possible, and then watching the trend, rather than the absolute values. And you really don't need to watch more than a handful of stats closely to steer an enterprise well. The rest of them only need an occasional look.
So, with AOL proxy server distortion on the number of uniques, it's not that big a problem if we watch the trend of uniques, rather than try to take the absolute figure to the bank. There is a point beyond which we get no practical return from any more obsessing.
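As a concrete illustration of trend-watching, a tiny Python sketch (the weekly figures are invented) that smooths a noisy uniques series with a moving average and reports only the direction, which is the part worth steering by:

# Watch the trend, not the absolute number: smooth, then compare.
weekly_uniques = [980, 1040, 1110, 990, 1210, 1180, 1320, 1290, 1400]

def moving_average(series, window=4):
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

smoothed = moving_average(weekly_uniques)
for a, b in zip(smoothed, smoothed[1:]):
    print(f"{a:7.1f} -> {b:7.1f}  ({'up' if b > a else 'down'})")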
I think I've said it before, but just visually inspecting the logs (as long as you have a small site) gives a good reality check too!
The biggest mistake, I feel, is people counting spiders, email strippers, internal admin hits, and all the rest that don't really "read a page". I remember one person got really upset when we stripped the AV/Google/local server/other robots/internal search engine indexing out of their logs and found they were getting less than half what they claimed!
Handle with care was my learning....
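For anyone who wants to try that stripping on their own logs, a rough Python sketch of the filtering idea; the bot substrings, internal network prefix, page suffixes, and file name are all assumptions to adapt:

# Strip robots, internal hits, and non-page requests before counting
# "pages read". Assumes the combined log format.
import re

BOT_HINTS = ("googlebot", "scooter", "slurp", "spider", "crawler")
INTERNAL_PREFIX = "192.168."              # hypothetical admin network

LINE_RE = re.compile(r'^(?P<host>\S+) .*"(?:GET|POST) (?P<path>\S+)[^"]*" '
                     r'\d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"')

page_hits = 0
with open("access.log") as log:           # hypothetical file name
    for line in log:
        m = LINE_RE.match(line)
        if not m:
            continue
        host, path, agent = m.group("host", "path", "agent")
        if host.startswith(INTERNAL_PREFIX):
            continue                      # internal admin traffic
        if any(hint in agent.lower() for hint in BOT_HINTS):
            continue                      # known robots
        if not path.endswith((".html", ".htm", "/")):
            continue                      # images, css, etc. aren't "pages"
        page_hits += 1

print("page views after filtering:", page_hits)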
Check your sticky mail. (See menu above the blue bar.)
I can understand a slight difference, but this seems to be an extremely large variation. Any ideas on why there's such a big difference?
I had the same problem moving from WebTrends to IBM SurfAid. SurfAid was much stricter than WebTrends in determining what a visitor was.
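A toy Python example of how two visitor definitions diverge on identical data: one package might count a visitor loosely as a unique IP, another strictly as an IP-plus-user-agent pair with a 30-minute idle timeout. The data and thresholds below are invented for illustration, not taken from either product.

# Same log, two visitor definitions, two very different counts.
hits = [  # (ip, user_agent, minutes-since-midnight)
    ("10.0.0.1", "MSIE 5.0", 0),
    ("10.0.0.1", "MSIE 5.0", 10),
    ("10.0.0.1", "Mozilla/4.7", 12),   # second browser behind one IP
    ("10.0.0.1", "MSIE 5.0", 90),      # returns after a long gap
    ("10.0.0.2", "MSIE 5.5", 15),
]

loose = {ip for ip, _, _ in hits}      # loose: one visitor per IP

strict = 0                              # strict: IP+UA, 30-min timeout
last_seen = {}
for ip, agent, t in sorted(hits, key=lambda h: h[2]):
    key = (ip, agent)
    if key not in last_seen or t - last_seen[key] > 30:
        strict += 1                     # counts as a new visitor
    last_seen[key] = t

print("loose count (unique IPs):          ", len(loose))   # prints 2
print("strict count (IP+UA, 30m timeout): ", strict)       # prints 4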