I've seen an earlier thread re: log analysis but it seemed to be centered on production servers. What I want is to be disconnected from the web while I analyze these files. Suggestions?
Also, some log analyzers try to access the target site and check pages or images or other stuff - that would also require an internet connection.
You don't need it for anything else a log analyzer might do.
NetTracker worked like a charm - downloaded my log files and am now working with them off-line. WebTrends on the other hand...problems with the install. Will have to try again in the morning.
Does Analog show referrals by search engine without having to define search engine URLs to it? E.g., does one have to list the search engines in a file that it uses, say "google.com, google.ca, google.xyz", etc.?
Similar question for spiders. Does one have to list all common spiders and spider sources in order to know spidering details?
Last question: does it show "time spent" on web pages?
But seriously, it allows me to follow even one person through the site, to see what they visited in the order they visited it. Good way to follow Googlebot and learn how well I did. Couldn't find similar in WebTrends - unless I'm not looking in the right place. I'm using NetTracker Pro and WebTrends Reporting Center.
Caching is the main culprit. We did look at this many months ago and concluded that the validity of the data was very suspect, no matter what precautions you took, though the graphs sure looked "pretty" for clients!
Using sessions and click tracking via scripts seems to hold the most promise for tracking paths better but of course does have its downside as well, especially in page loading times and resources required.
"Does Analog show referrals by search engine without having to define search engine URLs to it?"
There's a guy who does this for you, so all you have to do is download a text file when it's updated (Israel Hanukoglu's SearchQuery.txt [science.co.il] file). I've made a few contributions to his list...it's pretty comprehensive and updated at least once a month...
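For what it's worth, the entries in that file are just Analog SEARCHENGINE lines, which tell Analog which referring URLs are search engines and which query-string argument carries the search terms. A couple of lines to give you the flavour (the patterns and argument names here are only illustrative - use the downloaded file or the Analog docs for the real list):

SEARCHENGINE http://*google.*/* q
SEARCHENGINE http://*altavista.*/* q
SEARCHENGINE http://*yahoo.*/* p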
"Does one have to list all common spiders and spider sources in order to know spidering details?"
No, it will simply tell you what spiders have been visiting you...it's up to you to find out what they are. There are some good ROBOTINCLUDE suggestions in this thread [webmasterworld.com]...and I think there are a few more here that I can't immediately find.
"Does it show "time spent" on web pages?"
There may be a way to do this, but I haven't used it.
If the webserver knows that someone on IP A visits index.htm, and 2 seconds later someone on IP A visits page1.htm I am under the impression that the software assumes it is the same person and considers that a gossamer trail. NetTracker does allow me to set a session timeout length so if I choose this I expect it will change results somewhat.
With regards to caching - hmmmm I hadn't thought of that but that's not that important to me. If they use cached pages - no big deal, it means they've visited the site at least once. My pages do have a cache timeout (not perfect I know) and my content is more informational than saleable.
Set up a number of .cfg files, each tweaked for a particular site, and try that - that approach seemed to work.
I have several websites running on my host's computer... How do you use Analog to monitor multiple sites?
I just figured this out. As Woz suggested, I set up a number of .cfg files. However, to keep Analog from still using the main Analog.cfg file I created shortcuts--I'm using Windows, but you can easily do this in anything as long as you can get to a command prompt--for each client website.
In the command line for each shortcut I use the -G flag, which basically tells Analog not to look at its default config file.
Then, say this particular client is Widget Bazaar and I've created a specific widgebaz.cfg file: after the -G flag I add +gwidgebaz.cfg, which basically tells Analog what config file to use (or what to add to the default config file if you didn't include the -G flag).
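So the shortcut target ends up looking something like this (the install path is just an example - point it at wherever your analog.exe actually lives):

"C:\Program Files\analog\analog.exe" -G +gwidgebaz.cfg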
I created Windows shortcuts for each client, using the above format, pointing to the main Analog.exe file so that I don't have to type the appropriate commands at a prompt each time I want to run a specific analysis.
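Each per-site .cfg file then only needs a few lines to point Analog at the right log and output file. A minimal sketch of what a widgebaz.cfg might contain (the log path, site name and file names below are made up for illustration):

LOGFILE C:\logs\widgetbazaar\access.log
HOSTNAME "Widget Bazaar"
HOSTURL http://www.widgetbazaar.com/
OUTFILE widgebaz-report.html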
Hope that's not too long-winded.
"If the webserver knows that someone on IP A visits index.htm, and 2 seconds later someone on IP A visits page1.htm I am under the impression that the software assumes it is the same person and considers that a gossamer trail."
Just beware that the software needs to make many assumptions, so what you're getting is not hard data but rather just a guesstimate.
I've noticed that the IP address of AOL users can change with every request. Also, there can be many people accessing your site from behind a router, so they will all show the same IP. And defining how long someone spent on a page is very suspect - they could have just walked away from the computer or answered the phone.
NetTracker looks nice, btw. Thanks for the tip :)
there are way more important issues
Which is one of the reasons I went with NetTracker. I can create my own reports and filters as well as modify the underlying criteria the program uses to analyze the data. So rather than relying on NetTracker to make the decisions about what is considered a repeat visitor, I can tell it what I consider a repeat visitor.
...oops, just read the charter about no specific software discussions, and deleted a comparison of price and features. A Google search on 'Summary log analysis' will enable you to make your own comparison.
Well, generally these record only actual page views, so there is much less data to analyse afterwards. It also means you don't have to do as much 'cleaning' of the data.
But this approach can be very limiting as well: if you were using images to track newsletters/adverts etc., you've missed out on a lot of potential data.
Anyway, the web analytics tutorial that comes with their software is also available on the site: search Google for 'web analytics tutorial'. Of course it's based on the capabilities of Summary, but it still covers what you can look for in log files:
- how many visitors, and when
- where visitors come from
- what search engines and search phrases are sending the best visitors (i.e. the ones who subscribe, spend money, etc.)
- what visitors do at your site, for instance...
- where they enter and leave from, where they spend longest, which links they follow...
- what browsers they use - etc, etc
And suggests what you might *do* with all this information, which I found very useful.
Also there's a section on how accurate it all is!