The problem you're running into is that most 'hit counters' work either by using JavaScript when the user-agent supports it, or by having the user-agent fetch an image. Most spiders don't process (i.e. execute) JavaScript, and in most cases they also don't fetch inline images when spidering a page for text content and links. Since you may not have logs to review where this pattern is obvious, I'll just pass along that on a spidering run, the robots fetch almost all of my .html pages and none of my images - it's 'signature' behaviour of a crawl.
As for free solutions, I'm not personally aware of any, because the most common way to get stats is to collect them server-side. If your hosting service doesn't provide at least raw server logs, then it may be time to consider what the extra money for a 'full-service' web host will buy you.
An alternative - *if* your host allows/supports it - is to code your own, maybe using Perl, just for example. Not that I'd recommend trying to code a full-featured stats package yourself, but a simple hit-counter-type script based on the HTTP_USER_AGENT header is feasible: if the user-agent string looks like a known spider, skip it; otherwise increment the count. You'd need support for SSI (Server Side Includes) and user-created Perl scripts at a minimum.
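To make the idea concrete, here's a rough sketch of that kind of counter - written in Python rather than the Perl I mentioned, just to show the logic. The bot substrings and the counter-file path are my own illustrative choices, not anything standard:

```python
# Minimal sketch of a HTTP_USER_AGENT-based hit counter.
# The BOT_SIGNATURES list and counter-file path are illustrative assumptions.
import os

# Substrings commonly seen in crawler user-agent strings (not exhaustive).
BOT_SIGNATURES = ("bot", "crawler", "spider", "slurp")

def is_bot(user_agent):
    """Return True if the user-agent string looks like a search-engine spider."""
    ua = (user_agent or "").lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

def count_hit(counter_path, user_agent):
    """Increment the counter file for non-bot visitors; return the new count.

    Returns None for spiders so they don't inflate the count.
    """
    if is_bot(user_agent):
        return None
    try:
        with open(counter_path) as f:
            count = int(f.read().strip() or 0)
    except (FileNotFoundError, ValueError):
        count = 0
    count += 1
    with open(counter_path, "w") as f:
        f.write(str(count))
    return count

if __name__ == "__main__":
    # In a CGI context, the server exposes the header as an environment variable.
    ua = os.environ.get("HTTP_USER_AGENT", "")
    print(count_hit("/tmp/hits.txt", ua))
```

In a real deployment you'd invoke something like this via an SSI directive on each page, and you'd also want file locking, since two simultaneous visitors could otherwise clobber each other's write.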
Jim