- So I wrote a similar logging system using PHP and MySQL.
It captures all that missing traffic, but has an unexpected downside. When I compare the PHP logs to the JS logs, I see that the PHP logs miss all repeated page visits from the same visitor.
I realised that this is probably because those pages are in the client browser's cache, so no request reaches my server and no PHP log entry is made.
Is there a nice way around this?
Thanks for any light you can shed.
There are a couple of meta tags that can be useful in preventing client-side caching:
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
These should prevent the browser from caching the page. While it's not foolproof, you should see an improvement in your numbers.
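Worth noting: real HTTP response headers are generally honoured more reliably than the meta equivalents, since proxies and some browsers ignore http-equiv meta tags. If the pages are generated by PHP, the same directives can be sent with header() calls before any output, so the response carries something like:

```
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
```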
Presumably, the downside to this is that my visitors lose the benefits of caching and see a slower browsing experience, and my server sees more load and bandwidth.
Yeah, I know I want to have my cake and eat it, but you can't blame a guy for trying.
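One way to get most of both: leave the pages themselves cacheable, and have the existing JS logger request its own never-cached URL on every view. A minimal sketch, assuming a hypothetical /log.php endpoint that writes the log entry:

```javascript
// The page stays cacheable, but the logging request carries a unique
// query string, so the hit itself is never served from the cache.
function trackingUrl(endpoint) {
  // Date.now() makes each URL distinct, defeating the browser cache
  return endpoint + "?t=" + Date.now();
}

// In the page (fires even when the HTML came from cache):
//   new Image().src = trackingUrl("/log.php");
```

Since the script runs on every render, including cached ones, the repeated visits show up in the PHP log while visitors keep the speed benefit of caching.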
Not all that spooky. It's probably some proxy filtering it out of the HTTP requests; plenty of so-called "privacy", "anti-spam", and "anti-virus" programs like to do that kind of thing. Since JS is executed directly in the browser, it still has access to the referrer.
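That's also why the JS logger sees it: the browser exposes the referring URL to scripts as document.referrer even when an intermediary strips the Referer header from the forwarded request. A minimal sketch (buildLogUrl and /log.php are hypothetical names, not from this thread):

```javascript
// Build the logging URL from whatever referrer the browser reports.
// An empty string means "none or stripped"; log it as-is rather than guessing.
function buildLogUrl(endpoint, referrer) {
  return endpoint + "?ref=" + encodeURIComponent(referrer || "");
}

// In the page:
//   new Image().src = buildLogUrl("/log.php", document.referrer);
```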