TheMadScientist - 8:06 am on Dec 15, 2012 (gmt 0)
Am I really still up posting? Gotta sleep, but...
If the JS is getting bad info, so is the log file: if a visitor is spoofing their IP address or anything else to the JS, they're spoofing it for the request to the server too.
They're actually based on the same info from the browser (except for browsers not running JS, but we're not concerned with those right now, because we know the 'zombie traffic' triggers JS events). That makes JS the most 'narrowed down' way to extract the most information from these visitors, because more info is available via JS than is available to the server from the request alone. If you tried to go by server logs only, you would not get time on page, and you wouldn't know which visitors are triggering JS events and which are not. All you would see is bounces, with no idea what's really going on or how long a visitor stayed on the single page they visited, because there's no exit time sent to the server to log the way you can get one from JS. So all the server logs would tell you is there are bounces. (IOW: Server logs, in this case, would not answer anything near what JS will, and the raw logs would likely send you on a wild-goose chase.)
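To make the time-on-page point concrete, here's a minimal sketch of how a browser-side script can record something the server log never sees: when the visitor left. The endpoint name `/collect.php` is illustrative, not a real API; this is just one way to do it, assuming a browser that supports `sendBeacon` and the `pagehide` event.

```javascript
// Sketch: measuring time-on-page in the browser, something server
// logs alone cannot record for a single-page visit.
const entered = Date.now();

// Pure helper so the math is easy to reason about.
function secondsOnPage(enteredMs, nowMs) {
  return Math.round((nowMs - enteredMs) / 1000);
}

// Guarded so the helper can also run outside a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('pagehide', () => {
    // sendBeacon queues the report even as the page unloads,
    // giving the server an 'exit time' it would otherwise never get.
    navigator.sendBeacon('/collect.php', JSON.stringify({
      event: 'exit',
      seconds: secondsOnPage(entered, Date.now())
    }));
  });
}
```

Without something like this, a one-page visit shows up in the access log as a single request and nothing more.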
And since we're being a bit 'nit-picky' (not meaning to be too much), they're technically both 'scripts'. One is written in a server-side language, which writes the default info sent to the server by the browser at the time of the initial request to a log file. The other is written in a browser-side language, which gets the same default info (and then some) from the same browser that made the request to the server, and sends that info to a server-side script to be processed.
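To illustrate the "same default info, and then some" point above, here's a hedged sketch of the browser-side half: it reads fields that also arrive in the request headers the server logs (user agent, referrer), plus something only JS can see (screen size), and ships it all to a collector script. The endpoint name `/log-collector.php` and the function name are made up for the example.

```javascript
// Sketch: the browser-side script collects the same default info the
// server already logs, plus extras no request header carries.
function collectVisitorInfo(nav, doc, scr) {
  return {
    userAgent: nav.userAgent,            // same string the server logs
    referrer: doc.referrer,              // same as the Referer header
    screenWidth: scr ? scr.width : null  // extra: JS-only, not in any header
  };
}

// Guarded so the helper can also run outside a browser.
if (typeof navigator !== 'undefined') {
  const payload = collectVisitorInfo(navigator, document, screen);
  navigator.sendBeacon('/log-collector.php', JSON.stringify(payload));
}
```

Which is the point: spoof any of those fields to the JS and you've spoofed them to the server log too, because both read from the same browser.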